Sample dataframe (built with the usual import numpy as np and import pandas as pd):
In [1899]: df = pd.DataFrame({'index': [0, 0, 5, 5, 6, 6, 8, 8],
      ...:                    'table_name': ['f_person'] * 8,
      ...:                    'column_name': ['active', 'actv', 'ssn', 'ssn', 'pl', 'pl', 'prefix', 'prefix'],
      ...:                    'data_type': ['integer', 'integer', 'varchar', 'varchar', 'varchar', 'bigint', 'varchar', 'integer'],
      ...:                    'default': [np.nan, np.nan, np.nan, np.nan, 10, np.nan, np.nan, np.nan],
      ...:                    'max_length': [np.nan, np.nan, 256, 99, 256, 256, 256, 256]})
In [1900]: df = df.set_index('index')
In [1901]: df
Out[1901]:
      table_name column_name data_type  default  max_length
index
0       f_person      active   integer      NaN         NaN
0       f_person        actv   integer      NaN         NaN
5       f_person         ssn   varchar      NaN       256.0
5       f_person         ssn   varchar      NaN        99.0
6       f_person          pl   varchar     10.0       256.0
6       f_person          pl    bigint      NaN       256.0
8       f_person      prefix   varchar      NaN       256.0
8       f_person      prefix   integer      NaN       256.0
As you can see, rows that share an index value differ in at least one column. For example:
- Rows with index 0 differ in column_name.
- Rows with index 5 differ in max_length.
- Rows with index 6 differ in both data_type and default.
- Rows with index 8 differ in data_type.
Expected Output:
[
    {
        0: {'column_name': ['active', 'actv']},
        5: {'max_length': [256, 99]},
        6: {'data_type': ['varchar', 'bigint'], 'default': [10, np.nan]},
        8: {'data_type': ['varchar', 'integer']}
    }
]
This is part of a bigger problem. I've solved it up to this point, but I'm not sure how to proceed further. Any ideas?
CodePudding user response:
Don't set the index of df; just run this:
output = [(
    df.groupby('index')
      .apply(lambda data: {col: data[col].unique().tolist()
                           for col in data.columns
                           if len(data[col].unique()) > 1})
      .to_dict()
)]
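For the sample frame above this should produce the expected structure, except that numeric values come back as floats wherever the column contains NaN:
print(output)
# [{0: {'column_name': ['active', 'actv']},
#   5: {'max_length': [256.0, 99.0]},
#   6: {'data_type': ['varchar', 'bigint'], 'default': [10.0, nan]},
#   8: {'data_type': ['varchar', 'integer']}}]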
CodePudding user response:
Here is a solution similar to @Riccardo Bucco's, using Series.nunique; because there are always exactly 2 values per group, the output is simply converted to a list:
f = lambda x: {c: x[c].tolist() for c in x.columns if x[c].nunique(dropna=False) != 1}
d = df.groupby('index').apply(f).to_dict()
print (d)
# {0: {'column_name': ['active', 'actv']},
# 5: {'max_length': [256.0, 99.0]},
# 6: {'data_type': ['varchar', 'bigint'], 'default': [10.0, nan]},
# 8: {'data_type': ['varchar', 'integer']}}
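The dropna=False is what makes a NaN count as a distinct value, so a pair like [10.0, NaN] (index 6's default column) is flagged as a difference; with the default dropna=True it would be silently skipped. A minimal illustration, assuming the usual pandas/numpy imports:
pd.Series([10.0, np.nan]).nunique()              # 1 -- NaN is dropped
pd.Series([10.0, np.nan]).nunique(dropna=False)  # 2 -- NaN counts as a value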
Solution for the case where every index value has exactly 2 rows: m marks the second occurrence of each index, m1 compares the two occurrences cell by cell (NaN is filled first so that two NaNs compare equal), and the agg lambda pads with np.nan because stack drops the NaN side of a differing pair:
m = df.index.duplicated()
m1 = df.fillna('miss')[m].ne(df.fillna('miss')[~m])
s = (df.where(m1)
       .stack()
       .groupby(level=[0, 1])
       .agg(lambda x: list(x) if len(x) == 2 else [*x, np.nan]))
print (s)
index
0      column_name        [active, actv]
5      max_length          [256.0, 99.0]
6      data_type        [varchar, bigint]
       default              [10.0, nan]
8      data_type       [varchar, integer]
dtype: object
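For reference, the intermediate mask m1 works out to the boolean frame below for the sample data (a rough illustration; True marks the cells that differ within each pair, and df.where(m1) then keeps only those cells before stacking):
print(m1)
#        table_name  column_name  data_type  default  max_length
# index
# 0           False         True      False    False       False
# 5           False        False      False    False        True
# 6           False        False       True     True       False
# 8           False        False       True    False       False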
d = {level: s.xs(level).to_dict() for level in s.index.levels[0]}
print (d)
{0: {'column_name': ['active', 'actv']},
 5: {'max_length': [256.0, 99.0]},
 6: {'data_type': ['varchar', 'bigint'],
     'default': [10.0, nan]},
 8: {'data_type': ['varchar', 'integer']}}
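If you prefer not to reach into s.index.levels, an equivalent (purely stylistic) way to build the nested dict is to group the Series by its first index level; this is just a sketch using the same s as above:
d = {idx: grp.droplevel(0).to_dict() for idx, grp in s.groupby(level=0)}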