I am processing a dataset coming in from a CSV file, and many rows are duplicates of each other except for a single column.
Here is an example:
Pandas(Index=457, id='ABC1', type='factory', name='ABC Factory', country='GB', machine='X6754')
Pandas(Index=458, id='ABC1', type='factory', name='ABC Factory', country='GB', machine='ZHG89')
Is it possible to compact these down into a single record in the dataframe? I want to convert the result to JSON, so ideally it would look like:
Pandas(Index=458, id='ABC1', type='factory', name='ABC Factory', country='GB', machines=['X6754', 'ZHG89'])
CodePudding user response:
I suppose you can use groupby and aggregate the differing column into a list:
common_cols = ['id', 'type', 'name', 'country']
out = df.groupby(common_cols, as_index=False).agg({'machine': list})
print(out)
# Output
id type name country machine
0 ABC1 factory ABC Factory GB [X6754, ZHG89]
Setup:
>>> df
id type name country machine
457 ABC1 factory ABC Factory GB X6754
458 ABC1 factory ABC Factory GB ZHG89
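Since the question also asks for JSON output with a plural machines key, here is a self-contained sketch of the full pipeline. The sample DataFrame is reconstructed from the rows shown in the question; the named aggregation renames machine to machines in one step, and to_json(orient='records') produces one JSON object per grouped row.

```python
import pandas as pd

# Sample data mirroring the two rows in the question
df = pd.DataFrame(
    {
        "id": ["ABC1", "ABC1"],
        "type": ["factory", "factory"],
        "name": ["ABC Factory", "ABC Factory"],
        "country": ["GB", "GB"],
        "machine": ["X6754", "ZHG89"],
    },
    index=[457, 458],
)

common_cols = ["id", "type", "name", "country"]

# Collapse duplicate rows: the 'machine' values are gathered into a list
# and the result column is renamed to the plural 'machines'
out = df.groupby(common_cols, as_index=False).agg(machines=("machine", list))

# 'records' orientation yields a JSON array with one object per row
json_str = out.to_json(orient="records")
print(json_str)
```

Note that groupby drops the original row index (457/458), so the Index=458 shown in the desired output would not survive; if you need it, keep it as an extra aggregated column before grouping.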