I have a JSON example which I would like to flatten into a pandas DataFrame. So far I have applied some methods I wrote myself, but I wondered whether there is a better/shorter solution to this problem.
JSON example:
{
    "documentName": "test1.json",
    "time": "2020-10-10T08:00:00Z",
    "data": [
        {
            "name": "john",
            "scores": [
                {"event": "one", "score": 10},
                {"event": "two", "score": 10},
                {"event": "three", "score": 10}
            ]
        },
        {
            "name": "mary",
            "scores": [
                {"event": "one", "score": 10},
                {"event": "two", "score": 5}
            ]
        },
        {
            "name": "hope",
            "scores": []
        }
    ]
}
Desired output DataFrame:
index | documentName | time | name | one | two | three |
---|---|---|---|---|---|---|
0 | test1.json | 2020-10-10T08:00:00Z | john | 10 | 10 | 10 |
1 | test1.json | 2020-10-10T08:00:00Z | mary | 10 | 5 | Null |
2 | test1.json | 2020-10-10T08:00:00Z | hope | Null | Null | Null |
So the event names become columns, filled with the matching scores. In this example there are three events, but it would be a huge plus if the number and names of the events could be determined dynamically rather than being fixed.
For now I have used the following methods:
import pandas as pd

def object_to_columns(df_row, column):
    if isinstance(df_row[column], dict):
        for key, value in df_row[column].items():
            column_name = "{}-{}".format(column.lower(), key.lower())
            df_row[column_name] = value
    return df_row

def list_of_objects_to_columns(df_row, column):
    if isinstance(df_row[column], list):
        for item in df_row[column]:
            column_name = f"{item['event']}"
            df_row[column_name] = item['score']
    return df_row

with open("test1.json") as file:
    df = pd.read_json(file)
df = df.apply(object_to_columns, column="data", axis=1)
df = df.apply(list_of_objects_to_columns, column="data-scores", axis=1)
### CODE TO REMOVE UNUSED COLUMNS AND RENAMING ###
Ideas which are better, cleaner, faster?
CodePudding user response:
A more straightforward way is to use json_normalize, but you lose the information about 'hope':
import pandas as pd
import json

with open("data.json") as file:
    data = json.load(file)

out = pd.json_normalize(data, ['data', 'scores'],
                        meta=['documentName', 'time', ['data', 'name']]) \
        .pivot(index=['documentName', 'time', 'data.name'],
               columns='event', values='score').reset_index()
Output:
>>> out
event documentName time data.name one three two
0 test1.json 2020-10-10T08:00:00Z john 10.0 10.0 10.0
1 test1.json 2020-10-10T08:00:00Z mary 10.0 NaN 5.0
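Another way to keep names without scores, while still using that first json_normalize call, is to pivot the flattened scores and then left-merge the full name list back in. This is only a sketch; the question's JSON is inlined here (instead of reading `data.json`) so it runs standalone:

```python
import pandas as pd

# Same structure as the question's test1.json, inlined for a runnable sketch.
data = {
    "documentName": "test1.json",
    "time": "2020-10-10T08:00:00Z",
    "data": [
        {"name": "john", "scores": [{"event": "one", "score": 10},
                                    {"event": "two", "score": 10},
                                    {"event": "three", "score": 10}]},
        {"name": "mary", "scores": [{"event": "one", "score": 10},
                                    {"event": "two", "score": 5}]},
        {"name": "hope", "scores": []},
    ],
}

# Flatten the per-event scores; names with no scores drop out at this step.
scores = pd.json_normalize(data, ['data', 'scores'],
                           meta=['documentName', 'time', ['data', 'name']])
wide = scores.pivot(index=['documentName', 'time', 'data.name'],
                    columns='event', values='score')

# Rebuild the full name list from the 'data' level and left-join it against
# the pivoted scores, so names without scores ('hope') return as all-NaN rows.
names = pd.json_normalize(data, 'data', meta=['documentName', 'time'])
out = (names[['documentName', 'time', 'name']]
       .merge(wide, left_on=['documentName', 'time', 'name'],
              right_index=True, how='left'))
```

The merge keys on the left are matched positionally against the three levels of `wide`'s MultiIndex, so the differing label (`name` vs. `data.name`) is not a problem.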
Update: another option that keeps the 'hope' row:
import numpy as np

with open("data.json") as file:
    data = json.load(file)

out = pd.json_normalize(data, 'data', meta=['documentName', 'time']) \
        .explode('scores', ignore_index=True)
out[['event', 'score']] = out.pop('scores').dropna() \
                             .agg(lambda x: pd.Series(x.values()))
out = out.pivot(index=['documentName', 'time', 'name'],
                columns='event', values='score') \
        .reset_index().drop(columns=np.nan)
Output:
>>> out
event documentName time name one three two
0 test1.json 2020-10-10T08:00:00Z hope NaN NaN NaN
1 test1.json 2020-10-10T08:00:00Z john 10.0 10.0 10.0
2 test1.json 2020-10-10T08:00:00Z mary 10.0 NaN 5.0
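Since the question also asks for something shorter, it is worth noting that a few lines of plain Python produce the desired frame directly: build one row dict per name, spreading each score list into event-named keys, and let pandas fill the missing events with NaN. A sketch, again with the question's JSON inlined so it runs standalone:

```python
import pandas as pd

# Same structure as the question's test1.json, inlined for a runnable sketch.
data = {
    "documentName": "test1.json",
    "time": "2020-10-10T08:00:00Z",
    "data": [
        {"name": "john", "scores": [{"event": "one", "score": 10},
                                    {"event": "two", "score": 10},
                                    {"event": "three", "score": 10}]},
        {"name": "mary", "scores": [{"event": "one", "score": 10},
                                    {"event": "two", "score": 5}]},
        {"name": "hope", "scores": []},
    ],
}

rows = [
    {"documentName": data["documentName"],
     "time": data["time"],
     "name": entry["name"],
     # Spread each score into its own column; absent events stay NaN.
     **{s["event"]: s["score"] for s in entry["scores"]}}
    for entry in data["data"]
]
df = pd.DataFrame(rows)
```

The event columns are discovered dynamically from whatever appears in `scores`, in order of first appearance, so nothing is hard-coded.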