I have a dataframe with a huge number of rows, and I want to apply a conditional groupby-sum to it.
Here is an example of my dataframe and code:
import pandas as pd

data = {'Case': [1, 1, 1, 1, 1, 1],
        'Id': [1, 1, 1, 1, 2, 2],
        'Date1': ['2020-01-01', '2020-01-01', '2020-02-01', '2020-02-01', '2020-01-01', '2020-01-01'],
        'Date2': ['2020-01-01', '2020-02-01', '2020-01-01', '2020-02-01', '2020-01-01', '2020-02-01'],
        'Quantity': [50, 100, 150, 20, 30, 35]}
df = pd.DataFrame(data)
df['Date1'] = pd.to_datetime(df['Date1'])
df['Date2'] = pd.to_datetime(df['Date2'])

sum_list = []
for d in df['Date1'].unique():
    # line 1
    temp = df.groupby(['Case','Id']).apply(lambda x: x[(x['Date2'] == d) & (x['Date1'] < d)]['Quantity'].sum()).rename('sum').to_frame()
    # line 2
    temp['Date'] = d
    sum_list.append(temp)
output = pd.concat(sum_list, axis=0).reset_index()
When I apply this for loop to the real dataframe, it's extremely slow. I want to find a better way to do this conditional groupby-sum operation. Here are my questions:
- Is a for loop a good method to do what I need here?
- Are there better ways to replace line 1 inside the for loop?
- I feel line 2 inside the for loop is also time-consuming; how should I improve it?
Thanks for your help.
CodePudding user response:
`apply` is the slow one. Avoid it as much as you can.
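To illustrate the gap on a toy frame (this snippet is an editorial illustration, not part of the answer): `groupby.apply` invokes a Python-level function once per group, while the built-in aggregation runs in optimized native code and produces the same result.

```python
import pandas as pd

df = pd.DataFrame({"g": [1, 1, 2, 2], "v": [10, 20, 30, 40]})

# Python-level function invoked once per group
slow = df.groupby("g")["v"].apply(lambda s: s.sum())

# Built-in aggregation, runs in compiled code
fast = df.groupby("g")["v"].sum()

assert slow.tolist() == fast.tolist() == [30, 70]
```

On a frame with many groups the difference can be orders of magnitude.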
I tested this with your small snippet and it gives the correct answer. You need to test more thoroughly with your real data:
case = df["Case"].unique()
id_ = df["Id"].unique()
d = df["Date1"].unique()
index = pd.MultiIndex.from_product([case, id_, d], names=["Case", "Id", "Date"])
# Sum only rows whose Date2 belong to a specific list of dates
# This is equivalent to `x['Date2'] == d` in your original code
cond = df["Date2"].isin(d)
tmp = df[cond].groupby(["Case", "Id", "Date1", "Date2"], as_index=False).sum()
# Select only those sums where Date1 < Date2 and sum again
# This takes care of the `x['Date1'] < d` condition
cond = tmp["Date1"] < tmp["Date2"]
# Restrict the sum to Quantity so the leftover Date1 column is not aggregated
# (newer pandas versions raise a TypeError when summing datetime columns)
output = tmp[cond].groupby(["Case", "Id", "Date2"])[["Quantity"]].sum().reindex(index, fill_value=0).reset_index()
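Run end-to-end on the sample frame from the question (with the second sum restricted to `Quantity`, since newer pandas versions refuse to sum the leftover datetime column), this reproduces the loop's result, with the sum column named `Quantity` rather than `sum`:

```python
import pandas as pd

df = pd.DataFrame({
    "Case": [1, 1, 1, 1, 1, 1],
    "Id": [1, 1, 1, 1, 2, 2],
    "Date1": pd.to_datetime(["2020-01-01", "2020-01-01", "2020-02-01",
                             "2020-02-01", "2020-01-01", "2020-01-01"]),
    "Date2": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-01-01",
                             "2020-02-01", "2020-01-01", "2020-02-01"]),
    "Quantity": [50, 100, 150, 20, 30, 35],
})

d = df["Date1"].unique()
index = pd.MultiIndex.from_product(
    [df["Case"].unique(), df["Id"].unique(), d], names=["Case", "Id", "Date"]
)

# One vectorized pass: pre-aggregate, filter on Date1 < Date2, aggregate again
tmp = df[df["Date2"].isin(d)].groupby(
    ["Case", "Id", "Date1", "Date2"], as_index=False
).sum()
out = (
    tmp[tmp["Date1"] < tmp["Date2"]]
    .groupby(["Case", "Id", "Date2"])[["Quantity"]]
    .sum()
    .reindex(index, fill_value=0)   # fill missing (Case, Id, Date) combos with 0
    .reset_index()
)

assert out["Quantity"].tolist() == [0, 100, 0, 35]
```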
CodePudding user response:
Another solution:
x = df.groupby(["Case", "Id", "Date1"], as_index=False).apply(
    lambda x: x.loc[x["Date1"] < x["Date2"], "Quantity"].sum()
)
print(
    x.pivot(index=["Case", "Id"], columns="Date1", values=None)
    .fillna(0)
    .melt(ignore_index=False)
    .drop(columns=[None])
    .reset_index()
    .rename(columns={"Date1": "Date", "value": "sum"})
)
Prints:
Case Id Date sum
0 1 1 2020-01-01 100.0
1 1 2 2020-01-01 35.0
2 1 1 2020-02-01 0.0
3 1 2 2020-02-01 0.0
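As an aside, the per-group `apply` in this answer can itself be replaced by masking `Quantity` first and then doing a single plain groupby sum; this sketch (editorial, not the answerer's code) reproduces the same intermediate `x`:

```python
import pandas as pd

df = pd.DataFrame({
    "Case": [1, 1, 1, 1, 1, 1],
    "Id": [1, 1, 1, 1, 2, 2],
    "Date1": pd.to_datetime(["2020-01-01", "2020-01-01", "2020-02-01",
                             "2020-02-01", "2020-01-01", "2020-01-01"]),
    "Date2": pd.to_datetime(["2020-01-01", "2020-02-01", "2020-01-01",
                             "2020-02-01", "2020-01-01", "2020-02-01"]),
    "Quantity": [50, 100, 150, 20, 30, 35],
})

# Zero out quantities that fail the condition, then one vectorized sum per group
masked = df.assign(Quantity=df["Quantity"].where(df["Date1"] < df["Date2"], 0))
x = masked.groupby(["Case", "Id", "Date1"], as_index=False)["Quantity"].sum()

assert x["Quantity"].tolist() == [100, 0, 35]
```

The pivot/melt step from the answer is still needed afterwards to materialize the missing (Case, Id, Date) combinations as zeros.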
CodePudding user response:
One option is a double merge and a groupby:
date = pd.Series(df.Date1.unique(), name='Date')
step1 = df.merge(date, left_on = 'Date2', right_on = 'Date', how = 'outer')
step2 = step1.loc[step1.Date1 < step1.Date]
step2 = step2.groupby(['Case', 'Id', 'Date']).agg(sum=('Quantity','sum'))
(df
 .loc[:, ['Case', 'Id', 'Date2']]
 .drop_duplicates()
 .rename(columns={'Date2': 'Date'})
 .merge(step2, how='left', on=['Case', 'Id', 'Date'])
 .fillna({'sum': 0}, downcast='infer')
)
Case Id Date sum
0 1 1 2020-01-01 0
1 1 1 2020-02-01 100
2 1 2 2020-01-01 0
3 1 2 2020-02-01 35