I have two DataFrames:
df_1:

date        id  value
2021-01-01  A1    100
2021-01-01  A1    200
2021-01-01  A1    300
2021-01-02  A1    100
2021-01-02  A1    200
2021-01-03  A1    500
2021-01-03  A1    800
df_2:

date        id  value_to_add
2021-01-01  A1           150
2021-01-03  A1           350
I am trying to maintain the structure of df_1 and add value_to_add only to the first occurrence of each ['date', 'id'] pair during the merge, so that after filling NaN (and every occurrence but the first) with 0, the end result looks like this:
date        id  value  value_to_add
2021-01-01  A1    100           150
2021-01-01  A1    200             0  # 0 because the 150 has already been added
2021-01-01  A1    300             0
2021-01-02  A1    100             0  # 0 because there is no value_to_add for this date
2021-01-02  A1    200             0
2021-01-03  A1    500           350
2021-01-03  A1    800             0  # 0 because the 350 has already been added
My first thought was to drop duplicates on the ['date', 'id'] subset and then merge df_2 to it, but I am not sure how I would get back to the original structure of df_1.
So the problem is: merging on only the first occurrence of the keys during a pd.merge operation. I was not able to find anything on this topic and frankly am not sure how I could achieve it.
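For reference, the two frames shown above can be rebuilt with this minimal setup, so the answers below are runnable as-is:

import pandas as pd

# Sample data matching the frames shown in the question.
df_1 = pd.DataFrame({
    'date': ['2021-01-01', '2021-01-01', '2021-01-01', '2021-01-02',
             '2021-01-02', '2021-01-03', '2021-01-03'],
    'id': ['A1'] * 7,
    'value': [100, 200, 300, 100, 200, 500, 800],
})
df_2 = pd.DataFrame({
    'date': ['2021-01-01', '2021-01-03'],
    'id': ['A1', 'A1'],
    'value_to_add': [150, 350],
})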
CodePudding user response:
You can filter out duplicated rows with DataFrame.duplicated using an inverted mask, and use Index.union so the new columns added by the merge are not removed:
# Assign the merged result only to the first occurrence of each ['date', 'id']
# pair; Index.union keeps the new value_to_add column in the column indexer.
df_1.loc[~df_1.duplicated(['date', 'id']),
         df_1.columns.union(df_2.columns)] = df_1.merge(df_2, how='left')
df_1 = df_1.fillna(0)
print(df_1)
         date  id  value  value_to_add
0  2021-01-01  A1    100         150.0
1  2021-01-01  A1    200           0.0
2  2021-01-01  A1    300           0.0
3  2021-01-02  A1    100           0.0
4  2021-01-02  A1    200           0.0
5  2021-01-03  A1    500         350.0
6  2021-01-03  A1    800           0.0
Another idea uses a helper counter column: number the rows within each ['date', 'id'] group with cumcount, give df_2 a matching counter of 0, and merge on it so only the first row of each group gets a match:
df_1 = df_1.assign(g=df_1.groupby(['date', 'id']).cumcount()).merge(df_2.assign(g=0), how='left')
df_1 = df_1.drop(columns='g').fillna(0)  # drop(columns=...) instead of the removed positional axis argument
print(df_1)
         date  id  value  value_to_add
0  2021-01-01  A1    100         150.0
1  2021-01-01  A1    200           0.0
2  2021-01-01  A1    300           0.0
3  2021-01-02  A1    100           0.0
4  2021-01-02  A1    200           0.0
5  2021-01-03  A1    500         350.0
6  2021-01-03  A1    800           0.0
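For completeness, the drop-duplicates idea from the question also works: merge df_2 onto the deduplicated rows, then align back to the original shape by index. A sketch, assuming a fresh copy of df_1 from the setup above:

# Merge onto the first occurrence of each ['date', 'id'] pair,
# then broadcast back to all rows via the original index.
dedup = df_1.drop_duplicates(['date', 'id'])
merged = dedup.merge(df_2, on=['date', 'id'], how='left')
merged.index = dedup.index  # merge resets the index; restore the original labels
df_1['value_to_add'] = merged['value_to_add'].reindex(df_1.index).fillna(0)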
CodePudding user response:
import numpy as np

# Keep value_to_add only on the first row of each (date, id) index group.
s = df_1.set_index(['date', 'id']).join(df_2.set_index(['date', 'id']))
s = s.assign(value_to_add=np.where(~s.index.duplicated(keep='first'), s['value_to_add'], np.nan)).fillna(0)
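Here ~s.index.duplicated(keep='first') is True only for the first row of each ['date', 'id'] group, so the mask stays correct even when different groups happen to carry the same value_to_add, mirroring the DataFrame.duplicated mask in the first answer.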