I have a dataframe, df1, with two columns (ST and ET) giving the start and end time of each task. I have another dataframe, df2, with two columns giving a time and the stock available at that time. I want to add a column to df1 named max_stock that holds the maximum of the stock values in df2 over the time range [ST, ET]. For instance, the first task has start time 7/11/2021 1:00 and end time 7/11/2021 2:00, so its max_stock is the maximum of the stock values at 7/11/2021 1:00, 7/11/2021 1:30, and 7/11/2021 2:00, i.e. max(10, 26, 48) = 48.
df1
ST ET
7/11/2021 1:00 7/11/2021 2:00
7/11/2021 2:00 7/11/2021 3:00
7/11/2021 3:00 7/11/2021 4:00
7/11/2021 4:00 7/11/2021 5:00
7/11/2021 5:00 7/11/2021 6:00
7/11/2021 6:00 7/11/2021 7:00
7/11/2021 7:00 7/11/2021 8:00
7/11/2021 8:00 7/11/2021 9:00
7/11/2021 9:00 7/11/2021 10:00
df2
Time stock
7/11/2021 1:00 10
7/11/2021 1:30 26
7/11/2021 2:00 48
7/11/2021 2:30 35
7/11/2021 3:00 32
7/11/2021 3:30 80
7/11/2021 4:00 31
7/11/2021 4:30 81
7/11/2021 5:00 65
7/11/2021 5:30 83
7/11/2021 6:00 40
7/11/2021 6:30 84
7/11/2021 7:00 41
7/11/2021 7:30 15
7/11/2021 8:00 65
7/11/2021 8:30 18
7/11/2021 9:00 80
7/11/2021 9:30 12
7/11/2021 10:00 5
Required df
ST ET max_stock
7/11/2021 1:00 7/11/2021 2:00 48.00
7/11/2021 2:00 7/11/2021 3:00 48.00
7/11/2021 3:00 7/11/2021 4:00 80.00
7/11/2021 4:00 7/11/2021 5:00 81.00
7/11/2021 5:00 7/11/2021 6:00 83.00
7/11/2021 6:00 7/11/2021 7:00 84.00
7/11/2021 7:00 7/11/2021 8:00 65.00
7/11/2021 8:00 7/11/2021 9:00 80.00
7/11/2021 9:00 7/11/2021 10:00 80.00
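For a reproducible setup, the two sample frames can be built like this (a sketch: column names and values are taken from the tables above, with the timestamps parsed as proper datetimes, reading the dates as 11 July 2021):

```python
import pandas as pd

# df1: nine hourly task windows from 1:00 to 10:00
starts = pd.date_range('2021-07-11 01:00', '2021-07-11 09:00', freq='h')
df1 = pd.DataFrame({'ST': starts, 'ET': starts + pd.Timedelta(hours=1)})

# df2: stock observed every 30 minutes
times = pd.date_range('2021-07-11 01:00', '2021-07-11 10:00', freq='30min')
stock = [10, 26, 48, 35, 32, 80, 31, 81, 65, 83,
         40, 84, 41, 15, 65, 18, 80, 12, 5]
df2 = pd.DataFrame({'Time': times, 'stock': stock})
```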
CodePudding user response:
One option is conditional_join from pyjanitor, which supports non-equi (range) join conditions; join each Time in df2 to the tasks whose range contains it, then group and aggregate:
# pip install pyjanitor
import pandas as pd
import janitor
(df1.conditional_join(
        df2,
        ('ST', 'Time', '<='),
        ('ET', 'Time', '>='))
    .groupby(['ST', 'ET'], as_index=False)
    .stock
    .max()
)
ST ET stock
0 2021-07-11 01:00:00 2021-07-11 02:00:00 48
1 2021-07-11 02:00:00 2021-07-11 03:00:00 48
2 2021-07-11 03:00:00 2021-07-11 04:00:00 80
3 2021-07-11 04:00:00 2021-07-11 05:00:00 81
4 2021-07-11 05:00:00 2021-07-11 06:00:00 83
5 2021-07-11 06:00:00 2021-07-11 07:00:00 84
6 2021-07-11 07:00:00 2021-07-11 08:00:00 65
7 2021-07-11 08:00:00 2021-07-11 09:00:00 80
8 2021-07-11 09:00:00 2021-07-11 10:00:00 80
Another option is a cartesian (cross) join, filtering afterwards; for large dataframes this can be memory-inefficient:
(df1.merge(df2, how='cross')
    .query('ST <= Time <= ET')
    .groupby(['ST', 'ET'], as_index=False)
    .stock
    .max()
)
ST ET stock
0 2021-07-11 01:00:00 2021-07-11 02:00:00 48
1 2021-07-11 02:00:00 2021-07-11 03:00:00 48
2 2021-07-11 03:00:00 2021-07-11 04:00:00 80
3 2021-07-11 04:00:00 2021-07-11 05:00:00 81
4 2021-07-11 05:00:00 2021-07-11 06:00:00 83
5 2021-07-11 06:00:00 2021-07-11 07:00:00 84
6 2021-07-11 07:00:00 2021-07-11 08:00:00 65
7 2021-07-11 08:00:00 2021-07-11 09:00:00 80
8 2021-07-11 09:00:00 2021-07-11 10:00:00 80
Another option is an IntervalIndex (a longer process here, since the intervals overlap: both endpoints are closed, so a boundary time such as 2:00 falls into two intervals):
box = pd.IntervalIndex.from_arrays(df1.ST, df1.ET, closed='both')
df1.index = box
# create a temporary Series mapping each time in df2
# to the interval(s) that contain it
temp = (df2.Time
           .apply(lambda x: box[box.get_loc(x)])
           .explode(ignore_index=False)
        )
temp.name = 'interval'
# lump back into the main dataframe (df2)
temp = pd.concat([df2, temp], axis=1)
# aggregate: maximum stock per interval
temp = temp.groupby('interval').stock.max()
# join back to df1 to get the final output
df1.join(temp).reset_index(drop=True)
ST ET stock
0 2021-07-11 01:00:00 2021-07-11 02:00:00 48
1 2021-07-11 02:00:00 2021-07-11 03:00:00 48
2 2021-07-11 03:00:00 2021-07-11 04:00:00 80
3 2021-07-11 04:00:00 2021-07-11 05:00:00 81
4 2021-07-11 05:00:00 2021-07-11 06:00:00 83
5 2021-07-11 06:00:00 2021-07-11 07:00:00 84
6 2021-07-11 07:00:00 2021-07-11 08:00:00 65
7 2021-07-11 08:00:00 2021-07-11 09:00:00 80
8 2021-07-11 09:00:00 2021-07-11 10:00:00 80
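For comparison, the same result can be computed without any join at all, using a per-row boolean mask over df2. This plain-pandas sketch is slower for large frames but easy to read; the helper name max_stock_between and the two-row mini frames are hypothetical, for illustration only:

```python
import pandas as pd

def max_stock_between(st, et, times, stock):
    """Maximum stock observed in the closed interval [st, et]."""
    mask = (times >= st) & (times <= et)
    return stock[mask].max()

# hypothetical mini versions of df1/df2, cut down from the question's data
df1 = pd.DataFrame({
    'ST': pd.to_datetime(['2021-07-11 01:00', '2021-07-11 02:00']),
    'ET': pd.to_datetime(['2021-07-11 02:00', '2021-07-11 03:00']),
})
df2 = pd.DataFrame({
    'Time': pd.to_datetime(['2021-07-11 01:00', '2021-07-11 01:30',
                            '2021-07-11 02:00', '2021-07-11 02:30',
                            '2021-07-11 03:00']),
    'stock': [10, 26, 48, 35, 32],
})

df1['max_stock'] = [
    max_stock_between(st, et, df2['Time'], df2['stock'])
    for st, et in zip(df1['ST'], df1['ET'])
]
# first row: max(10, 26, 48) = 48; second row: max(48, 35, 32) = 48
```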