I am stuck on the following problem: I have a pandas DataFrame (df) containing values at 15-minute timesteps, like this:
                           value
2018-12-28 01:00:00+01:00      5
2018-12-28 01:15:00+01:00      4
2018-12-28 01:30:00+01:00      2
2018-12-28 01:45:00+01:00      1
2018-12-28 02:00:00+01:00      2
...
2021-12-07 23:45:00+01:00      4
2021-12-08 00:00:00+01:00      3
2021-12-08 00:15:00+01:00      1
2021-12-08 00:30:00+01:00      2
2021-12-08 00:45:00+01:00      2
I want to add an extra column to this dataframe showing, for each row, the average of 'value' at the same clock time over the previous week. In other words, for the timestep '2021-12-08 00:15:00+01:00', I want this column to show the average of all entries in 'value' at 00:15 between 2021-12-01 and 2021-12-07. What would be the most efficient way to do this?
Thanks a lot!
CodePudding user response:
This isn't the prettiest or most pythonic way, but it works:
# create the example df
import random
from datetime import timedelta

import pandas as pd

df = pd.DataFrame(
    data=[random.randint(0, 5) for _ in range(2880)],
    index=pd.date_range('2021-11-08 01:00:00', '2021-12-08 00:45:00', freq='15min'),
    columns=['value'],
)

# add helper columns, splitting the index into date and time
df['time'] = df.index.time
df['date'] = df.index.date

# build the result by slicing on a one-week date window and a matching time;
# timedelta (rather than pd.DateOffset) keeps the bounds as plain dates,
# so they compare cleanly with the 'date' column
df['result'] = df.apply(
    lambda row: df.loc[
        df.date.between(
            row.date - timedelta(weeks=1),  # start: one week back
            row.date - timedelta(days=1),   # end: one day back
        )
        & (df.time == row.time)  # same time of day
    ].value.mean(),  # mean of 'value' over that slice
    axis=1,
)
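A quick sanity check of the slicing logic, on a seeded copy of the toy frame (the seed and the chosen timestamp are arbitrary, just for reproducibility): the masked mean for one row should match a manual date-slice plus `at_time` lookup.

```python
import random
from datetime import timedelta

import pandas as pd

random.seed(0)  # reproducible toy data
idx = pd.date_range('2021-11-08 01:00:00', '2021-12-08 00:45:00', freq='15min')
df = pd.DataFrame({'value': [random.randint(0, 5) for _ in idx]}, index=idx)
df['time'] = df.index.time
df['date'] = df.index.date

ts = pd.Timestamp('2021-12-08 00:15:00')  # arbitrary example row
window = df.date.between(ts.date() - timedelta(weeks=1),  # 2021-12-01
                         ts.date() - timedelta(days=1))   # 2021-12-07
result = df.loc[window & (df.time == ts.time()), 'value'].mean()

# manual check: the 00:15 values on 2021-12-01 .. 2021-12-07
manual = df.loc['2021-12-01':'2021-12-07'].at_time('00:15')['value'].mean()
print(result == manual)
```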
CodePudding user response:
Here is a faster solution that works on the datetime index only:
def get_mean(x):
    # mask for the past 7 days, up to but excluding the current timestamp
    date_mask = (df.index >= x.name - pd.Timedelta('7 days')) & (df.index < x.name)
    past_week_data = df.loc[date_mask]             # filter df by the mask
    times = past_week_data.at_time(x.name.time())  # keep rows at the same time of day
    return times['value'].mean()                   # mean of 'value'

df['mean'] = df.apply(get_mean, axis=1)
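Both answers still scan the frame once per row inside `apply`. If the index is strictly regular (exactly one row per 15 minutes, no gaps), within each time-of-day group consecutive rows are one day apart, so the "previous seven days at the same clock time" mean is just a shifted rolling mean per group. A vectorized sketch on hypothetical toy data (an alternative approach, not from either answer above):

```python
import pandas as pd

# toy frame: 8 days of 15-minute data with predictable values 0, 1, 2, ...
idx = pd.date_range('2021-01-01 00:00', periods=8 * 96, freq='15min')
df = pd.DataFrame({'value': range(len(idx))}, index=idx)

# within each time-of-day group, the previous 7 rows are exactly the
# previous 7 days at that time; shift(1) excludes the current day
df['mean'] = (
    df.groupby(df.index.time)['value']
      .transform(lambda s: s.shift(1).rolling(7, min_periods=1).mean())
)

# e.g. 2021-01-08 00:00 averages the 00:00 values of Jan 1 through Jan 7
```

Like the `apply`-based answers, early rows with less than a week of history average whatever days are available (`min_periods=1`), and the very first day has no history at all, so it comes out NaN.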