Given a dummy data frame as below:
----------------- ----------------- -----------
| Start_Date      | End_Date        | Avg_Price |
----------------- ----------------- -----------
| 01/01/2021 0:00 | 01/01/2021 0:59 | 10        |
| 01/01/2021 0:01 | 01/01/2021 0:01 | 20        |
| 01/01/2021 0:02 | 01/01/2021 0:02 | 24        |
| 01/01/2021 0:03 | 01/01/2021 0:03 | 23        |
| 01/01/2021 0:07 | 01/01/2021 0:07 | 34        |
| 01/01/2021 0:08 | 01/01/2021 0:08 | 37        |
| 01/01/2021 0:10 | 01/01/2021 0:10 | 21        |
| 01/01/2021 0:12 | 01/01/2021 0:12 | 22        |
| 01/01/2021 0:14 | 01/01/2021 0:14 | 56        |
----------------- ----------------- -----------
The above data frame can be generated using the code below:
import pandas as pd

data = {'Start_Date': ['2021-01-01 00:00:00', '2021-01-01 00:01:00', '2021-01-01 00:02:00',
                       '2021-01-01 00:03:00', '2021-01-01 00:07:00', '2021-01-01 00:08:00',
                       '2021-01-01 00:10:00', '2021-01-01 00:12:00', '2021-01-01 00:14:00'],
        'End_Date': ['2021-01-01 00:59:00', '2021-01-01 00:01:59', '2021-01-01 00:02:59',
                     '2021-01-01 00:03:59', '2021-01-01 00:07:59', '2021-01-01 00:08:59',
                     '2021-01-01 00:10:59', '2021-01-01 00:12:59', '2021-01-01 00:14:59'],
        'Avg_Price': [10, 20, 24, 23, 34, 37, 21, 22, 56]}

df1 = pd.DataFrame(data)
df1['Start_Date'] = pd.to_datetime(df1['Start_Date'])
df1['End_Date'] = pd.to_datetime(df1['End_Date'])
It can be seen that there are date ranges where data is missing. The missing ranges show up as NaT/NaN rows in the data frame below:
--------------------- --------------------- -----------
| Start_Date          | End_Date            | Avg_Price |
--------------------- --------------------- -----------
| 2021-01-01 00:00:00 | 2021-01-01 00:59:00 | 10        |
| 2021-01-01 00:01:00 | 2021-01-01 00:01:59 | 20        |
| 2021-01-01 00:02:00 | 2021-01-01 00:02:59 | 24        |
| 2021-01-01 00:03:00 | 2021-01-01 00:03:59 | 23        |
| 2021-01-01 00:04:00 | NaT                 | NaN       |
| 2021-01-01 00:05:00 | NaT                 | NaN       |
| 2021-01-01 00:06:00 | NaT                 | NaN       |
| 2021-01-01 00:07:00 | 2021-01-01 00:07:59 | 34        |
| 2021-01-01 00:08:00 | 2021-01-01 00:08:59 | 37        |
| 2021-01-01 00:09:00 | NaT                 | NaN       |
| 2021-01-01 00:10:00 | 2021-01-01 00:10:59 | 21        |
| 2021-01-01 00:11:00 | NaT                 | NaN       |
| 2021-01-01 00:12:00 | 2021-01-01 00:12:59 | 22        |
| 2021-01-01 00:13:00 | NaT                 | NaN       |
| 2021-01-01 00:14:00 | 2021-01-01 00:14:59 | 56        |
--------------------- --------------------- -----------
The above can be generated using the code below:
# full minute-by-minute index covering the whole period
df2 = pd.DataFrame(index=pd.date_range('2021-01-01 00:00:00', '2021-01-01 00:14:00', freq='min'))
# left join on Start_Date: minutes without data get NaT/NaN
df2 = df2.join(df1.set_index('Start_Date'))
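As a side note, the same frame can also be built in one step with DataFrame.reindex instead of the empty-frame-plus-join pattern; a minimal sketch assuming df1 as defined above (df2_alt is just an illustrative name):
full_index = pd.date_range('2021-01-01 00:00:00', '2021-01-01 00:14:00', freq='min')
df2_alt = df1.set_index('Start_Date').reindex(full_index)   # NaT/NaN where minutes are missing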
I want a list of lists containing the date ranges where data is missing.
Expected Output
result = [['2021-01-01 00:04:00','2021-01-01 00:06:00'], ['2021-01-01 00:09:00','2021-01-01 00:09:00'],
['2021-01-01 00:11:00', '2021-01-01 00:11:00'], ['2021-01-01 00:13:00','2021-01-01 00:13:00']]
What would be an elegant way to achieve the desired output?
CodePudding user response:
The idea is to create groups of consecutive NaNs: build a boolean mask with Series.notna, turn it into group ids with Series.cumsum, keep only the NaN rows by inverting the mask with ~, aggregate each group with GroupBy.agg using min and max, and finally convert to strings and then to a nested list:
m = df2['End_Date'].notna()              # True where the row has data

L = (df2.index.to_series()
        .groupby(m.cumsum()[~m])         # one group id per run of consecutive NaNs
        .agg(['min', 'max'])             # first and last timestamp of each gap
        .astype(str)
        .to_numpy()
        .tolist())
print(L)
[['2021-01-01 00:04:00', '2021-01-01 00:06:00'],
['2021-01-01 00:09:00', '2021-01-01 00:09:00'],
['2021-01-01 00:11:00', '2021-01-01 00:11:00'],
['2021-01-01 00:13:00', '2021-01-01 00:13:00']]
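To see why the grouping key forms one group per gap, here is a small optional sketch reusing the names from the snippet above; the values in the comments were worked out from the sample data (the Series Name/dtype footer is omitted):
m = df2['End_Date'].notna()   # True where the minute has data
print(m.cumsum()[~m])         # group id for each missing minute
# 2021-01-01 00:04:00    4
# 2021-01-01 00:05:00    4
# 2021-01-01 00:06:00    4    <- consecutive NaNs share one id
# 2021-01-01 00:09:00    6
# 2021-01-01 00:11:00    7
# 2021-01-01 00:13:00    8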
CodePudding user response:
A combination of null checking, np.split, and a list comprehension works on the sample data shared:
import numpy as np

# running count of null End_Date rows
boolean = df2.End_Date.isna().cumsum()
# cumulative counts reached as each gap closes; they double as split positions into `nulls`
val = boolean[boolean.gt(0) & boolean.duplicated()].array.unique()
# timestamps of the missing rows
nulls = df2.index[df2.End_Date.isna()]

[[ent[0], ent[-1]]
 for ent in np.split(nulls, val)
 if ent.size > 0]
[[Timestamp('2021-01-01 00:04:00'), Timestamp('2021-01-01 00:06:00')],
[Timestamp('2021-01-01 00:09:00'), Timestamp('2021-01-01 00:09:00')],
[Timestamp('2021-01-01 00:11:00'), Timestamp('2021-01-01 00:11:00')],
[Timestamp('2021-01-01 00:13:00'), Timestamp('2021-01-01 00:13:00')]]
# if you prefer string form:
[[str(ent[0]), str(ent[-1])]
 for ent in np.split(nulls, val)
 if ent.size > 0]
[['2021-01-01 00:04:00', '2021-01-01 00:06:00'],
['2021-01-01 00:09:00', '2021-01-01 00:09:00'],
['2021-01-01 00:11:00', '2021-01-01 00:11:00'],
['2021-01-01 00:13:00', '2021-01-01 00:13:00']]
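If this is needed repeatedly, either approach can be wrapped in a small helper. The sketch below packages the groupby version from the first answer; the function name missing_ranges and the value_col parameter are illustrative, not from either answer:
def missing_ranges(frame, value_col='End_Date'):
    """Return [[start, end], ...] strings for each run of rows where value_col is null."""
    m = frame[value_col].notna()
    return (frame.index.to_series()
                 .groupby(m.cumsum()[~m])
                 .agg(['min', 'max'])
                 .astype(str)
                 .to_numpy()
                 .tolist())

result = missing_ranges(df2)   # same nested list as shown above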