I have a DataFrame created from a data logger, where each point of data has its own timestamp, like this:
import numpy as np
import pandas as pd

df_orig = pd.DataFrame(
    {
        "val1": [1, np.nan, np.nan, 11, np.nan, np.nan, 21, np.nan, np.nan],
        "val2": [np.nan, 2, np.nan, np.nan, 12, np.nan, np.nan, 22, np.nan],
        "val3": [np.nan, np.nan, 3, np.nan, np.nan, 13, np.nan, np.nan, 23],
    },
    index=pd.to_datetime([
        "2021-01-01 00:00", "2021-01-01 00:00:01", "2021-01-01 00:00:02",
        "2021-01-01 00:01", "2021-01-01 00:01:01", "2021-01-01 00:01:02",
        "2021-01-01 00:02", "2021-01-01 00:02:01", "2021-01-01 00:02:02",
    ]),
)
val1 val2 val3
2021-01-01 00:00:00 1.0 NaN NaN
2021-01-01 00:00:01 NaN 2.0 NaN
2021-01-01 00:00:02 NaN NaN 3.0
2021-01-01 00:01:00 11.0 NaN NaN
2021-01-01 00:01:01 NaN 12.0 NaN
2021-01-01 00:01:02 NaN NaN 13.0
2021-01-01 00:02:00 21.0 NaN NaN
2021-01-01 00:02:01 NaN 22.0 NaN
2021-01-01 00:02:02 NaN NaN 23.0
I don't actually need the precision of when each single data point was logged. I would like to condense the DataFrame by eliminating the NaNs and merging the rows that were logged very close together. The result should look like this:
val1 val2 val3
2021-01-01 00:00:00 1 2 3
2021-01-01 00:01:00 11 12 13
2021-01-01 00:02:00 21 22 23
Is there a way to do this?
CodePudding user response:
If possible, simplify the solution by resampling per minute and aggregating with max (or min, or first):
df = df_orig.resample('Min').max()
print (df)
val1 val2 val3
2021-01-01 00:00:00 1.0 2.0 3.0
2021-01-01 00:01:00 11.0 12.0 13.0
2021-01-01 00:02:00 21.0 22.0 23.0
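Because every minute bucket holds at most one non-NaN value per column, max, min, and first all produce the same result here. A minimal sketch of the variants, plus an optional cast to pandas' nullable Int64 dtype to drop the ".0" and match the desired integer output (the cast is my addition, not part of the solution above):

# first() keeps the first non-NaN value in each minute bucket
df_first = df_orig.resample('Min').first()

# min() works equally well, since each bucket has at most one value per column
df_min = df_orig.resample('Min').min()

# optional: cast to the nullable integer dtype so the output shows 1, 2, 3
# instead of 1.0, 2.0, 3.0
df_int = df_orig.resample('Min').max().astype('Int64')
print(df_int)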