I'm trying to work out a way to put a NaN in the numbered columns that fall outside the start/end values given in two other columns, per row. Say I have the below dataframe:
df = pd.DataFrame({'39' : [1, np.nan, 3],
'40' : [2, 4, 5],
'41' : [3, 1, 4],
'42' : [2, 5, 2],
'43' : [1, 1, np.nan],
'start' : [39, 40, 41],
'end' : [41, 41, 43]})
39 40 41 42 43 start end
0 1.0 2 3 2 1 39 41
1 NaN 4 1 5 1 40 41
2 3.0 5 4 2 3 41 43
I want to put a nan in the numbered columns that aren't between the start/end column numbers (inclusive), to get the below:
39 40 41 42 43 start end
0 1.0 2.0 3 NaN NaN 39 41
1 NaN 4.0 1 NaN NaN 40 41
2 NaN NaN 4 2.0 3.0 41 43
The only way I can currently think of is to iterate through the rows or columns and check whether each column number falls between start and end, but I know iterating through dataframes is bad practice. I could turn the columns into lists and iterate through those and reassign, but I'm wondering if there is a more efficient way to achieve this.
Edit: I should note that the numerical columns are week numbers, so it is possible for them to go over a year (e.g. 51, 52, 1, 2, 3 then start could be 51, and end could be 1). I'm wondering if I need to make a list of the column numbers to keep before doing this, as using < or > won't work for this case.
An example of this:
df2 = pd.DataFrame({'51' : [1, np.nan, 3],
'52' : [2, 4, 5],
'1' : [3, 1, 4],
'2' : [2, 5, 2],
'3' : [1, 1, 3],
'start' : [51, 52, 52],
'end' : [1, 2, 1]})
51 52 1 2 3 start end
0 1.0 2 3 2 1 51 1
1 NaN 4 1 5 1 52 2
2 3.0 5 4 2 3 52 1
Output:
51 52 1 2 3 start end
0 1.0 2 3 NaN NaN 51 1
1 NaN 4 1 5.0 NaN 52 2
2 NaN 5 4 NaN NaN 52 1
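The "list of the column numbers to keep" idea from the edit can be sketched with a small helper (hypothetical, not a pandas function): rotate the ordered week labels so start comes first, then slice up to end. This handles the wrap-around case that plain < / > comparisons cannot.

```python
def weeks_between(labels, start, end):
    """Return labels from start to end inclusive, in column order,
    wrapping around the end of the list (e.g. a year boundary)."""
    i = labels.index(start)
    rotated = labels[i:] + labels[:i]  # start label moves to the front
    return rotated[:rotated.index(end) + 1]
```

For the second row of df2 this gives `weeks_between([51, 52, 1, 2, 3], 52, 2) == [52, 1, 2]`, matching the values kept in the desired output.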
CodePudding user response:
We can make use of stack and unstack here:
df = df.set_index(["start", "end"]).stack()
idx = df.index
values = idx.get_level_values(2).astype(int)
start = idx.get_level_values(0)
end = idx.get_level_values(1)
df.where((values >= start) & (values <= end)).unstack().reset_index()
start end 39 40 41 42 43
0 39 41 1.0 2.0 3.0 NaN NaN
1 40 41 NaN 4.0 1.0 NaN NaN
2 41 43 NaN NaN 4.0 2.0 NaN
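A self-contained version of the above, using the example frame from the question (note that unstack/reset_index moves start and end to the front, as in the output shown):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'39': [1, np.nan, 3],
                   '40': [2, 4, 5],
                   '41': [3, 1, 4],
                   '42': [2, 5, 2],
                   '43': [1, 1, np.nan],
                   'start': [39, 40, 41],
                   'end': [41, 41, 43]})

# Move start/end into the index so the week labels become a stacked index level
stacked = df.set_index(['start', 'end']).stack()
weeks = stacked.index.get_level_values(2).astype(int)
start = stacked.index.get_level_values(0)
end = stacked.index.get_level_values(1)

# Keep values whose week falls in [start, end]; everything else becomes NaN
out = stacked.where((weeks >= start) & (weeks <= end)).unstack().reset_index()
```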
CodePudding user response:
NumPy solution, comparing the column labels against start and end:
# convert the week labels to integers so they compare numerically
df.columns = df.columns[:-2].astype(int).tolist() + df.columns[-2:].tolist()
s = df['start'].to_numpy()
e = df['end'].to_numpy()
cols = df.columns[:-2].to_numpy()
m = (s[:, None] <= cols) & (e[:, None] >= cols)
df.iloc[:, :-2] = df.iloc[:, :-2].where(m)
print (df)
39 40 41 42 43 start end
0 1.0 2.0 3 NaN NaN 39 41
1 NaN 4.0 1 NaN NaN 40 41
2 NaN NaN 4 2.0 NaN 41 43
EDIT: If comparing by value is not possible (the wrap-around case), build masks with cumulative sums instead: one running left-to-right from the start label, one running right-to-left from the end label, and keep the positions where both equal 1:
s = df['start'].astype(str).to_numpy()
e = df['end'].astype(str).to_numpy()
cols = df.columns[:-2].to_numpy()
m1 = np.cumsum((s[:, None] == cols), axis=1) == 1
m2 = np.cumsum((e[:, None] == cols[::-1]), axis=1)[:, ::-1] == 1
m = m1 & m2
df.iloc[:, :-2] = df.iloc[:, :-2].where(m)
print (df)
51 52 1 2 3 start end
0 1.0 2 3 NaN NaN 51 1
1 NaN 4 1 5.0 NaN 52 2
2 NaN 5 4 NaN NaN 52 1
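The same masking run end to end on df2 from the question, as a sketch (the result is kept in a separate frame here rather than assigned back, to avoid writing floats into integer columns):

```python
import numpy as np
import pandas as pd

df2 = pd.DataFrame({'51': [1, np.nan, 3],
                    '52': [2, 4, 5],
                    '1': [3, 1, 4],
                    '2': [2, 5, 2],
                    '3': [1, 1, 3],
                    'start': [51, 52, 52],
                    'end': [1, 2, 1]})

s = df2['start'].astype(str).to_numpy()
e = df2['end'].astype(str).to_numpy()
cols = df2.columns[:-2].to_numpy()

# m1: True from the start label onwards (cumsum reaches 1 at the match)
m1 = np.cumsum(s[:, None] == cols, axis=1) == 1
# m2: True up to and including the end label (cumsum run from the right)
m2 = np.cumsum(e[:, None] == cols[::-1], axis=1)[:, ::-1] == 1

out = df2.iloc[:, :-2].where(m1 & m2)
```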
CodePudding user response:
for column in ['39', '40', '41', '42', '43']:
    df.loc[(float(column) < df['start']) | (float(column) > df['end']), column] = np.nan
(using df.loc for the assignment avoids the chained-assignment warning, and np.nan rather than the deprecated np.NaN) — this gives:
39 40 41 42 43 start end
0 1.0 2.0 3.0 NaN NaN 39 41
1 NaN 4.0 1.0 NaN NaN 40 41
2 NaN NaN 4.0 2.0 NaN 41 43