I have a pandas DataFrame called df
with 500 columns and 2 million records.
I am able to drop columns that contain more than 90% missing values.
But how can I drop an entire record if 90% or more of its columns have missing values?
I have seen a similar post for R, but I am coding in Python at the moment.
CodePudding user response:
You can use df.dropna() and set the thresh parameter to the value that corresponds to 10% of your columns, i.e. the minimum number of non-NA values a row must have in order to be kept.
# keep only rows with at least 50 non-NA values (10% of 500 columns)
df.dropna(axis=0, thresh=50, inplace=True)
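If you don't want to hard-code the 50, you can derive thresh from the column count. A minimal sketch, assuming df is your DataFrame (the toy data below is just for illustration); note that floor(...) + 1 also drops rows sitting exactly on the 90% boundary, which thresh=50 would keep:

import math
import numpy as np
import pandas as pd

# toy frame: 10 columns, so the 90% cut-off is 9 missing values per row
df = pd.DataFrame(np.ones((3, 10)))
df.iloc[0, 1:] = np.nan  # 9 of 10 values missing -> exactly 90% missing

# keep a row only if strictly more than 10% of its values are non-NA
thresh = math.floor(df.shape[1] * 0.1) + 1
df.dropna(axis=0, thresh=thresh, inplace=True)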
CodePudding user response:
You could use isna and mean on axis=1 to find the fraction of NaN values for each row, then select the rows where it's less than 0.9 (i.e. 90%) using loc:
# keep rows whose share of missing values is below 90%
out = df.loc[df.isna().mean(axis=1) < 0.9]
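A quick check of the boundary behaviour on toy data (not the asker's df): a row with exactly 90% missing values is dropped by this mask, since 0.9 < 0.9 is False.

import numpy as np
import pandas as pd

df = pd.DataFrame(np.ones((3, 10)))
df.iloc[0, 1:] = np.nan  # exactly 90% missing
df.iloc[1, 5:] = np.nan  # 50% missing

out = df.loc[df.isna().mean(axis=1) < 0.9]
print(out.index.tolist())  # [1, 2] -> the 90%-missing row is gone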