drop column based on condition pyspark


Can you drop columns based on a condition in PySpark?

The condition under which I want to drop a column is:

df_train.groupby().sum() == 0

Here is a quick example in pandas:

import numpy as np
import pandas as pd

# create a dataframe; column 'a' contains only zeros
df = pd.DataFrame(np.array([[0, 2, 1], [0, 2, 8], [0, 6, 2]]), columns=['a', 'b', 'c'])

# keep only the columns whose sum is non-zero
df.loc[:, df.sum(axis=0) != 0]
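
On this frame only column a sums to 0, so the expression keeps b and c:

print(df.loc[:, df.sum(axis=0) != 0])
#    b  c
# 0  2  1
# 1  2  8
# 2  6  2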

If there are multiple ways, which one would be preferred?

CodePudding user response:

If I understood correctly, you want to drop every column whose sum equals 0.

You can first calculate the sum of each column, then build the list of columns whose sum is 0 and pass that list to the df.drop() method:

from pyspark.sql import functions as F

df = spark.createDataFrame([(0, 1, 2), (-1, 3, -6), (1, 4, 0)], ["col1", "col2", "col3"])

# compute the sum of every column in a single aggregation pass
sums = df.select(*[F.sum(c).alias(c) for c in df.columns]).first()

# collect the names of the columns whose sum is 0
cols_to_drop = [c for c in sums.asDict() if sums[c] == 0]

df = df.drop(*cols_to_drop)

df.show()
#+----+----+
#|col2|col3|
#+----+----+
#|   1|   2|
#|   3|  -6|
#|   4|   0|
#+----+----+
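
As for which way is preferred: the aggregation is the expensive part either way, so the variants differ mostly in style. One alternative sketch (assuming the same df as defined above; df_filtered is just an illustrative name) selects the surviving columns instead of dropping the zero-sum ones:

from pyspark.sql import functions as F

# same one-pass aggregation as above
sums = df.select(*[F.sum(c).alias(c) for c in df.columns]).first()

# keep only the columns whose sum is non-zero
df_filtered = df.select(*[c for c in df.columns if sums[c] != 0])
df_filtered.show()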