Given below is a PySpark dataframe, and I need to sum the row values with a groupBy.
+-------------------+---------+------------------------------+------------------------------+------------------------------+
|            load_dt|org_cntry|sum(srv_curr_vo_qty_accs_mthd)|sum(srv_curr_bb_qty_accs_mthd)|sum(srv_curr_tv_qty_accs_mthd)|
+-------------------+---------+------------------------------+------------------------------+------------------------------+
|2021-12-06 00:00:00|     null|                           NaN|                           NaN|                           NaN|
|2021-12-06 00:00:00|   PANAMA|                      360126.0|                      214229.0|                      207950.0|
+-------------------+---------+------------------------------+------------------------------+------------------------------+
Conditions:
1. groupBy(load_dt, org_cntry)
2. sum the row values: sum(srv_curr_vo_qty_accs_mthd) + sum(srv_curr_bb_qty_accs_mthd) + sum(srv_curr_tv_qty_accs_mthd)
Expected output:
load_dt     org_cntry  total_sum
2021-12-06  Panama     782305
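For reference, a minimal sketch of how a comparable input dataframe could be built. This is my assumption, not part of the question: the underlying, un-aggregated columns are taken to be named srv_curr_vo_qty_accs_mthd, srv_curr_bb_qty_accs_mthd and srv_curr_tv_qty_accs_mthd, as the answers below reference them.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical reconstruction of the sample data shown above.
df = spark.createDataFrame(
    [
        ("2021-12-06 00:00:00", None,     float("nan"), float("nan"), float("nan")),
        ("2021-12-06 00:00:00", "PANAMA", 360126.0,     214229.0,     207950.0),
    ],
    ["load_dt", "org_cntry",
     "srv_curr_vo_qty_accs_mthd",
     "srv_curr_bb_qty_accs_mthd",
     "srv_curr_tv_qty_accs_mthd"],
)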
CodePudding user response:
Simply sum (+) your results:
from pyspark.sql import functions as F

df.groupBy("load_dt", "org_cntry").agg(
    (
        F.sum("srv_curr_vo_qty_accs_mthd")
        + F.sum("srv_curr_bb_qty_accs_mthd")
        + F.sum("srv_curr_tv_qty_accs_mthd")
    ).alias("total_sum")
)
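As a follow-up (my own addition, not part of the answer above): the null/NaN row in the sample would make its group's total NaN, because NaN propagates through sum. If those NaNs should count as 0 instead, one option is nanvl:

# Sketch, assuming NaN in the source columns should be treated as 0.
result = df.groupBy("load_dt", "org_cntry").agg(
    (
        F.sum(F.nanvl(F.col("srv_curr_vo_qty_accs_mthd"), F.lit(0.0)))
        + F.sum(F.nanvl(F.col("srv_curr_bb_qty_accs_mthd"), F.lit(0.0)))
        + F.sum(F.nanvl(F.col("srv_curr_tv_qty_accs_mthd"), F.lit(0.0)))
    ).alias("total_sum")
)
result.show()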
CodePudding user response:
Use Spark 2.4+ higher-order functions for this case.
Example:
#sample dataframe
#+-------------------+---------+--------+--------+--------+
#|            load_dt|org_cntry|      s1|      s2|      s3|
#+-------------------+---------+--------+--------+--------+
#|2021-12-06 00:00:00|   PANAMA|360126.0|214229.0|207950.0|
#+-------------------+---------+--------+--------+--------+
#create an array from the sum columns, then add all the array elements.
from pyspark.sql.functions import expr

df.selectExpr("*", "AGGREGATE(array(s1,s2,s3), cast(0 as double), (x, y) -> x + y) total_sum").show()

#using withColumn
df.withColumn("total_sum", expr("AGGREGATE(array(s1,s2,s3), cast(0 as double), (x, y) -> x + y)")).show()
#+-------------------+---------+--------+--------+--------+---------+
#|            load_dt|org_cntry|      s1|      s2|      s3|total_sum|
#+-------------------+---------+--------+--------+--------+---------+
#|2021-12-06 00:00:00|   PANAMA|360126.0|214229.0|207950.0| 782305.0|
#+-------------------+---------+--------+--------+--------+---------+
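Since the question also asks for a groupBy on (load_dt, org_cntry), a possible follow-up (my sketch, not part of the original answer, reusing the s1/s2/s3 sample columns) is to compute the row-wise total first and then aggregate it per group:

from pyspark.sql.functions import expr

# Row-wise total per record via AGGREGATE, then sum it per (load_dt, org_cntry).
(
    df.withColumn("row_total",
                  expr("AGGREGATE(array(s1,s2,s3), cast(0 as double), (x, y) -> x + y)"))
      .groupBy("load_dt", "org_cntry")
      .agg(expr("sum(row_total) as total_sum"))
      .show()
)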