Pivot duplicate values into 2 different columns

Time:05-05

I have the following data imported into PySpark dataframe:

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[4]").appName("pyspark").getOrCreate()
# header=True so the first CSV row is read as the column names (id, answer)
df = spark.read.csv("example.csv", header=True)
df.show()

# +--+------+
# |id|answer|
# +--+------+
# | 1|a     |
# | 1|b     |
# | 2|c     |
# | 2|d     |
# | 3|e     |
# | 4|f     |
# | 4|g     |
# +--+------+

Some ids may be duplicated and some may not. I would like the following output:

+--+------+------+
|id|first |second|
+--+------+------+
| 1|a     |b     |
| 2|c     |d     |
| 3|e     |Null  |
| 4|f     |g     |
+--+------+------+

CodePudding user response:

I would use groupBy and aggregate with collect_list. I've added array_sort because collect_list does not guarantee element order, so sorting makes the result deterministic.

from pyspark.sql import functions as F

# Collect all answers per id into a sorted array
df = df.groupBy('id').agg(F.array_sort(F.collect_list('answer')).alias('list'))
df = df.select(
    'id',
    F.element_at('list', 1).alias('first'),
    # element_at returns null when the index is out of bounds,
    # so ids with a single answer get null in "second"
    F.element_at('list', 2).alias('second'),
)
df.show()
# +---+-----+------+
# | id|first|second|
# +---+-----+------+
# |  1|    a|     b|
# |  2|    c|     d|
# |  3|    e|  null|
# |  4|    f|     g|
# +---+-----+------+
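If you want to sanity-check the expected output on small samples without spinning up a Spark session, here is a plain-Python sketch of the same first/second split (my own helper name, not part of any library):

```python
from itertools import groupby

def first_second(rows):
    """rows: iterable of (id, answer) pairs; returns (id, first, second) triples."""
    out = []
    # sorted() orders by id, then answer, mirroring array_sort on collect_list
    for key, group in groupby(sorted(rows), key=lambda r: r[0]):
        answers = [a for _, a in group]
        # mirror element_at: missing second element becomes None (null)
        out.append((key, answers[0], answers[1] if len(answers) > 1 else None))
    return out

rows = [(1, "a"), (1, "b"), (2, "c"), (2, "d"), (3, "e"), (4, "f"), (4, "g")]
print(first_second(rows))
# → [(1, 'a', 'b'), (2, 'c', 'd'), (3, 'e', None), (4, 'f', 'g')]
```

Note this quietly drops any third or later answer per id, exactly as the element_at approach does; if ids can have more than two answers and you need them, keep the full collected array instead.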