Pyspark - hive dataframe sort by two columns breaks the dataframe


I have an issue sorting a dataframe read from HDFS through the Hive context. I am trying to sort a dataframe with a structure similar to this:

+---+--------------+---------------+
| id|parameter_name|parameter_value|
+---+--------------+---------------+
|id1|       name_en|        value a|
|id1|       name_il|        value b|
|id1|    address_en|        value c|
|id1|    address_il|        value d|
|id2|       name_il|        value f|
|id2|       name_en|        value e|
|id2|    address_il|        value h|
|id1|    address_en|        value g|
+---+--------------+---------------+

I am trying to sort this dataframe so that the ids are sorted and, for each id, the parameter_name values appear in the following sequence:

name_en
name_il
address_en
address_il

Note that this is not the case in the example above: the order of the names and addresses is flipped between the ids.

Trying df.sort(["id", "parameter_name"]) yields mixed results, mixing the dataframe further and splitting the ids:

id1, name_en
id1, name_il
id2, name_il
id2, name_en
id1, address_en
id1, address_il
id2, address_il
id2, address_en

CodePudding user response:

I created your dataframe but assigned random values to parameter_value so the order is not relevant anymore.

from random import random

data = [
    {"id": "id1", "parameter_name": "name_en", "parameter_value": random()},
    {"id": "id1", "parameter_name": "name_il", "parameter_value": random()},
    {"id": "id1", "parameter_name": "address_en", "parameter_value": random()},
    {"id": "id1", "parameter_name": "address_il", "parameter_value": random()},
    {"id": "id2", "parameter_name": "name_il", "parameter_value": random()},
    {"id": "id2", "parameter_name": "name_en", "parameter_value": random()},
    {"id": "id2", "parameter_name": "address_il", "parameter_value": random()},
    {"id": "id2", "parameter_name": "address_en", "parameter_value": random()},
]
# `spark` is the active SparkSession (available by default in the pyspark shell)
df = spark.createDataFrame(data)

df.show()
+---+--------------+-------------------+
| id|parameter_name|    parameter_value|
+---+--------------+-------------------+
|id1|    address_il|0.11850447351294957|
|id2|       name_en|0.18902815459657452|
|id2|    address_il|  0.294998203578158|
|id1|    address_en|0.48741740190944827|
|id2|       name_il| 0.5651073044407224|
|id2|    address_en| 0.6530661784882391|
|id1|       name_il| 0.6797674631659714|
|id1|       name_en| 0.9887386653580036|
+---+--------------+-------------------+

Then, I create an ordering column to maintain the artificial order you need:

from pyspark.sql import functions as F

# Map each parameter_name to its rank in the desired order
ordering_col = (
    F.when(F.col("parameter_name") == "name_en", 1)
    .when(F.col("parameter_name") == "name_il", 2)
    .when(F.col("parameter_name") == "address_en", 3)
    .when(F.col("parameter_name") == "address_il", 4)
)

df.orderBy("id", ordering_col).show()
+---+--------------+-------------------+
| id|parameter_name|    parameter_value|
+---+--------------+-------------------+
|id1|       name_en| 0.9887386653580036|
|id1|       name_il| 0.6797674631659714|
|id1|    address_en|0.48741740190944827|
|id1|    address_il|0.11850447351294957|
|id2|       name_en|0.18902815459657452|
|id2|       name_il| 0.5651073044407224|
|id2|    address_en| 0.6530661784882391|
|id2|    address_il|  0.294998203578158|
+---+--------------+-------------------+
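
If the list of parameter names grows, the same when() chain can be built from a plain Python list instead of being written by hand. A minimal sketch, assuming the same df as above (the name param_order is introduced here for illustration):

from pyspark.sql import functions as F

# Desired order as a plain Python list; adding a new parameter_name
# only requires appending it to this list.
param_order = ["name_en", "name_il", "address_en", "address_il"]

# Build the same when() chain programmatically: rank 0 for the first name,
# rank 1 for the second, and so on.
ordering_col = F.when(F.col("parameter_name") == param_order[0], 0)
for rank, name in enumerate(param_order[1:], start=1):
    ordering_col = ordering_col.when(F.col("parameter_name") == name, rank)

df.orderBy("id", ordering_col).show()

Rows whose parameter_name is not in the list get a null rank and will be grouped together at one end of each id.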

CodePudding user response:

Simply convert the PySpark dataframe to a pandas dataframe and perform the sort there. You can convert it back to a Spark dataframe with spark.createDataFrame(panda_df) like so:

# `dataframe` is the original Spark dataframe; createDataFrame is called on the SparkSession
panda_df = dataframe.toPandas().sort_values(["id", "parameter_name"], ascending=(True, False))
sorted_df = spark.createDataFrame(panda_df)
sorted_df.show()
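
Note that sorting plain strings this way only gives an alphabetical (or reverse-alphabetical) order. If the exact name_en, name_il, address_en, address_il sequence is needed on the pandas side, one possible sketch (assuming pandas is available on the driver; param_order is introduced here for illustration) uses an ordered Categorical:

import pandas as pd

param_order = ["name_en", "name_il", "address_en", "address_il"]

panda_df = dataframe.toPandas()
# An ordered Categorical makes sort_values follow the custom order
# instead of plain alphabetical order.
panda_df["parameter_name"] = pd.Categorical(
    panda_df["parameter_name"], categories=param_order, ordered=True
)
panda_df = panda_df.sort_values(["id", "parameter_name"])

# Convert back to plain strings before handing the frame back to Spark.
panda_df["parameter_name"] = panda_df["parameter_name"].astype(str)
sorted_df = spark.createDataFrame(panda_df)
sorted_df.show()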