Collect rows as an array of a Spark dataframe after a group by using PySpark

I have the following df:

url | source | value | name
----------------------------------
a   | USA    | 1     | registry_a
a   | USA    | 1     | registry_b
a   | FRA    | 1     | registry_a
b   | DEU    | 2     | null
b   | DEU    | 1     | registry_b
b   | FRA    | 1     | registry_a
c   | ITA    | 1     | registry_a
c   | ITA    | 0     | registry_b
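
For reproducibility, here's a minimal sketch that builds this sample dataframe (the value column is typed as a string here, since the values come back quoted in the answer's show() output further down):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data; `value` is a string to match the quoted values
# in the answer's show() output below.
samples = spark.createDataFrame(
    [
        ("a", "USA", "1", "registry_a"),
        ("a", "USA", "1", "registry_b"),
        ("a", "FRA", "1", "registry_a"),
        ("b", "DEU", "2", None),
        ("b", "DEU", "1", "registry_b"),
        ("b", "FRA", "1", "registry_a"),
        ("c", "ITA", "1", "registry_a"),
        ("c", "ITA", "0", "registry_b"),
    ],
    ["url", "source", "value", "name"],
)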

I'd like to group by url and source and collect all the rows of each group into an array column called data:

url | source | data 
----------------------------------------------------------------------------------
a   | USA    | [{"url": "a", "source": "USA", "value": 1, "name": "registry_a"},
    |        |  {"url": "a", "source": "USA", "value": 1, "name": "registry_b"}]
a   | FRA    | [{"url": "a", "source": "FRA", "value": 1, "name": "registry_a"}] 
b   | DEU    | [{"url": "b", "source": "DEU", "value": 2, "name": null},
    |        |  {"url": "b", "source": "DEU", "value": 1, "name": "registry_b"}] 
b   | FRA    | [{"url": "b", "source": "FRA", "value": 1, "name": "registry_a"}] 
c   | ITA    | [{"url": "c", "source": "ITA", "value": 1, "name": "registry_a"},
    |        |  {"url": "c", "source": "ITA", "value": 0, "name": "registry_b"}] 

I tried this but it doesn't work:

samples_to_map_df = (samples
                     .groupBy("url", "source")
                     .agg(F.map_from_arrays(F.collect_list(F.col("url")),
                                            F.collect_list(F.col("source")),
                                            F.collect_list(F.col("value")),
                                            F.collect_list(F.col("name")))
                           .alias("data"))
)

I get this error:

TypeError: map_from_arrays() takes 2 positional arguments but 4 were given
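
Looking at the signature, map_from_arrays() zips exactly two array columns, one of keys and one of values, into a single map column, so it can't combine four collected columns into struct-like rows. A minimal sketch of what it actually does, using hypothetical literal arrays (and the spark session built above):

import pyspark.sql.functions as F

# map_from_arrays pairs a keys array with a values array into one map column.
demo = spark.range(1).select(
    F.map_from_arrays(
        F.array(F.lit("k1"), F.lit("k2")),  # keys
        F.array(F.lit(1), F.lit(2)),        # values
    ).alias("m")  # m is a map<string,int> containing {k1 -> 1, k2 -> 2}
)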

CodePudding user response:

You need to collect a list of structs, then use the to_json function to get the desired output:

import pyspark.sql.functions as F

samples_to_map_df = samples.groupBy("url", "source").agg(
    F.to_json(  # serialize the collected array of structs to a JSON string
        F.collect_list(
            # one struct per row, carrying every column of the group
            F.struct(*[F.col(c).alias(c) for c in samples.columns])
        )
    ).alias("data")
)

samples_to_map_df.show(truncate=False)

#+---+------+-----------------------------------------------------------------------------------------------------------------------+
#|url|source|data                                                                                                                   |
#+---+------+-----------------------------------------------------------------------------------------------------------------------+
#|a  |USA   |[{"url":"a","source":"USA","value":"1","name":"registry_a"},{"url":"a","source":"USA","value":"1","name":"registry_b"}]|
#|c  |ITA   |[{"url":"c","source":"ITA","value":"1","name":"registry_a"},{"url":"c","source":"ITA","value":"0","name":"registry_b"}]|
#|b  |DEU   |[{"url":"b","source":"DEU","value":"2"},{"url":"b","source":"DEU","value":"1","name":"registry_b"}]                    |
#|a  |FRA   |[{"url":"a","source":"FRA","value":"1","name":"registry_a"}]                                                           |
#|b  |FRA   |[{"url":"b","source":"FRA","value":"1","name":"registry_a"}]                                                           |
#+---+------+-----------------------------------------------------------------------------------------------------------------------+
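
If data should stay a true array of structs rather than a JSON string, the same pattern works by simply dropping to_json (a sketch under that assumption; the samples_as_structs_df name is just illustrative):

import pyspark.sql.functions as F

# Keep `data` as array<struct<url,source,value,name>> instead of a JSON string.
samples_as_structs_df = samples.groupBy("url", "source").agg(
    F.collect_list(
        F.struct(*[F.col(c).alias(c) for c in samples.columns])
    ).alias("data")
)

Either way, note that collect_list gives no ordering guarantee, so the structs inside each array may come back in any order.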