Pyspark combine dataframes of different length without duplicating

I have these three dfs:

id | name
------------------------
1  | {"value": "bob"}
1  | {"value": "Robert"}
2  | {"value": "Mary"}

id | dob
----------------------------
1  | {"value": "21-04-1988"}
2  | {"value": null}

id | country
--------------------
1  | {"value": "IT"}
1  | {"value": "DE"}
2  | {"value": "FR"}
2  | {"value": "ES"}

And I want to combine them, but I don't want to duplicate information.

id | name                  | dob                     |country
----------------------------------------------------------------------
1  | {"value": "bob"}      | {"value": "21-04-1988"} | {"value": "IT"}
1  | {"value": "Robert"}   | Null                    | {"value": "DE"}
2  | {"value": "Mary"}     | {"value": Null}         | {"value": "FR"}
2  | Null                  | Null                    | {"value": "ES"}

I tried chaining outer joins, but it doesn't produce the table above.

name = spark.createDataFrame(
    [
        (1, {"value": "bob"}),
        (1, {"value": "Robert"}),
        (2, {"value": "Mary"}),
    ],
    ["id", "name"],
)

dob = spark.createDataFrame(
    [
        (1, {"value": "21-04-1988"}),
        (2, {"value": None}),
    ],
    ["id", "dob"],
)

country = spark.createDataFrame(
    [
        (1, {"value": "IT"}),
        (1, {"value": "DE"}),
        (2, {"value": "FR"}),
        (2, {"value": "ES"}),
    ],
    ["id", "country"],
)


(name.join(dob, "id", "outer").join(country, "id", "outer")).show()

produces this:

id | name               | dob                    | country
---------------------------------------------------------------
1  | {"value":"Robert"} | {"value":"21-04-1988"} | {"value":"DE"}
1  | {"value":"Robert"} | {"value":"21-04-1988"} | {"value":"IT"}
1  | {"value":"bob"}    | {"value":"21-04-1988"} | {"value":"DE"}
1  | {"value":"bob"}    | {"value":"21-04-1988"} | {"value":"IT"}
2  | {"value":"Mary"}   | {"value":null}         | {"value":"ES"}
2  | {"value":"Mary"}   | {"value":null}         | {"value":"FR"}

Now I understand that this is exactly how a full outer join works: for id 1, the two name rows each match both country rows, giving four combinations. But I don't need that duplicated information (I want to keep the number of rows as small as possible).

Any clue?

CodePudding user response:

You can add a row-number column rn to each of the three dataframes using row_number(), then use it together with id as the join condition:

from pyspark.sql import functions as F, Window

w = Window.partitionBy("id").orderBy(F.lit(None))  # change this if you have a column to use for ordering

# number the rows within each id so matching positions can be joined
name = name.withColumn("rn", F.row_number().over(w))
dob = dob.withColumn("rn", F.row_number().over(w))
country = country.withColumn("rn", F.row_number().over(w))

result = (name.join(dob, ["id", "rn"], "full")
          .join(country, ["id", "rn"], "full")
          .drop("rn")
          )

result.show(truncate=False)
#+---+-----------------+---------------------+-------------+
#|id |name             |dob                  |country      |
#+---+-----------------+---------------------+-------------+
#|1  |{value -> bob}   |{value -> 21-04-1988}|{value -> IT}|
#|1  |{value -> Robert}|null                 |{value -> DE}|
#|2  |{value -> Mary}  |{value -> null}      |{value -> FR}|
#|2  |null             |null                 |{value -> ES}|
#+---+-----------------+---------------------+-------------+
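
One caveat, flagged in the comment above: row_number() over an unordered window assigns positions nondeterministically, so which name lines up with which country is arbitrary. If the dataframes carry no natural ordering column, a minimal sketch of a workaround is to pin the order first with monotonically_increasing_id(). The with_row_number helper below is hypothetical, not part of the original answer:

from pyspark.sql import functions as F, Window

def with_row_number(df, part_col="id", rn_col="rn"):
    # hypothetical helper: tag each row with an arbitrary-but-stable id,
    # then number the rows within each partition so equal positions can
    # be joined together
    df = df.withColumn("_ord", F.monotonically_increasing_id())
    w = Window.partitionBy(part_col).orderBy("_ord")
    return df.withColumn(rn_col, F.row_number().over(w)).drop("_ord")

result = (with_row_number(name)
          .join(with_row_number(dob), ["id", "rn"], "full")
          .join(with_row_number(country), ["id", "rn"], "full")
          .drop("rn")
          )

Note that monotonically_increasing_id() is stable within a run but depends on how the data is partitioned, so the pairing is reproducible only under the same partitioning; a real ordering column in your data is always preferable.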