I am looking to explode a nested JSON file into a CSV file, parsing the nested JSON into rows and columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read the whole multi-line JSON document
df = spark.read.option("multiline", "true").json("sample1.json")
df.printSchema()
root
|-- pid: struct (nullable = true)
| |-- Body: struct (nullable = true)
| | |-- Vendor: struct (nullable = true)
| | | |-- RC: struct (nullable = true)
| | | | |-- Updated_From_Date: string (nullable = true)
| | | | |-- Updated_To_Date: string (nullable = true)
| | | |-- RD: struct (nullable = true)
| | | | |-- Supplier: struct (nullable = true)
| | | | | |-- Supplier_Data: struct (nullable = true)
| | | | | | |-- Days: long (nullable = true)
| | | | | | |-- Reference: struct (nullable = true)
| | | | | | | |-- ID: array (nullable = true)
| | | | | | | | |-- element: string (containsNull = true)
| | | | | | |-- Expected: long (nullable = true)
| | | | | | |-- Payments: long (nullable = true)
| | | | | | |-- Approval: struct (nullable = true)
| | | | | | | |-- ID: array (nullable = true)
| | | | | | | | |-- element: string (containsNull = true)
| | | | | | |-- Areas_Changed: struct (nullable = true)
| | | | | | | |-- Alternate_Names: long (nullable = true)
| | | | | | | |-- Attachments: long (nullable = true)
| | | | | | | |-- Classifications: long (nullable = true)
| | | | | | | |-- Contact_Information: long (nullable = true)
My Code:
df2 = (df.select(F.explode("pid").alias("pid"))
         .select("pid.*")
         .select(F.explode("Body").alias("Body"))
         .select("Body.*")
         .select(F.explode("Vendor").alias("Vendor"))
         .select("Vendor.*")
         .select(F.explode("RC").alias("RC"))
         .select("RC.*"))
Error: AnalysisException: cannot resolve 'explode(pid)' due to data type mismatch: input to function explode should be array or map type, not struct<Body:struct< .....
How can I parse the struct fields into columns? Any help will be much appreciated :)
CodePudding user response:
You can use the explode function only on map or array types. To access a struct type, just use the dot (.) operator.
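For example, against the schema in your question, a struct field like Updated_From_Date is reached purely with dot notation, while an array field such as Reference.ID is what explode is for (a minimal sketch; the alias names are just illustrative):
# Struct field: dot notation, no explode needed
df.select("pid.Body.Vendor.RC.Updated_From_Date").show()
# Array field: explode produces one row per element
df.select(
    F.explode("pid.Body.Vendor.RD.Supplier.Supplier_Data.Reference.ID").alias("reference_id")
).show(truncate=False)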
Let's say you want to get the columns under RC and RD; then the code would be as shown below.
df.select("pid.Body.Vendor.RC.*", "pid.Body.Vendor.RD.*")
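Note that RD.* still leaves nested struct and array columns (Supplier is itself a struct), and the CSV writer cannot store arrays or structs, so flatten all the way down before writing. A minimal sketch, assuming you want each ID array joined into a single comma-separated string (concat_ws, the aliases, and the output path are illustrative choices, not from the question):
from pyspark.sql import functions as F

flat = df.select(
    "pid.Body.Vendor.RC.*",
    F.col("pid.Body.Vendor.RD.Supplier.Supplier_Data.Days").alias("Days"),
    # CSV has no array type, so join each ID array into one string
    F.concat_ws(",", "pid.Body.Vendor.RD.Supplier.Supplier_Data.Reference.ID").alias("Reference_ID"),
    F.col("pid.Body.Vendor.RD.Supplier.Supplier_Data.Expected").alias("Expected"),
    F.col("pid.Body.Vendor.RD.Supplier.Supplier_Data.Payments").alias("Payments"),
    F.concat_ws(",", "pid.Body.Vendor.RD.Supplier.Supplier_Data.Approval.ID").alias("Approval_ID"),
    "pid.Body.Vendor.RD.Supplier.Supplier_Data.Areas_Changed.*",
)
flat.write.option("header", "true").csv("output_csv")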