Process JSON Data with Spark Scala


How can I process JSON data in this format with Spark Scala?

Note: for each ID there may be more than one Party.

{
    "123": {
        "Party1": {
            "FIRSTNAMEBEN": "ABC",
            "ALIASBEN": "",
            "RELATIONSHIPTYPE": "ABC, FGHIJK LMN",
            "DATEOFBIRTH": "7/Oct/1969"
        },
        "Party2": {
            "FIRSTNAMEBEN": "ABCC",
            "ALIASBEN": "",
            "RELATIONSHIPTYPE": "ABC, FGHIJK LMN",
            "DATEOFBIRTH": "7/Oct/1969"
        }
    },
    "GeneratedTime": "2022-01-30 03:09:26"
}

Expected Output:

 ID   FIRSTNAMEBEN  ALIASBEN  RELATIONSHIPTYPE  DATEOFBIRTH
 123  ABC           Null      ABC, FGHIJK LMN   7/Oct/1969
 123  ABCC          Null      ABC, FGHIJK LMN   7/Oct/1969
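For reference, the flattening being asked for can be sketched without Spark using Python's standard `json` module (a minimal sketch: the field names come from the JSON above, and treating non-object top-level entries such as `GeneratedTime` as metadata to skip is an assumption about the input):

```python
import json

raw = """{
    "123": {
        "Party1": {"FIRSTNAMEBEN": "ABC", "ALIASBEN": "",
                   "RELATIONSHIPTYPE": "ABC, FGHIJK LMN", "DATEOFBIRTH": "7/Oct/1969"},
        "Party2": {"FIRSTNAMEBEN": "ABCC", "ALIASBEN": "",
                   "RELATIONSHIPTYPE": "ABC, FGHIJK LMN", "DATEOFBIRTH": "7/Oct/1969"}
    },
    "GeneratedTime": "2022-01-30 03:09:26"
}"""

rows = []
for key, value in json.loads(raw).items():
    # Top-level entries that are not objects (e.g. GeneratedTime) are metadata, not IDs.
    if not isinstance(value, dict):
        continue
    # One output row per party under the ID; empty strings become None (shown as Null).
    for party in value.values():
        rows.append({"ID": key, **{k: (v or None) for k, v in party.items()}})

for r in rows:
    print(r)
```

This mirrors the expected output: one row per party, keyed by the enclosing ID.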

CodePudding user response:

You can achieve this as follows:

 import org.apache.spark.sql.functions._

 // Read the whole file as a single JSON document
 val df = spark.read.option("multiline", true).json("...json")

 val df1 = df
   .select(explode(array("123.*")) as "level1")
   .withColumn("ID", lit(df.columns(0)))
   .withColumn("FIRSTNAMEBEN", col("level1.FIRSTNAMEBEN"))
   .withColumn("ALIASBEN", col("level1.ALIASBEN"))
   .withColumn("RELATIONSHIPTYPE", col("level1.RELATIONSHIPTYPE"))
   .withColumn("DATEOFBIRTH", col("level1.DATEOFBIRTH"))
   .drop("level1")

Output:

scala> df1.show(false)
+---+------------+--------+----------------+-----------+
|ID |FIRSTNAMEBEN|ALIASBEN|RELATIONSHIPTYPE|DATEOFBIRTH|
+---+------------+--------+----------------+-----------+
|123|ABCC        |        |ABC, FGHIJK LMN |7/Oct/1969 |
|123|ABC         |        |ABC, FGHIJK LMN |7/Oct/1969 |
+---+------------+--------+----------------+-----------+

CodePudding user response:

Using PySpark, you can read the `123` column as a map so the party names don't need to be known in advance:

 from pyspark.sql.types import StructType, StructField, MapType, StringType
 from pyspark.sql.functions import explode

 schema = StructType([
     StructField('123', MapType(StringType(), StructType([
         StructField('ALIASBEN', StringType(), True),
         StructField('DATEOFBIRTH', StringType(), True),
         StructField('FIRSTNAMEBEN', StringType(), True),
         StructField('RELATIONSHIPTYPE', StringType(), True),
     ]), True))
 ])

 df = spark.read.json(path='/content/sample_data/new.json', multiLine=True, schema=schema)

 df.select(explode('123')).select('value.*').show()

CodePudding user response:

 val jsonstr = """{"123":{"Party1":{"FIRSTNAMEBEN":"ABC","ALIASBEN":"","RELATIONSHIPTYPE":"ABC, FGHIJK LMN","DATEOFBIRTH":"7/Oct/1969"},"Party2":{"FIRSTNAMEBEN":"ABCC","ALIASBEN":"","RELATIONSHIPTYPE":"ABC, FGHIJK LMN","DATEOFBIRTH":"7/Oct/1969"}},"GeneratedTime":"2022-01-30 03:09:26"}"""

 val rdd = sc.parallelize(Seq(jsonstr))

 spark.read.json(rdd)
   .selectExpr("stack(1,'123',`123`) as (ID,value)")
   .withColumn("Party1", col("value.Party1"))
   .withColumn("Party2", col("value.Party2"))
   .selectExpr("ID", "stack(2,'Party1',Party1,'Party2',Party2) as (key1,value1)")
   .select(col("ID"), col("value1.*"))
   .show

+---+--------+-----------+------------+----------------+
| ID|ALIASBEN|DATEOFBIRTH|FIRSTNAMEBEN|RELATIONSHIPTYPE|
+---+--------+-----------+------------+----------------+
|123|        | 7/Oct/1969|         ABC| ABC, FGHIJK LMN|
|123|        | 7/Oct/1969|        ABCC| ABC, FGHIJK LMN|
+---+--------+-----------+------------+----------------+