How to handle variable JSON field names in Spark

Time:03-28

I have a JSON log file (one JSON object per line, delimited by \n) and I need to read it with a Spark struct type, but in my txt file the first field name of every JSON record is different. How can I do it?

val elementSchema = new StructType()
.add("name",StringType,true)
.add("object_type",StringType,true)
.add("privilege",StringType,true)

val simpleSchema = new StructType()
.add("authorization_failure",StringType,true)
.add("catalog_objects",elementSchema,true)
.add("impersonator",StringType,true)
.add("network_address",StringType,true)
.add("query_id",StringType,true)
.add("session_id",StringType,true)
.add("sql_statement",StringType,true)
.add("start_time",StringType,true)
.add("statement_type",StringType,true)
.add("status",StringType,true)
.add("user",StringType,true)

val anaSchema = new StructType()
.add("saasd",StringType,true)

val config = new SparkConf()
config.set("spark.sql.shuffle.partitions","300")

val spark=SparkSession.builder().config(config).master("local[2]")
.appName("Example")
.getOrCreate()

val dataframe = spark.read
.json(s"/home/ogn/denemeler/big_data/impala_audit_spark/file/testa.txt")

dataframe.printSchema()

val df = dataframe.select(to_json(struct(dataframe.columns.map(col): _*)).alias("all"))

Expected output: every field parsed into the struct:

authorization_failure|catalog_objects|impersonator|network_address|query_id|session_id|sql_statement|start_time|statement_type|status|user|

The content of testa.txt (there are close to 3M JSON records in a single file):

{"1648039261379":{"query_id":"x","session_id":"da40931781b4b8ed:978bb8edb9177dbd","start_time":"2022-03-23 15:41:01.234826","authorization_failure":false,"status":"","user":"x","impersonator":null,"statement_type":"QUERY","network_address":"x","sql_statement":"y","catalog_objects":[{"name":"_impala_builtins","object_type":"DATABASE","privilege":"VIEW_METADATA"},{"name":"s","object_type":"TABLE","privilege":"SELECT"}]}}
{"1648039261510":{"query_id":"x","session_id":"344247956fada236:7d9c0930b7c51b9a","start_time":"2022-03-23 15:41:01.507023","authorization_failure":false,"status":"","user":"x","impersonator":null,"statement_type":"USE","network_address":"x","sql_statement":"t","catalog_objects":[{"name":"g","object_type":"DATABASE","privilege":"ANY"}]}}

CodePudding user response:

Step 1: read the JSON file as a plain text file using textFile:

val ds: Dataset[String] = spark.read.textFile("testa.txt")

Step 2: remove the outer JSON level using regexp_extract. You could also parse the JSON string, but I think this approach is faster.

import spark.implicits._
import org.apache.spark.sql.functions.regexp_extract

val ds2: Dataset[String] = ds.withColumn("value", regexp_extract('value, "\\{.*:(\\{.*\\})\\}", 1)).as[String]
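The regex can be sanity-checked in plain Scala before running it on the cluster. A minimal sketch (no Spark needed), applying the same pattern to a shortened version of one of the sample records:

```scala
// Same pattern as in the regexp_extract call above: group 1 captures the
// inner JSON object, dropping the varying timestamp key that wraps it.
val pattern = "\\{.*:(\\{.*\\})\\}".r

val line = """{"1648039261379":{"query_id":"x","status":""}}"""

// Group 1 is the payload object without the outer wrapper.
val inner = pattern.findFirstMatchIn(line).map(_.group(1))

println(inner) // Some({"query_id":"x","status":""})
```

Greedy matching backtracks to the last `:` that is followed by a `{`, so nested colons inside the payload do not break the extraction.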

Step 3: parse the strings into a DataFrame:

val df3: DataFrame = spark.read.json(ds2)

df3 now has the structure:

root
 |-- authorization_failure: boolean (nullable = true)
 |-- catalog_objects: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- name: string (nullable = true)
 |    |    |-- object_type: string (nullable = true)
 |    |    |-- privilege: string (nullable = true)
 |-- impersonator: string (nullable = true)
 |-- network_address: string (nullable = true)
 |-- query_id: string (nullable = true)
 |-- session_id: string (nullable = true)
 |-- sql_statement: string (nullable = true)
 |-- start_time: string (nullable = true)
 |-- statement_type: string (nullable = true)
 |-- status: string (nullable = true)
 |-- user: string (nullable = true)
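If the varying top-level key (an epoch-millisecond timestamp in the sample data) should be kept rather than discarded, the pattern can capture it as a second group. This is a hedged pure-Scala sketch of such a pattern, not part of the original answer; in Spark the same regex would be used with two regexp_extract calls (group 1 for the timestamp, group 2 for the payload) before spark.read.json:

```scala
// Capture both the timestamp key (group 1) and the payload (group 2).
val pattern = "\\{\"(\\d+)\":(\\{.*\\})\\}".r

val line = """{"1648039261510":{"query_id":"x","statement_type":"USE"}}"""

val parsed = pattern.findFirstMatchIn(line).map(m => (m.group(1), m.group(2)))

println(parsed)
// Some((1648039261510,{"query_id":"x","statement_type":"USE"}))
```

This way the event time survives as its own column instead of being thrown away with the wrapper.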