printing column names that are different in a dataframe


I have this dataframe

+-----+-------+-----------+-------------------+-----+
|empID|Zipcode|ZipCodeType|City               |State|
+-----+-------+-----------+-------------------+-----+
|1000 |704    |STANDARD   |PARC PARQUE        |PR   |
|1000 |704    |STANDARD   |PASEO COSTA DEL SUR|PR   |
|1001 |709    |STANDARD   |BDA SAN LUIS       |PR   |
|1001 |76166  |UNIQUE     |CINGULAR WIRELESS  |TX   |
|1002 |76177  |STANDARD   |FORT WORTH         |TX   |
|1002 |76177  |STANDARD   |FT WORTH           |TX   |
|1003 |704    |STANDARD   |URB EUGENE RICE    |PR   |
|1003 |85209  |STANDARD   |MESA               |AZ   |
|1004 |85210  |STANDARD   |MESA               |AZ   |
|1004 |32046  |STANDARD   |HILLIARD           |FL   |
+-----+-------+-----------+-------------------+-----+

For each empID, I need to print the column names for which the values are different:

+-----+---------------------------------+
|empID|nonMatchingColumnNames           |
+-----+---------------------------------+
|1002 |City                             |
|1000 |City                             |
|1001 |State, City, ZipCodeType, Zipcode|
|1003 |State, City, Zipcode             |
|1004 |State, City, Zipcode             |
+-----+---------------------------------+

The strategy I have taken is to collect_set the values of every column per empID, gather those sets into a struct, and then print a column's name whenever the size of its set is > 1. Here is my code:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}
import org.apache.spark.sql.functions.{col, collect_set, from_json, struct}
import spark.implicits._ // for toDF on the resulting RDD

val schema = new StructType()
  .add("empID", IntegerType, true)
  .add("Zipcode", StringType, true)
  .add("ZipCodeType", StringType, true)
  .add("City", StringType, true)
  .add("State", StringType, true)
    
val idColumn = "empID"
    
val dfJSON = dfFromText.withColumn("jsonData",from_json(col("value"),schema))
  .select("jsonData.*")
    
dfJSON.printSchema()
dfJSON.show(false)
    
// For every non-id column, pair a collect_set aggregation with the alias it will carry
val aggMap = dfJSON.columns
  .filterNot(x => x == idColumn)
  .map(colName => (collect_set(colName).alias(s"${colName}_asList"), s"${colName}_asList"))
   
aggMap.foreach(println)
    
val aggMapColumns = aggMap.map(x => x._1)
    
val columnsAsList = dfJSON.groupBy(col(idColumn)).agg(aggMapColumns.head, aggMapColumns.tail : _ *)
    
columnsAsList.show(false)
    
// Pack all the collected sets into a single struct column so each row can be inspected as one value
val combinedDF = columnsAsList.select(col(idColumn), struct(
  aggMap.map(x => col(x._2)) : _ * ).alias("combined_struct")
)
    
combinedDF.printSchema()
combinedDF.show(false)
    
// Pair each non-id column name with its positional index inside combined_struct
val columnsToCompare = dfJSON.columns.filterNot(x => x == idColumn).zipWithIndex.map({ case (x,y) => (y,x)})
    
// For each row, keep the names of the columns whose collected set has more than one value
val output = combinedDF.rdd.map({row => {
  val empNo = row.getAs[Int](0)
  val combinedStruct: Row = row.getAs[AnyRef]("combined_struct").asInstanceOf[Row]

  val nonMatchingColumns = columnsToCompare.foldLeft(List[String]())((acc, item) => {
    val counts = combinedStruct.getAs[Seq[String]](item._1).length
    if (counts == 1) acc else item._2 :: acc
  })

  (empNo, nonMatchingColumns.mkString(", "))
}}).toDF(idColumn, "nonMatchingColumnNames")
    
output.show(false)

It works perfectly fine on my local machine, but when I port it to spark-shell (it is an ad hoc query) I get a NullPointerException while converting the DataFrame into an RDD and iterating through each item in the struct.

CodePudding user response:

You can use only Spark's built-in functions to get a string containing the list of columns whose values are not unique:

  • use countDistinct to determine whether a column contains several values for a specific empID
  • keep the name of the column when its distinct count is greater than 1, using when (which yields null otherwise)
  • apply this to every non-id column and gather the resulting expressions into an array using array
  • build a string from this array using concat_ws, which skips the null entries produced by when

The complete code is as below:

import org.apache.spark.sql.functions.{array, concat_ws, countDistinct, lit, when}

val output = dfJSON.groupBy("empID").agg(
  concat_ws(
    ", ",
    array(dfJSON.columns.filter(_ != "empID").map(c => when(countDistinct(c) > 1, lit(c))): _*)
  ).as("nonMatchingColumnNames")
)

And with your input dataframe, you get the following output:

+-----+---------------------------------+
|empID|nonMatchingColumnNames           |
+-----+---------------------------------+
|1002 |City                             |
|1000 |City                             |
|1001 |Zipcode, ZipCodeType, City, State|
|1003 |Zipcode, City, State             |
|1004 |Zipcode, City, State             |
+-----+---------------------------------+
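
If you want to sanity-check the aggregation directly in spark-shell without the JSON-parsing step, here is a minimal sketch. The names dfTest and result are introduced only for this example, the literal rows are just a subset of the sample data, and spark / spark.implicits._ are assumed to be the SparkSession and implicits already available in the shell:

import org.apache.spark.sql.functions.{array, concat_ws, countDistinct, lit, when}
import spark.implicits._

// Hypothetical standalone test: build a small version of the input directly from literals
val dfTest = Seq(
  (1000, "704", "STANDARD", "PARC PARQUE", "PR"),
  (1000, "704", "STANDARD", "PASEO COSTA DEL SUR", "PR"),
  (1001, "709", "STANDARD", "BDA SAN LUIS", "PR"),
  (1001, "76166", "UNIQUE", "CINGULAR WIRELESS", "TX")
).toDF("empID", "Zipcode", "ZipCodeType", "City", "State")

// Same aggregation as above: one when(countDistinct > 1) expression per non-id column
val result = dfTest.groupBy("empID").agg(
  concat_ws(
    ", ",
    array(dfTest.columns.filter(_ != "empID").map(c => when(countDistinct(c) > 1, lit(c))): _*)
  ).as("nonMatchingColumnNames")
)

result.show(false)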