Left join on two DataFrames giving error cannot be applied to (org.apache.spark.sql.Dataset, org.apache.spark.sql.Column, String)


I am able to read the two DataFrames, but joining them gives me an error, even though the same join works in a notebook.

val s3Reader = new S3Reader(new S3Configuration, sparkSession, "mece_gaia_gaia_property_mapping")

val geoFeaturesPropertyDF = s3Reader.get(StorageFormat.PARQUET, "s3n:" + giNewBucket + geoInsightsPath + "/properties.parquet")

val meceGaiaGaia = s3Reader.get(StorageFormat.PARQUET, "s3:" + outputBucket + gaiaMeceGaiaPropertiesMappingPath)

val meceGaiaGaiaProperties = geoFeaturesPropertyDF.join(meceGaiaGaia, meceGaiaGaia("gaia_id") === geoFeaturesPropertyDF("gaia_id"), "left")

But when joining them I get the following error:

error: overloaded method value join with alternatives:
[ERROR]   (right: org.apache.spark.sql.Dataset[_],joinExprs: org.apache.spark.sql.Column,joinType: String)org.apache.spark.sql.DataFrame <and>
[ERROR]   (right: org.apache.spark.sql.Dataset[_],usingColumns: Seq[String],joinType: String)org.apache.spark.sql.DataFrame
[ERROR]  cannot be applied to (org.apache.spark.sql.Dataset, org.apache.spark.sql.Column, String)
[ERROR]             .join(meceGaiaGaia, meceGaiaGaia("gaia_id") === geoFeaturesPropertyDF("gaia_id"), "left")

The schemas for the two DataFrames:

meceGaiaGaia Schema -

org.apache.spark.sql.types.StructType = StructType(StructField(gaia_id,StringType,true), StructField(short_name,StringType,true), StructField(long_name,StringType,true), StructField(category,StringType,true), StructField(expe_property_id,IntegerType,true), StructField(airport_code,StringType,true), StructField(mece_gaia_id,StringType,true), StructField(mece_short_name,StringType,true), StructField(mece_long_name,StringType,true), StructField(mece_category,StringType,true), StructField(province_id,StringType,true), StructField(province,StringType,true), StructField(country_id,StringType,true), StructField(country,StringType,true), StructField(continent,StringType,true), StructField(super_region,StringType,true))

geoFeaturesPropertyDF Schema -

org.apache.spark.sql.types.StructType = StructType(StructField(gaia_id,StringType,true), StructField(source_id,StringType,true), StructField(type,StringType,true), StructField(status,StringType,true), StructField(creation_time,StringType,true), StructField(update_time,StringType,true), StructField(attributes,MapType(StringType,StringType,true),true), StructField(ancestors_id,StringType,true), StructField(hierarchy,ArrayType(MapType(StringType,StringType,true),true),true), StructField(categories,ArrayType(StringType,true),true), StructField(classifiers_set,MapType(StringType,ArrayType(MapType(StringType,StringType,true),true),true),true), StructField(short_name,StringType,true), StructField(long_name,StringType,true), StructField(ancestors,ArrayType(StringType,true),true), StructFi

Any help is appreciated
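For context, here is a minimal, self-contained sketch (not the original S3Reader pipeline; the SparkSession setup and sample rows are made up for illustration) showing that the three-argument join(right, joinExprs, joinType) overload from the error message compiles when both sides are plain DataFrames produced by the same Spark build. The raw Dataset with no type parameter in the compiler output may hint that the Dataset returned by S3Reader is bound to a different Spark artifact on the classpath.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("join-check").master("local[*]").getOrCreate()
import spark.implicits._

// Two toy DataFrames sharing a gaia_id column, mirroring the real schemas
val left  = Seq(("g1", "Paris"), ("g2", "Rome")).toDF("gaia_id", "short_name")
val right = Seq(("g1", "EMEA")).toDF("gaia_id", "super_region")

// Same shape as the failing call: if this compiles but the S3Reader-based code
// does not, the mismatch is in what S3Reader.get returns, not in the join itself
val joined = left.join(right, left("gaia_id") === right("gaia_id"), "left")
joined.show()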

CodePudding user response:

val meceGaiaGaiaProperties =
  geoFeaturesPropertyDF.join(meceGaiaGaia,
    geoFeaturesPropertyDF("gaia_id") === meceGaiaGaia("gaia_id"),
    "left")

CodePudding user response:

I updated the code to use sparkSession.read.parquet instead of S3Reader, and that worked:

val geoFeaturesPropertyDF = sparkSession.read.parquet("s3n:" + giNewBucket + geoInsightsPath + "/properties.parquet")

val meceGaiaGaia = sparkSession.read.parquet("s3:" + outputBucket + gaiaMeceGaiaPropertiesMappingPath)

val meceGaiaGaiaProperties = geoFeaturesPropertyDF.join(meceGaiaGaia, meceGaiaGaia("gaia_id") === geoFeaturesPropertyDF("gaia_id"), "left")
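A small follow-up (just a sketch, not part of the original fix): because both inputs have a gaia_id column, the joinExprs form keeps two gaia_id columns in the result. The second overload listed in the error message, join(right, usingColumns, joinType), joins on the column name and keeps a single gaia_id:

// Alternative using the (right, usingColumns, joinType) overload; the output
// contains only one gaia_id column instead of two
val meceGaiaGaiaPropertiesByName =
  geoFeaturesPropertyDF.join(meceGaiaGaia, Seq("gaia_id"), "left")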