Visibility of temporary tables and database tables in Spark SQL: is it possible to make a nested query?


I have a DataFrame registered as a temporary table:

val dailySummariesDfVisualize = dailySummariesDf.orderBy("event_time")
dailySummariesDfVisualize.registerTempTable("raw")

I can do some extraction from it with Spark SQL:

val df = sqlContext.sql("SELECT * FROM raw")
df.show()

And the output works. Then I'd like to run a nested query against the temporary table inside the JDBC database query, like this:

val dailySensorData =
  getDFFromJdbcSource(
    SparkSession.builder().appName("test").master("local").getOrCreate(),
    s"SELECT * FROM values WHERE time in (SELECT event_time FROM raw) limit 1000000")
    .persist(StorageLevel.MEMORY_ONLY_SER)

dailySensorData.show(400, false)

And here I get the exception:

org.postgresql.util.PSQLException: ERROR: relation "raw" does not exist
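
This PSQLException comes from PostgreSQL itself: the query string passed to getDFFromJdbcSource is shipped verbatim to the database, which knows nothing about Spark's catalog, so the temp table raw simply does not exist there. The helper isn't shown in the question, but presumably it wraps a standard JDBC read along these lines (a sketch only; the connection options are placeholders):

import org.apache.spark.sql.{DataFrame, SparkSession}

// Sketch of a helper like getDFFromJdbcSource: the query is wrapped as a
// subquery and executed entirely on the PostgreSQL server, so Spark-side
// temp views such as "raw" are invisible to it.
def getDFFromJdbcSource(spark: SparkSession, query: String): DataFrame =
  spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/mydb") // placeholder URL
    .option("user", "user")                                 // placeholder credentials
    .option("password", "password")
    .option("dbtable", s"($query) AS jdbc_subquery")        // runs on the database server
    .load()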

If I try to execute it inside sqlContext.sql(), like this:

val df = sqlContext.sql("SELECT * FROM values WHERE time in (SELECT event_time FROM raw)")
df.show()

I get:

org.apache.spark.sql.AnalysisException: Table or view not found: values; line 1 pos 14;
'Project [*]
+- 'Filter 'time IN (list#4967 [])
   :  +- 'Project ['event_time]
   :     +- 'UnresolvedRelation [raw]
   +- 'UnresolvedRelation [values]

  at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:106)

It looks like values (the real JDBC table) is not visible from Spark SQL, just as raw (the temporary table) is not visible from the JDBC query: sqlContext.sql() only sees tables and views registered in Spark's catalog, and values was never registered there. How can I use a temp table in nested queries?

UPD

Following mazaneicha's advice, I have tried registering both tables as temp views (retrieving all the values here, since I cannot restrict them with a nested query):

val dailySummariesDfVisualize = dailySummariesDf.orderBy("event_time")
dailySummariesDfVisualize.createOrReplaceTempView("raw")

getDFFromJdbcSource(
    SparkSession.builder().appName("test").master("local").getOrCreate(),
    s"SELECT * FROM values")
  .createOrReplaceTempView("values")

val df = sqlContext.sql("SELECT * FROM values WHERE time in (SELECT event_time FROM raw)")
df.explain(true)

and here is the query plan:

== Parsed Logical Plan ==
'Project [*]
+- 'Filter 'time IN (list#5475 [])
   :  +- 'Project ['event_time]
   :     +- 'UnresolvedRelation [raw]
   +- 'UnresolvedRelation [values]

== Analyzed Logical Plan ==
devicename: string, value: double, time: timestamp, coffee_machine_id: string, digital_twin_id: string, write_time: timestamp
Project [devicename#5457, value#5458, time#5459, coffee_machine_id#5460, digital_twin_id#5461, write_time#5462]
+- Filter time#5459 IN (list#5475 [])
   :  +- Project [event_time#4836]
   :     +- SubqueryAlias raw
   :        +- Sort [event_time#4836 ASC NULLS FIRST], true
   :           +- Relation[event_type#4835,event_time#4836,event_payload#4837,coffee_machine_id#4838,digital_twin_id#4839] JDBCRelation((SELECT * FROM events WHERE (event_time > '2021-03-31' or event_time < '2021-03-30') and event_type != 'Coffee_Capsule_RFID_Event' and event_type != 'Coffee_Cup_RFID_Event' limit 2000000) SPARK_GEN_SUBQ_48) [numPartitions=1]
   +- SubqueryAlias values
      +- Relation[devicename#5457,value#5458,time#5459,coffee_machine_id#5460,digital_twin_id#5461,write_time#5462] JDBCRelation((SELECT * FROM values) SPARK_GEN_SUBQ_65) [numPartitions=1]

== Optimized Logical Plan ==
Join LeftSemi, (time#5459 = event_time#4836)
:- Relation[devicename#5457,value#5458,time#5459,coffee_machine_id#5460,digital_twin_id#5461,write_time#5462] JDBCRelation((SELECT * FROM values) SPARK_GEN_SUBQ_65) [numPartitions=1]
+- Project [event_time#4836]
   +- Relation[event_type#4835,event_time#4836,event_payload#4837,coffee_machine_id#4838,digital_twin_id#4839] JDBCRelation((SELECT * FROM events WHERE (event_time > '2021-03-31' or event_time < '2021-03-30') and event_type != 'Coffee_Capsule_RFID_Event' and event_type != 'Coffee_Cup_RFID_Event' limit 2000000) SPARK_GEN_SUBQ_48) [numPartitions=1]

== Physical Plan ==
SortMergeJoin [time#5459], [event_time#4836], LeftSemi
:- *(2) Sort [time#5459 ASC NULLS FIRST], false, 0
:  +- Exchange hashpartitioning(time#5459, 200), true, [id=#1219]
:     +- *(1) Scan JDBCRelation((SELECT * FROM values) SPARK_GEN_SUBQ_65) [numPartitions=1] [devicename#5457,value#5458,time#5459,coffee_machine_id#5460,digital_twin_id#5461,write_time#5462] PushedFilters: [], ReadSchema: struct<devicename:string,value:double,time:timestamp,coffee_machine_id:string,digital_twin_id:str...
+- *(4) Sort [event_time#4836 ASC NULLS FIRST], false, 0
   +- Exchange hashpartitioning(event_time#4836, 200), true, [id=#1224]
      +- *(3) Scan JDBCRelation((SELECT * FROM events WHERE (event_time > '2021-03-31' or event_time < '2021-03-30') and event_type != 'Coffee_Capsule_RFID_Event' and event_type != 'Coffee_Cup_RFID_Event' limit 2000000) SPARK_GEN_SUBQ_48) [numPartitions=1] [event_time#4836] PushedFilters: [], ReadSchema: struct<event_time:timestamp>

Answer:

Following mazaneicha's advice, I was able to resolve this by producing the WHERE clause in Scala from the collected DataFrame rows, which are not numerous compared to the data the extraction query runs against (the plans above show Spark fetching both tables in full over JDBC and joining them itself, with nothing pushed down to PostgreSQL):

var collectedString = scala.collection.mutable.MutableList[String]()

for (row <- dailySummariesDfVisualize.collect()) {
  println(row(1))
  val start = row(1)
  val end = row(5)
  val timeSelection = s" time > ' ${start}' and  time < '${end}'"
  collectedString += timeSelection
}

val whereClause = collectedString.mkString(" or ")
println(whereClause)

val dailySensorData =
  getDFFromJdbcSource(
    SparkSession.builder().appName("test").master("local").getOrCreate(),
    s"SELECT * FROM values WHERE " + whereClause + " limit 1000000")
    .persist(StorageLevel.MEMORY_ONLY_SER)

dailySensorData.show(400, false)

This produces exactly the output I needed, with acceptable performance.

The formatted whereClause output looks something like:

time > ' 2021-03-24 07:06:34.0' and  time < '2021-03-24 07:08:34.0' or  time > ' 2021-03-24 07:07:41.0' and  time < '2021-03-24 07:09:41.0' or  time > ' 2021-03-24 07:07:43.0' and  time < '2021-03-24 07:09:43.0'

and so on
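
For reference, the same whereClause can be built without the mutable list. A minimal equivalent sketch, assuming the same row layout as above (range start at index 1, range end at index 5):

// Build the "or"-joined time-range predicates directly from the collected rows.
val whereClause = dailySummariesDfVisualize
  .collect()
  .map(row => s" time > ' ${row(1)}' and  time < '${row(5)}'")
  .mkString(" or ")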
