Scala Getting java.lang.ClassCastException: java.math.BigDecimal cannot be cast to java.lang.Double


My query is:

val demo = spark.sql(s"select PERCENTILE_APPROX(weight,array(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9)) from db.demo_input_data")
scala> demo.first.getList(0)
res0: java.util.List[Nothing] = [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]

My ultimate goal is to get data from the following command:

scala> val BenchmarkPercentile = ":10"
scala> demo.first.getList(0).map((x: Double) => x + BenchmarkPercentile)

When I run this command, I get the following error:

java.lang.ClassCastException: java.math.BigDecimal cannot be cast to java.lang.Double
  at scala.runtime.BoxesRunTime.unboxToDouble(BoxesRunTime.java:114)
  at $anonfun$1.apply(<console>:31)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.AbstractTraversable.map(Traversable.scala:104)
  ... 51 elided

On the other hand, with a different dataset that returns the following data, the same command works fine:

scala> val demo1 = spark.sql(s"select PERCENTILE_APPROX(weight,array(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9)) from db.demo_input_data1")
scala> demo1.first.getList(0)
res1: java.util.List[Nothing] = [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]

Please help me understand why this behaviour occurs.
NOTE: I have also issued the command below before running the above code:

scala> import scala.collection.JavaConversions._

CodePudding user response:

The problem here is that you need to handle the correct types. Your SQL query creates a DataFrame with a specific schema, and that schema depends on the type of the weight column in the source table: for a DECIMAL column, PERCENTILE_APPROX returns an array of java.math.BigDecimal, while for a DOUBLE column it returns an array of java.lang.Double, which is why the second dataset works. When you take the first row and read the first column as a java.util.List, you cross into the JVM (Scala types) world, and because the DataFrame API is untyped, unlike the Dataset API, you have to deal with that type mismatch manually:

 demo.first.getList(0).map((x: java.math.BigDecimal) => x + BenchmarkPercentile)
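
If you don't want to hard-code the element type (it changes between your two tables), a defensive sketch, not part of the original answer, is to go through java.lang.Number, which is a superclass of both java.math.BigDecimal and java.lang.Double:

 // Sketch: works whether the column is DECIMAL or DOUBLE,
 // because both box into java.lang.Number on the driver.
 val BenchmarkPercentile = ":10"
 demo.first.getSeq[Any](0).map {
   case n: java.lang.Number => n.doubleValue + BenchmarkPercentile   // e.g. "0.1:10"
 }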

Take a look at the Spark docs regarding types: https://spark.apache.org/docs/latest/sql-ref-datatypes.html
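
Another option, sketched here under the assumption that your Spark version supports casting array types, is to cast the result to ARRAY<DOUBLE> in the query itself, so the driver always receives doubles regardless of the source column's type:

 // Sketch: push the conversion into SQL so getSeq[Double] is always safe.
 val demoDouble = spark.sql(
   """select cast(PERCENTILE_APPROX(weight,
     |                              array(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9))
     |            as array<double>)
     |from db.demo_input_data""".stripMargin)

 demoDouble.first.getSeq[Double](0).map(x => x + BenchmarkPercentile)   // "0.1:10", ...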
