Split a single column into multiple columns in Spark using Java


I want to create multiple columns from a single column in Spark with Java. I have tried multiple approaches, including the answer to this question given in Scala, but I can't seem to make it work in Java.

For example, I have this column containing a very long colon-delimited sequence (about 100 values per row):

+---------------------------------+
|data                             |
+---------------------------------+
|1111:1111:1111:2222:6666:1111....|
|ABC2:XYZ2:GDH2:KLN2:JUL2:HAI2....|
+---------------------------------+

I tried using IntStream.range(0, 16) to replicate the Scala answer in Java, but it does not work.

One example I tried that does not work is:

df.withColumn("temp", IntStream.range(0, 100).map(i -> split(col("temp"), ":").getItem(i).as(col("col" + i))));

I used a variation of the above but never got it to work.

I want to get this output:

+----+----+----+----+----+----+------+------+
|col1|col2|col3|col4|col5|col6|col...|col100|
+----+----+----+----+----+----+------+------+
|1111|1111|1111|2222|6666|1111|......|  9999|
|ABC2|XYZ2|GDH2|KLN2|JUL2|HAI2|......|  PAHD|
+----+----+----+----+----+----+------+------+

A for loop calling withColumn once per column is very slow (each call adds another projection and re-analyzes a growing query plan), so it is not feasible for ~100 columns.
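For context, the slow per-column pattern looks roughly like this (a minimal sketch, assuming df is the input Dataset<Row> with a string column named data; the names are illustrative):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.split;

// Each withColumn call re-analyzes the growing plan, so ~100
// iterations become very slow.
Dataset<Row> result = df;
for (int i = 0; i < 100; i++) {
    result = result.withColumn("col" + (i + 1), split(col("data"), ":").getItem(i));
}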

Thank you.

CodePudding user response:

For anybody who encounters a similar problem, the solution is to use IntStream, map each index to a Column object, and finally collect them into a list of columns that can be passed to a single select.

Here is the answer:

import scala.collection.JavaConverters;
import java.util.List;
import org.apache.spark.sql.Column;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.split;

// Split the data string on ":" and alias each element as col0 ... col99.
List<Column> c = IntStream.range(0, 100)
        .mapToObj(i -> split(col("data"), ":").getItem(i).as("col" + i))
        .collect(Collectors.toList());

// Pass all columns to one select via the Seq<Column> overload.
df.select(JavaConverters.asScalaIteratorConverter(c.iterator()).asScala().toSeq());
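If you would rather skip the Scala conversion entirely, the Column... varargs overload of select also accepts a plain Java array (a minimal sketch under the same assumptions, reusing the list c built above):

// Alternative: the varargs overload avoids the Scala Seq conversion.
df.select(c.toArray(new Column[0]));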