Spark writing data back to HDFS

Time:11-19

I have a question about how Spark writes results after computation. I know that each executor writes its result back to HDFS or the local filesystem (depending on the cluster manager used) after it finishes working on its partitions.

This makes sense, because waiting for all executors to complete before writing the result back is not really required if you don't need any aggregation of results.

But how does the write operation work when the data needs to be sorted on a particular column (e.g. ID) in ascending or descending order?

Will Spark's logical plan sort partitions by their ID at each executor before computation even begins? In that case, any executor could finish first and start writing its result to HDFS, so how does the framework make sure the final result is sorted?

Thanks in advance

CodePudding user response:

From what I understood from this answer: https://stackoverflow.com/a/32888236/1206998, sorting is a process that shuffles all dataset items into "sorted" partitions, whose boundaries (the items on the borders) are percentile items taken from a sample of the dataset:

If we have the dataset [1, 5, 6, 8, 10, 20, 100] (in any order) and sort it into 3 partitions, we get:

  • partition 1 = [1, 5, 6] (sorted within the partition)
  • partition 2 = [8, 10] ( " )
  • partition 3 = [20, 100] ( " )

And thus, any later operations can be done on each partition independently, including writing.
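To make the mechanism concrete, here is a minimal single-machine sketch of that idea in plain Python (not Spark's actual implementation): sample the data to pick boundary values, route each item to the partition whose range contains it (the "shuffle" step), then sort each partition independently. Because the partition ranges are ordered, concatenating partition 0, 1, 2, ... is globally sorted no matter which "executor" finishes first. The function name and sampling details are illustrative assumptions.

```python
import bisect
import random

def range_partition_sort(items, num_partitions, sample_size=4):
    """Illustrative sketch of a Spark-style total sort via range partitioning.

    1. Sample the data and pick evenly spaced sample values as boundaries.
    2. Route each item to the partition whose range contains it (shuffle).
    3. Sort each partition independently -- any partition can be finished
       and written in any order; concatenating them is globally sorted.
    """
    random.seed(42)  # deterministic for the example
    sample = sorted(random.sample(items, min(sample_size, len(items))))
    step = max(1, len(sample) // num_partitions)
    boundaries = sample[step::step][: num_partitions - 1]
    partitions = [[] for _ in range(num_partitions)]
    for x in items:
        # bisect is monotonic in x, so larger items go to later partitions
        partitions[bisect.bisect_left(boundaries, x)].append(x)
    for p in partitions:  # each partition sorted independently
        p.sort()
    return partitions

parts = range_partition_sort([10, 100, 5, 20, 1, 8, 6], 3)
flat = [x for p in parts for x in p]
assert flat == [1, 5, 6, 8, 10, 20, 100]  # globally sorted
```

The real Spark `RangePartitioner` does the same thing at scale: it samples each input partition, computes boundaries, and shuffles rows accordingly before sorting within partitions.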

Keep in mind that:

  • Spark manages data in memory and, depending on configuration, writes partition data to local disk.
  • Writing is done per partition, but the output files (on a distributed FS like HDFS) stay hidden until all the data has been written. At least that is the case for the Parquet writer; I'm not sure about other writers.
  • As you would expect, sorting is an expensive operation.
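The "hidden until all data is done" behavior comes from the output commit protocol: tasks write into a temporary location and files are only moved into the final path when the job commits. Here is a minimal single-file sketch of that idea (the real Hadoop `FileOutputCommitter` that Spark's writers use is considerably more involved; the function name here is made up for illustration):

```python
import os
import tempfile

def commit_style_write(final_path, lines):
    """Write to a hidden temp file first, then atomically move into place.

    Readers of final_path never observe a half-written file -- roughly the
    idea behind the commit step in Hadoop/Spark output committers.
    """
    out_dir = os.path.dirname(final_path) or "."
    fd, tmp = tempfile.mkstemp(dir=out_dir, prefix="_temporary_")
    try:
        with os.fdopen(fd, "w") as f:
            for line in lines:
                f.write(line + "\n")
        os.replace(tmp, final_path)  # atomic rename = "commit"
    except Exception:
        os.remove(tmp)  # abort: leave no partial output behind
        raise
```

On HDFS the rename happens per output file under a `_temporary` directory, and a `_SUCCESS` marker is typically written once the whole job commits.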