What is the difference between spark.shuffle.partition and spark.repartition in spark?


What I understand is: when we repartition a dataframe with value n, the data continues to live on those n partitions until you hit a shuffle stage or another repartition/coalesce call. spark.sql.shuffle.partitions, on the other hand, only comes into play when you hit a shuffle stage, and the data then stays on that number of partitions until you call coalesce or repartition. Am I right? If yes, can anyone point out a striking difference?

CodePudding user response:

TLDR - repartition is invoked at the developer's discretion, while a shuffle happens when an operation logically demands it

I assume you're talking about config property spark.sql.shuffle.partitions and method .repartition.

Data distribution is an important aspect of any distributed environment: it not only governs parallelism but can also have adverse effects if the distribution is uneven. Repartitioning itself, however, is a costly operation, as it involves heavy movement of data (i.e. shuffling). The .repartition method explicitly repartitions the data into new partitions - that is, it increases or decreases the number of partitions based on your need. You can invoke it whenever you want.
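
To make the repartition side concrete, here is a minimal sketch (the session setup, the column name id, and the partition counts in the comments are illustrative assumptions, not from the question):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .appName("partitioning-sketch")   // name is illustrative
        .master("local[4]")               // illustrative local setup
        .getOrCreate();

// Hypothetical dataframe; spark.range gives an easy one to experiment with
Dataset<Row> df = spark.range(1_000_000).toDF("id");

df.rdd().getNumPartitions();   // e.g. 4 under local[4]
df = df.repartition(8);        // explicit request: full shuffle into 8 partitions
df.rdd().getNumPartitions();   // Returns 8
df = df.repartition(2);        // decreasing also works - still a full shuffle
df.rdd().getNumPartitions();   // Returns 2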

As opposed to this, spark.sql.shuffle.partitions is a configuration property that governs the number of partitions created when data movement happens as a result of operations like aggregations and joins. The Spark documentation describes it as follows:

Configures the number of partitions to use when shuffling data for joins or aggregations.

When you're performing transformations other than joins or aggregations, the above configuration has no impact on the number of partitions the new Dataframe will have.
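
Continuing the sketch above (the value 50 is illustrative, and the counts in the comments assume adaptive query execution isn't coalescing partitions), you can see that the property only matters once a shuffle actually happens:

spark.conf().set("spark.sql.shuffle.partitions", "50"); // illustrative value

df.filter("id > 10").rdd().getNumPartitions();     // unchanged - filter shuffles nothing
df.select("id").rdd().getNumPartitions();          // unchanged - no shuffle, property ignored
df.groupBy("id").count().rdd().getNumPartitions(); // Returns 50 - the aggregation shuffles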

Your confusion between the two stems from the fact that both operations involve shuffling. While that is true, the former (i.e. repartition) is an explicit operation where the user directs the framework to increase or decrease the number of partitions - which in turn causes shuffling - while in the case of joins/aggregations, the shuffling is caused by the operation itself.

Basically -

  • Joins/aggregations cause shuffling, which results in repartitioning
  • repartition is requested explicitly, so shuffling has to be done

Another method, coalesce, makes the difference clearer.

For reference, coalesce is a variant of repartition that can only lower the number of partitions, and the resulting partitions are not necessarily equal in size. Since it already knows the number of partitions is only to be decreased, it can do so with minimal shuffling (it simply merges adjacent partitions until the target number is met).

Consider a dataframe that has 4 partitions but data in only 2 of them, so you decide to reduce the number of partitions to 2. When using coalesce, Spark tries to achieve this without shuffling, or with minimal shuffling.

df.rdd().getNumPartitions(); // Returns 4 with size 0, 0, 2, 4
df=df.coalesce(2);           // Decrease partitions to 2
df.rdd().getNumPartitions(); // Returns 2 now with size 2, 4

So there was no shuffling involved. Compare that with the following:

df1.rdd().getNumPartitions() // Returns 4
df2.rdd().getNumPartitions() // Returns 8
df1.join(df2).rdd().getNumPartitions() // Returns 200

Because you've performed a join, the result will always have the number of partitions given by spark.sql.shuffle.partitions (200 by default).
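
So if, continuing the sketch, you change the property before the join (an illustrative value below), the output follows the new setting. Note that on Spark 3.x, adaptive query execution may coalesce these partitions further at runtime:

spark.conf().set("spark.sql.shuffle.partitions", "64"); // illustrative value
df1.join(df2).rdd().getNumPartitions();                 // Returns 64 instead of 200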
