How can I make a select function more performant in pyspark?

Time:12-07

When I use the following function, it takes up to 10 seconds to execute. Is there any way to make it run quicker?

from pyspark.sql import functions as f

def select_top_20(df, col):
    # Count rows per value of `col`, sorted by frequency (descending)
    most_data = df.groupBy(col).count().sort(f.desc("count"))
    # Keep the 20 most frequent values and drop the count column
    top_20_count = most_data.limit(20).drop("count")
    # Collect the values into a Python list on the driver
    top_20 = [row[col] for row in top_20_count.collect()]
    return top_20

CodePudding user response:

This is hard to answer in general; the code itself looks fine to me. It depends on how the input DataFrame was created:

  • If it was read directly from a data source (Parquet, a database, etc.), the cost is mostly I/O and there is not much you can do.
  • If the DataFrame went through some processing before this function is executed, inspect that part. Because of lazy evaluation, Spark redoes all of that processing from scratch every time you run this function (not just the commands listed inside it): reading the data from disk, the transformations, everything. Persisting or caching the DataFrame somewhere in between can speed things up considerably, as in the sketch below.
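
For example, here is a minimal sketch of caching the processed DataFrame once before calling the function. The Parquet path, the filter, and the column name are made-up placeholders for whatever your actual pipeline does:

from pyspark.sql import SparkSession, functions as f

spark = SparkSession.builder.getOrCreate()

# Hypothetical upstream pipeline -- replace with your own source and transformations
raw_df = spark.read.parquet("/data/events.parquet")
df = raw_df.filter(f.col("status") == "active")

# Materialize the processed DataFrame so that repeated actions
# (such as the collect() inside select_top_20) reuse the cached data
# instead of re-reading and re-processing everything from the source.
df = df.cache()
df.count()  # optional: trigger the cache eagerly

top_20 = select_top_20(df, "country")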