I have a pretty straightforward PySpark SQL application (Spark 2.4.4, EMR 5.29) that reads a DataFrame with the schema (topic, year, count):
df.show()
+--------+----+------+
| topic|year| count|
+--------+----+------+
|covid-19|2017|606498|
|covid-19|2016|454678|
|covid-19|2011| 10517|
|covid-19|2008| 6193|
|covid-19|2015|510391|
|covid-19|2013| 29551|
I then need to sort by year and collect the counts into a list so that they are in ascending order by year:
from pyspark.sql.functions import collect_list

df.orderBy('year').groupBy('topic').agg(collect_list('count').alias('counts'))
The issue is that, since I order by year, the number of partitions that actually hold data in this stage is the number of distinct years in my dataset. I therefore get a crazy bottleneck stage in which only 15 out of 300 executors are utilised, leading to obvious memory and disk spills and eventually failing the stage with 'no space left on device' on the overpopulated partitions.
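For what it's worth, this is a rough way to see the skew directly by counting rows per partition right after the sort (the pid column name is just a throwaway I added for the check):

from pyspark.sql.functions import spark_partition_id

df.orderBy('year') \
    .withColumn('pid', spark_partition_id()) \
    .groupBy('pid').count() \
    .orderBy('pid').show(300)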
Even more interesting, I found a way to circumvent this that intuitively seems much less efficient, but actually works, since no bottleneck is created:
from pyspark.sql.functions import array, col, first

df.groupBy('topic') \
    .pivot('year', values=list(range(START, FINISH))) \
    .agg(first('count')) \
    .select('topic', array([col(str(c)) for c in range(START, FINISH)]).alias('counts'))
This produces my desired output: the pivot turns each year into its own column (filled via first('count')), and array(...) then packs those columns back into a single list in year order, i.e. an array of counts sorted by year.
Does anyone have an explanation or idea of why this happens, or how best to prevent it? I found this answer and this JIRA where it is basically suggested to 'add noise' to the sort key to avoid these skew-related issues.
I think it is worth mentioning that the pivot method is a better resolution than adding noise, at least whenever sorting by a column with a small range of values. I would appreciate any info on this and alternate implementations.
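For completeness, one alternative I have been considering (only a sketch, not benchmarked at this scale; pairs and cnt are placeholder names I made up) is to skip the global sort entirely and instead sort (year, count) structs within each group:

from pyspark.sql.functions import col, collect_list, sort_array, struct

df.groupBy('topic') \
    .agg(sort_array(collect_list(struct(col('year'), col('count').alias('cnt')))).alias('pairs')) \
    .select('topic', col('pairs.cnt').alias('counts'))

Since the ordering happens inside each group's aggregated array rather than via a global range partitioning on year, it shouldn't funnel everything into a few partitions, but I haven't compared it against the pivot trick.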