Something similar to "Spark - Group by Key then Count by Value" would let me emulate the Pandas df.series.value_counts() functionality in Spark:
The resulting object will be in descending order so that the first element is the most frequently-occurring element. Excludes NA values by default. (http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html)
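For reference, here is a minimal sketch of the obvious groupBy/count route I have in mind (the example DataFrame and the column name "category" are just placeholders):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("value_counts").getOrCreate()

# Placeholder data; "category" stands in for whatever column you care about.
df = spark.createDataFrame(
    [("a",), ("b",), ("a",), ("c",), ("a",), ("b",)],
    ["category"],
)

# Rough equivalent of pandas Series.value_counts():
# drop nulls (pandas excludes NA by default), group by the column,
# count rows per group, and sort by the count descending.
value_counts = (
    df.dropna(subset=["category"])
      .groupBy("category")
      .count()
      .orderBy(F.desc("count"))
)

value_counts.show()
```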
I am curious whether this can be achieved more cleanly or simply for DataFrames in Spark.