
Something similar to Spark - Group by Key then Count by Value would allow me to emulate the Pandas df.series.value_counts() functionality in Spark:

The resulting object will be in descending order so that the first element is the most frequently-occurring element. Excludes NA values by default. (http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html)

I am curious whether this can be achieved more nicely / simply for data frames in Spark.
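
For reference, the RDD-style approach alluded to above amounts to roughly the following (a rough PySpark sketch; the toy DataFrame and the value column are only placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1,), (2,), (2,), (2,), (3,), (3,), (4,)], ["value"])

# countByValue() pulls the counts back to the driver as a plain (unsorted) dict,
# rather than keeping the result as a distributed, sortable DataFrame.
counts = df.rdd.map(lambda row: row["value"]).countByValue()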

Georg Heiler

1 Answer


It is just a basic aggregation, isn't it?

df.groupBy($"value").count.orderBy($"count".desc)

Pandas:

import pandas as pd

pd.Series([1, 2, 2, 2, 3, 3, 4]).value_counts()
2    3
3    2
4    1
1    1
dtype: int64

Spark SQL:

Seq(1, 2, 2, 2, 3, 3, 4).toDF("value")
  .groupBy($"value").count.orderBy($"count".desc)
  .show()
+-----+-----+
|value|count|
+-----+-----+
|    2|    3|
|    3|    2|
|    1|    1|
|    4|    1|
+-----+-----+
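
The $ column shorthand above is Scala-specific; from PySpark the same aggregation can be sketched with pyspark.sql.functions.col (assuming an active SparkSession named spark):

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Same toy data as the Scala example above.
df = spark.createDataFrame([(1,), (2,), (2,), (2,), (3,), (3,), (4,)], ["value"])

df.groupBy(col("value")).count().orderBy(col("count").desc()).show()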

If you want to include additional grouping columns (like "key"), just put them in the groupBy:

df.groupBy($"key", $"value").count.orderBy($"count".desc)
zero323
  • I'm trying to use this in a UDF to apply it to each row of a Dask dataframe; however, when I define the UDF I get a syntax error due to the $ symbol – Eduardo EPF Oct 15 '20 at 16:38