
I am running a groupBy() on a dataset with several million records and want to save the resulting output (a PySpark GroupedData object) so that I can deserialize it later and resume from that point (running aggregations on top of it as needed).

df.groupBy("geo_city")
<pyspark.sql.group.GroupedData at 0x10503c5d0>

I want to avoid converting the GroupedData object into a DataFrame or RDD just to save it to a text file or in Parquet/Avro format (as that conversion is expensive). Is there some other efficient way to store the GroupedData object in a binary format for faster read/write? Perhaps some equivalent of pickle in Spark?


1 Answer


There is none, because GroupedData is not really a thing. It doesn't perform any operations on the data at all. It only describes how the actual aggregation should proceed when you execute an action on the result of a subsequent agg call.

You could probably serialize the underlying JVM object and restore it later, but it is a waste of time. Since groupBy only describes what has to be done, the cost of recreating the GroupedData object from scratch is negligible.
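If the goal is simply to avoid recomputing the expensive part, a more practical pattern is to run the aggregation once and persist the resulting DataFrame (e.g. to Parquet), then read it back later; the groupBy call itself can be recreated for free. A minimal sketch, assuming a SparkSession named spark and hypothetical input/output paths and aggregation:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical input path; replace with your own source.
    df = spark.read.parquet("/data/events.parquet")

    # groupBy is lazy and essentially free: it only records the grouping columns.
    grouped = df.groupBy("geo_city")

    # The expensive work happens when an aggregation is actually executed,
    # so persist the aggregated DataFrame rather than the GroupedData object.
    agg_df = grouped.agg(F.count("*").alias("n_records"))
    agg_df.write.mode("overwrite").parquet("/data/city_counts.parquet")

    # Later: read the saved result back and continue from there.
    restored = spark.read.parquet("/data/city_counts.parquet")

This saves the materialized result of the aggregation, which is the only part worth caching; redoing df.groupBy("geo_city") on a restored DataFrame costs nothing until an action is triggered.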
