
We're considering using Spark Structured Streaming on a project where the input and output are Parquet files in an S3 bucket. Is it possible to control the size of the output files somehow? We're aiming for output files of 10-100 MB. As I understand it, in the traditional batch approach we could control the output file sizes by adjusting the number of partitions according to the size of the input dataset. Is something similar possible in Structured Streaming?
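
For illustration, a simplified sketch of what such a batch job might look like (the bucket paths and the ~64 MB target size are placeholders; the partition count is derived from the total input size reported by the Hadoop FileSystem API):

import org.apache.hadoop.fs.Path
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("batch-file-sizing").getOrCreate()

// Placeholder paths -- replace with your own buckets.
val inputPath  = "s3a://my-bucket/input/"
val outputPath = "s3a://my-bucket/output/"

// Total size of the input files, via the Hadoop FileSystem API.
val fs = new Path(inputPath).getFileSystem(spark.sparkContext.hadoopConfiguration)
val inputBytes = fs.getContentSummary(new Path(inputPath)).getLength

// Derive a partition count so that each output file lands near ~64 MB,
// assuming output size is roughly proportional to input size.
val targetFileBytes = 64L * 1024 * 1024
val numPartitions = math.max(1L, inputBytes / targetFileBytes).toInt

spark.read.parquet(inputPath)
  .repartition(numPartitions)
  .write
  .mode("overwrite")
  .parquet(outputPath)

The partition count here is recomputed per run from the input size, which is exactly the knob that is not obviously available in a long-running streaming query.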


1 Answer


In Spark 2.2 or later, the best option is to set spark.sql.files.maxRecordsPerFile:

spark.conf.set("spark.sql.files.maxRecordsPerFile", n)

where n is tuned to reflect the average size of a row, so that n rows written to a single file land within the desired 10-100 MB range.
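
For example, a hedged sketch of how this could be wired into a Structured Streaming job writing Parquet to S3 (the paths and the ~100-byte average row size are assumptions; measure the real average from your existing output files):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("streaming-file-sizing").getOrCreate()

// Assumption: rows average ~100 bytes on disk, so capping each file at
// ~1,000,000 records keeps it near the 100 MB upper bound of the target.
val avgRowBytes = 100L
val targetFileBytes = 100L * 1024 * 1024
spark.conf.set("spark.sql.files.maxRecordsPerFile", targetFileBytes / avgRowBytes)

// Streaming file sources need an explicit schema; borrow it from the
// existing Parquet data with a one-off batch read.
val schema = spark.read.parquet("s3a://my-bucket/input/").schema

val query = spark.readStream
  .schema(schema)
  .parquet("s3a://my-bucket/input/")
  .writeStream
  .format("parquet")
  .option("path", "s3a://my-bucket/output/")
  .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
  .start()

query.awaitTermination()

Note that maxRecordsPerFile only caps the maximum number of rows per file; a small micro-batch will still produce small files, which is what the comments below touch on.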


user10938362
  • Thanks! Strangely, it's not documented in the official docs. It should eliminate files that are too big. Any ideas what to do about files that are too small? – r.gl Feb 14 '19 at 12:44
  • Also, I found this similar option `spark.sql.files.maxPartitionBytes` here: https://spark.apache.org/docs/latest/sql-performance-tuning.html#other-configuration-options – r.gl Feb 14 '19 at 12:50
  • 1
    `maxPartitionBytes` is a reader option, not a writer one. As of your other question - coalescing / repartitioning is the only option, and really not a good or tunable one. – user10938362 Feb 15 '19 at 15:02
  • @r.gl now it is documented here https://spark.apache.org/docs/latest/configuration.html – scarface Jul 27 '21 at 09:04
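
To illustrate the coalescing/repartitioning workaround mentioned in the comment above, a minimal sketch (the paths and the partition count of 4 are assumptions):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("streaming-coalesce").getOrCreate()

val schema = spark.read.parquet("s3a://my-bucket/input/").schema

// Coalesce each micro-batch down to a fixed number of partitions before
// the sink writes it, so every trigger emits at most that many Parquet
// files. The count is fixed for the lifetime of the query, which is why
// it is hard to tune against a varying per-trigger data volume.
val query = spark.readStream
  .schema(schema)
  .parquet("s3a://my-bucket/input/")
  .coalesce(4)
  .writeStream
  .format("parquet")
  .option("path", "s3a://my-bucket/output/")
  .option("checkpointLocation", "s3a://my-bucket/checkpoints/")
  .start()

query.awaitTermination()

Because the coalesce factor is fixed when the query starts and cannot react to how much data each trigger actually delivers, it is, as noted in the comment, not a very tunable control; on Spark 2.4+, foreachBatch offers a per-micro-batch alternative at the cost of writing the sink logic yourself.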