21

I've got a fairly simple job converting log files to parquet. It's processing 1.1TB of data (chunked into 64MB - 128MB files - our block size is 128MB), which is approx 12 thousand files.

Job works as follows:

val df = spark.sparkContext
  .textFile(s"$stream/$sourcetype")
  .map(_.split(" \\|\\| ").toList)
  .collect { case List(date, y, "Event") => MyEvent(date, y, "Event") }
  .toDF()

df.write.mode(SaveMode.Append).partitionBy("date").parquet(s"$path")

It collects the events with a common schema, converts to a DataFrame, and then writes out as parquet.

The problem I'm having is that this can create a bit of an IO explosion on the HDFS cluster, as it's trying to create so many tiny files.

Ideally I want to create only a handful of parquet files within the partition 'date'.

What would be the best way to control this? Is it by using 'coalesce()'?

How will that affect the number of files created in a given partition? Is it dependent on how many executors I have working in Spark? (Currently set at 100.)

user3030878
  • not related to the question, but you should not collect your data (first statement), rather use `map` on your `RDD` – Raphael Roth Jun 09 '17 at 14:12
  • @RaphaelRoth this collect is different. This is more like filter -> map https://github.com/apache/spark/blob/v2.1.1/core/src/main/scala/org/apache/spark/rdd/RDD.scala#L959 – eliasah Jun 09 '17 at 14:13
  • **@user3030878** how did you get `Spark` to write exactly 64 MB / 128 MB files? My `Spark` job gives tiny (1-2 MB each) files (no of files = default = 200). I cannot simply invoke `repartition(n)` to have approx 128 MB files each because `n` will vary greatly from one job to another. – y2k-shubham Feb 21 '19 at 11:18

6 Answers

17

You have to repartition your DataFrame to match the partitioning of the DataFrameWriter.

Try this:

df
.repartition($"date")
.write.mode(SaveMode.Append)
.partitionBy("date")
.parquet(s"$path")
Raphael Roth
  • How can I read parquet files from multiple root directories and merge them? – Visualisation App Aug 22 '18 at 12:54
  • @Raphael Doesn't `partitionBy(date)` reduce the partitions to the count of distinct dates? I think the DF has the same no. of partitions as the distinct date count. Am I correct? – shriyog Oct 31 '18 at 12:57
  • I also noted in the past that `df.repartition(n, $"date")` gives the same result. Is that just by chance? In general I find the API interface and such not so logical. – thebluephantom Feb 20 '19 at 18:00
6

In Python you can rewrite Raphael Roth's answer as:

(df
  .repartition("date")
  .write.mode("append")
  .partitionBy("date")
  .parquet("{path}".format(path=path)))

You might also consider adding more columns to .repartition to avoid problems with very large partitions:

(df
  .repartition("date", another_column, yet_another_column)
  .write.mode("append")
  .partitionBy("date")
  .parquet("{path}".format(path=path)))
4

The simplest solution would be to replace your actual partitioning with:

df
 .repartition(to_date($"date"))
 .write.mode(SaveMode.Append)
 .partitionBy("date")
 .parquet(s"$path")

You can also use more precise partitioning for your DataFrame, i.e. the day and maybe the hour within an hour range, and then be less precise for the writer. That actually depends on the amount of data.

You can reduce entropy by partitioning the DataFrame and then writing with the partition-by clause.
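
As an illustration of that idea, here is a minimal sketch (assuming `date` is a timestamp column so an hour can be derived from it, and the usual `spark.implicits._` import for the `$` syntax). The finer grain only affects the in-memory shuffle; the on-disk layout is still by `date`:

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.{to_date, hour}

// Repartition on day + hour so each shuffle partition holds a manageable slice,
// while the written layout stays partitioned by date only
// (this yields up to 24 files per date directory instead of one huge file).
df
  .repartition(to_date($"date"), hour($"date"))
  .write.mode(SaveMode.Append)
  .partitionBy("date")
  .parquet(s"$path")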

10465355
eliasah
3

I came across the same issue, and using coalesce solved my problem.

df
  .coalesce(3) // number of parts/files 
  .write.mode(SaveMode.Append)
  .parquet(s"$path")

For more information on using coalesce or repartition you can refer to this question: spark: coalesce or repartition

10465355
Jai Prakash
2

Duplicating my answer from here: https://stackoverflow.com/a/53620268/171916

This is working for me very well:

data.repartition(n, "key").write.partitionBy("key").parquet("/location")

It produces N files in each output partition (directory), and is (anecdotally) faster than using coalesce and (again, anecdotally, on my data set) faster than only repartitioning on the output.

If you're working with S3, I also recommend doing everything on local drives (Spark does a lot of file creation/rename/deletion during write-outs) and, once it's all settled, use hadoop FileUtil (or just the aws cli) to copy everything over:

import java.net.URI
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}
import org.apache.spark.sql.SparkSession
// ...
  // Copy a finished output directory from one filesystem to another
  // (e.g. local disk -> S3) using Hadoop's FileUtil.
  def copy(
      in: String,
      out: String,
      sparkSession: SparkSession
  ): Boolean = {
    FileUtil.copy(
      FileSystem.get(new URI(in), sparkSession.sparkContext.hadoopConfiguration),
      new Path(in),
      FileSystem.get(new URI(out), sparkSession.sparkContext.hadoopConfiguration),
      new Path(out),
      false, // deleteSource: keep the local copy
      sparkSession.sparkContext.hadoopConfiguration
    )
  }
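
A hypothetical call to that helper (the local staging path and bucket below are placeholders, not from the original post):

// Hypothetical paths: write to local disk first, then ship the finished output to S3.
copy("file:///tmp/spark-staging/output", "s3a://my-bucket/events/output", sparkSession)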
Narfanator
0

How about running a script like this as a MapReduce job, consolidating all the parquet files into one:

$ hadoop jar /usr/hdp/2.3.2.0-2950/hadoop-mapreduce/hadoop-streaming-2.7.1.2.3.2.0-2950.jar \
 -Dmapred.reduce.tasks=1 \
 -input "/hdfs/input/dir" \
 -output "/hdfs/output/dir" \
 -mapper cat \
 -reducer cat
sumitya
Jeff A.