
When I try to write the dataset to Parquet files, I get the error below:

18/11/05 06:25:43 ERROR FileFormatWriter: Aborting job null.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 84 in stage 1.0 failed 4 times, most recent failure: Lost task 84.3 in stage 1.0 (TID 989, ip-10-253-194-207.nonprd.aws.csp.net, executor 4): java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
        at org.apache.parquet.column.Dictionary.decodeToInt(Dictionary.java:48)
        at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt(OnHeapColumnVector.java:233)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

But when I call dataset.show(), I am able to view the data. I am not sure where to look for the root cause.

John Humanyun

3 Answers


There is an easier way to detect schema differences between Parquet files: use the mergeSchema option, which will report the inconsistent fields in the log.

Example code:

spark.read.option("mergeSchema", "true").parquet(fileList: _*)

Example log:

Caused by: org.apache.spark.SparkException: Failed to merge fields 'field1' and 'field1'. Failed to merge incompatible data types DoubleType and LongType
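
If you need to pinpoint which file carries the divergent type, here is a small sketch (assuming the same fileList as above) that prints each file's schema in turn:

fileList.foreach { path =>
  println(s"=== $path ===")              // label each file
  spark.read.parquet(path).printSchema() // schemas can then be compared by eye
}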
murat yildirim

I faced the same problem, and in my case it was due to schema differences between Parquet files:

Given this Parquet directory containing these files:

  • /user/user1/parquet_table/part-00000-1e73689f-69e5-471a-8510-1547d108fea3-c000.parquet
  • /user/user1/parquet_table/part-00000-276bf4c0-7214-4278-8131-53cd5339a50d-c000.parquet

When I try to coalesce them (in spark2-shell):

val parquetFileDF = spark.read.parquet("/user/user1/parquet_table/part-00000-*.parquet")
val parquetFileDFCoal = parquetFileDF.coalesce(8)
parquetFileDFCoal.write.parquet("/tmp/testTemp/0001")

I encounter this exception:

20/05/13 17:09:03 WARN scheduler.TaskSetManager: Lost task 5.0 in stage 4.0 (TID 116, node.localhost.localdomain, executor 70): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:272)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:191)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:190)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
...
Caused by: java.lang.UnsupportedOperationException: parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
at parquet.column.Dictionary.decodeToInt(Dictionary.java:48)
at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt(OnHeapColumnVector.java:233)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)

If you check each file using spark2-shell, you will probably find schema differences. Here:

scala> val parquetFileDF = spark.read.parquet("/user/user1/parquet_table/part-00000-1e73689f-69e5-471a-8510-1547d108fea3-c000.parquet")
parquetFileDF: org.apache.spark.sql.DataFrame = [root_id: string, father_id: string ... 7 more fields]

scala> parquetFileDF.printSchema()
root
|-- root_id: string (nullable = true)
|-- father_id: string (nullable = true)
|-- self_id: string (nullable = true)
|-- group_name: string (nullable = true)
|-- father_name: string (nullable = true)
|-- cle: string (nullable = true)
|-- value: integer (nullable = true)


scala> val parquetFileDF = spark.read.parquet("/user/user1/parquet_table/part-00000-276bf4c0-7214-4278-8131-53cd5339a50d-c000.parquet")
parquetFileDF: org.apache.spark.sql.DataFrame = [root_id: string, father_id: string ... 7 more fields]

scala> parquetFileDF.printSchema()
root
|-- root_id: string (nullable = true)
|-- father_id: string (nullable = true)
|-- self_id: string (nullable = true)
|-- group_name: string (nullable = true)
|-- father_name: string (nullable = true)
|-- cle: string (nullable = true)
|-- value: string (nullable = true)

You can see that the value field is an integer in one file but a string in the other. To fix it, you have to rewrite one of the files so that the column types match, as sketched below.
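
A minimal sketch of that fix, assuming the string values are all parseable as integers (the output path is illustrative):

import org.apache.spark.sql.functions.col

// Read the file whose `value` column is a string and cast it to integer,
// so its schema matches the other file's.
val fixedDF = spark.read
  .parquet("/user/user1/parquet_table/part-00000-276bf4c0-7214-4278-8131-53cd5339a50d-c000.parquet")
  .withColumn("value", col("value").cast("int"))

// Rewrite it; once both files agree on the type, the coalesce and write succeed.
fixedDF.write.mode("overwrite").parquet("/tmp/parquet_table_fixed")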

Damien Picard

Have you double-checked that there are no OutOfMemory errors in any log? Any chance you are using a data type not supported by Parquet?

Could you please share the corresponding source code, showing the schema definition (case class or otherwise) and the write?
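
For the second question, a quick self-check (a sketch; dataset stands for whatever Dataset you are writing) is to print the inferred schema and look for unexpected types before the write:

// Print every column with its Spark SQL type; a column inferred as
// string where an int is expected usually stands out here.
dataset.schema.fields.foreach(f => println(s"${f.name}: ${f.dataType}"))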