
I am trying to write a CSV file with Snappy compression from PySpark. The code I have written for it is:

df.write.format('csv').option("compression","snappy").option('header','true').save('R')
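
For reference, a minimal, self-contained sketch of the full job (the input path and DataFrame contents here are placeholders for my actual data):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snappy-csv").getOrCreate()

# Placeholder input; my real DataFrame is read from a sales dataset.
df = spark.read.option("header", "true").csv("sales.csv")

# The Snappy-compressed CSV write that triggers the error below.
df.write.format("csv") \
    .option("compression", "snappy") \
    .option("header", "true") \
    .save("R")
```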

But I encounter the following error every time:

Caused by: com.univocity.parsers.common.TextWritingException: Error writing row.
Internal state when error was thrown: recordCount=1809, recordData=[Asia, Indonesia, Office Supplies, Offline, C, 5/28/2011, 331864631, 6/5/2011, 4974, 651.21, 524.96, 3239118.5, 2611151.0, 627967.5]
    at com.univocity.parsers.common.AbstractWriter.throwExceptionAndClose(AbstractWriter.java:1055)
    at com.univocity.parsers.common.AbstractWriter.writeRow(AbstractWriter.java:834)
    at org.apache.spark.sql.catalyst.csv.UnivocityGenerator.write(UnivocityGenerator.scala:103)
    at org.apache.spark.sql.execution.datasources.csv.CsvOutputWriter.write(CsvOutputWriter.scala:46)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:175)
    at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithMetrics(FileFormatDataWriter.scala:85)
    at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithIterator(FileFormatDataWriter.scala:92)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:304)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1496)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:311)
    ... 9 more
Caused by: com.univocity.parsers.common.TextWritingException: Error writing row.
Internal state when error was thrown: recordCount=1809, recordCharacters=Asia,Indonesia,Office Supplies,Offline,C,5/28/2011,331864631,6/5/2011,4974,651.21,524.96,3239118.5,2611151.0,627967.5

    at com.univocity.parsers.common.AbstractWriter.throwExceptionAndClose(AbstractWriter.java:1040)
    at com.univocity.parsers.common.AbstractWriter.internalWriteRow(AbstractWriter.java:949)
    at com.univocity.parsers.common.AbstractWriter.writeRow(AbstractWriter.java:832)
    ... 17 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative.rawCompress(Ljava/nio/ByteBuffer;IILjava/nio/ByteBuffer;I)I
    at org.apache.hadoop.shaded.org.xerial.snappy.SnappyNative.rawCompress(Native Method)
    at org.apache.hadoop.shaded.org.xerial.snappy.Snappy.compress(Snappy.java:151)
    at org.apache.hadoop.io.compress.snappy.SnappyCompressor.compressDirectBuf(SnappyCompressor.java:282)
    at org.apache.hadoop.io.compress.snappy.SnappyCompressor.compress(SnappyCompressor.java:210)
    at org.apache.hadoop.io.compress.BlockCompressorStream.compress(BlockCompressorStream.java:149)
    at org.apache.hadoop.io.compress.BlockCompressorStream.finish(BlockCompressorStream.java:142)
    at org.apache.hadoop.io.compress.BlockCompressorStream.write(BlockCompressorStream.java:100)
    at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
    at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
    at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
    at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
    at com.univocity.parsers.common.input.WriterCharAppender.writeCharsAndReset(WriterCharAppender.java:153)
    at com.univocity.parsers.common.AbstractWriter.internalWriteRow(AbstractWriter.java:946)
    ... 18 more

The code works absolutely fine if I omit the compression and write an uncompressed CSV file.
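
One way to check whether the failure is specific to the Snappy native binding is to switch the codec to gzip, which (to my understanding) can fall back to the JDK's built-in `java.util.zip` implementation instead of a native library:

```python
# Diagnostic sketch: gzip can compress via the JDK's built-in Deflater, so if
# this write succeeds while the snappy one fails, the problem is isolated to
# the Snappy native library rather than the CSV writer itself.
df.write.format("csv") \
    .option("compression", "gzip") \
    .option("header", "true") \
    .save("R_gzip")
```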

Is there any fix for this particular error?

  • Have you tried setting env variable `HADOOP_USER_CLASSPATH_FIRST` to `true`? – vladsiv Nov 10 '21 at 08:04
  • @VladSiv, Sir, I added `os.environ["HADOOP_USER_CLASSPATH_FIRST"] = "true"` but it still shows the same error (see the note on timing after these comments) – Techie Baba Nov 10 '21 at 08:52
  • Hmm, strange. Please see this: [Apache Spark - Parquet / Snappy compression error](https://stackoverflow.com/questions/44063940/apache-spark-parquet-snappy-compression-error) – vladsiv Nov 10 '21 at 08:57
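
Update: one thing I am not sure about (an assumption on my part, not something I have confirmed) is whether setting the variable from inside the script even reaches the JVM. The driver JVM inherits the Python process's environment, so the variable would presumably have to be set before the SparkSession is created; a sketch of that ordering:

```python
import os

# Assumption: the variable must be in the environment before the driver JVM
# is launched; setting it after SparkSession.getOrCreate() has no effect on
# the already-running JVM.
os.environ["HADOOP_USER_CLASSPATH_FIRST"] = "true"

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snappy-csv").getOrCreate()
```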

0 Answers