
I am reading a CSV file in PySpark as follows:

df_raw=spark.read.option("header","true").csv(csv_path)

However, the data file has quoted fields with embedded commas that should not be treated as delimiters. How can I handle this in PySpark? I know pandas can handle this, but can Spark? The version I am using is Spark 2.0.0.

Here is an example which works in Pandas but fails using Spark:

In [1]: import pandas as pd

In [2]: pdf = pd.read_csv('malformed_data.csv')

In [3]: sdf=spark.read.format("org.apache.spark.csv").csv('malformed_data.csv',header=True)

In [4]: pdf[['col12','col13','col14']]
Out[4]:
                    col12                                             col13  \
0  32 XIY "W"   JK, RE LK  SOMETHINGLIKEAPHENOMENON#YOUGOTSOUL~BRINGDANOISE
1                     NaN                     OUTKAST#THROOTS~WUTANG#RUNDMC

   col14
0   23.0
1    0.0

In [5]: sdf.select("col12","col13",'col14').show()
+------------------+--------------------+--------------------+
|             col12|               col13|               col14|
+------------------+--------------------+--------------------+
|"32 XIY ""W""   JK|              RE LK"|SOMETHINGLIKEAPHE...|
|              null|OUTKAST#THROOTS~W...|                 0.0|
+------------------+--------------------+--------------------+

The contents of the file :

    col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15,col16,col17,col18,col19
80015360210876000,11.22,X,4076710258,,,sxsw,,"32 YIU ""A""",S5,,"32 XIY ""W""   JK, RE LK",SOMETHINGLIKEAPHENOMENON#YOUGOTSOUL~BRINGDANOISE,23.0,cyclingstats,2012-25-19,432,2023-05-17,CODERED
61670000229561918,137.12,U,8234971771,,,woodstock,,,T4,,,OUTKAST#THROOTS~WUTANG#RUNDMC,0.0,runstats,2013-21-22,1333,2019-11-23,CODEBLUE
femibyte

4 Answers


I noticed that your problematic line has escaping that uses double quotes themselves:

"32 XIY ""W"" JK, RE LK"

which should be interpreted simply as

32 XIY "W" JK, RE LK

As described in RFC-4180, page 2 -

  1. If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote

That's what Excel does, for example, by default.
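Python's stdlib `csv` module also follows this convention by default, which makes for a quick sanity check on the field above (a minimal sketch; the sample line is abridged from the question's file):

```python
import csv
import io

# The problematic field from the question, escaped RFC-4180-style
# with doubled double quotes, followed by two more columns.
line = '"32 XIY ""W""   JK, RE LK",SOMETHINGLIKEAPHENOMENON#YOUGOTSOUL~BRINGDANOISE,23.0'

row = next(csv.reader(io.StringIO(line)))
# The embedded comma stays inside the first field:
# row[0] == '32 XIY "W"   JK, RE LK'
```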

In Spark (as of Spark 2.1), however, escaping is done by default in a non-RFC way, using a backslash (\). To fix this you have to explicitly tell Spark to use a double quote as the escape character:

.option("quote", "\"")
.option("escape", "\"")

This may explain why a comma character wasn't interpreted correctly when it was inside a quoted column.

Options for the Spark CSV format are not well documented on the Apache Spark site, but here's somewhat older documentation that I still find useful quite often:

https://github.com/databricks/spark-csv

Update Aug 2018: Spark 3.0 might change this behavior to be RFC-compliant. See SPARK-22236 for details.

Tagar
  • shouldn't the first argument to `option` be double-quoted instead of single-quoted? – Shafique Jamal Sep 12 '17 at 16:01
  • either way is correct. https://softwareengineering.stackexchange.com/questions/155176/single-quotes-vs-double-quotes – Tagar Sep 12 '17 at 16:48
  • With spark 2.3 this give following error: ```quote cannot be more than one character``` – Josemy Mar 14 '18 at 17:03
  • It's just one character - " (one double quote character, not two single quotes). Haven't tested, should be the same in Spark 2.3 – Tagar Mar 14 '18 at 17:46
  • Thank you, this is working for me. However I have one issue where a series of 3 space characters is saving as 4 double quotes... If I trim the value, I get null value / empty-string value behavior, which is 2 double quotes in my csv file. It looks to me like only values that contain commas and double quotes are wrapped in quotes. This seems ok, however it is not what I expected. It seems like either values should always be wrapped in the quote parameter, but if quotes are only used when necessary, I would expect null and empty-string values to save as just two consecutive commas. – Dudeman3000 Mar 03 '21 at 04:46
  • I have a entry with comma which is not between quotes but excel is able to parse it properly but spark is failing any work-around for that? – Eswar Chitirala Sep 09 '21 at 16:14
  • 1
    Thanks Tagar. I'm using Spark 3.3.1 and they are still using \ – Iraj Hedayati Mar 15 '23 at 15:44

For anyone doing this in Scala: Tagar's answer nearly worked for me (thank you!); all I had to do was escape the double quote when setting my option param:

.option("quote", "\"")
.option("escape", "\"")

I'm using Spark 2.3, so I can confirm Tagar's solution still seems to work the same under the new release.

Allie Rogers
  • The thing is that correctly created comma-separated files (CSV) should quote and escape columns that contain the separator in their content. Make sure you open the CSV in a text editor (not Excel) to verify whether this is the case. – MeHow89 May 27 '22 at 11:19

For anyone whose parse is still not working after using Tagar's solution:

In Pyspark 3.1.2, `.option("quote", "\"")` is the default, so that part is not necessary. However, in my case I have data with multiple lines, so Spark was unable to tell a \n inside a single data point from the \n at the end of a row. Using `.option("multiLine", True)` together with `.option("escape", "\"")` solved my issue. So generally it's better to use the multiLine option by default.

Eswar Chitirala
  • I wonder if this has performance implications, since Spark then cannot split multiline values across the workers? – Triamus Sep 10 '21 at 14:06
  • I am not sure whether Spark can split multiline values across the workers, but if you are sure your data has no multiline values, maybe you don't need it. In my case I am dealing with text data, so I can never be sure my data doesn't contain '\n'. – Eswar Chitirala Sep 12 '21 at 02:36
  • 1
    Indeed it has as this nice article shows https://kokes.github.io/blog/2018/05/19/spark-sane-csv-processing.html. – Triamus Sep 13 '21 at 04:57

A delimiter (comma) inside quotes will be ignored by default. Spark SQL has a built-in CSV reader as of Spark 2.0.

df = (session.read
      .option("header", "true")
      .csv("csv/file/path"))

More about the CSV reader here.

mrsrinivas