
I am reading a CSV file with 8 columns into a pandas DataFrame. The final column contains an error message, some of which contain commas. This causes the read to fail with: `ParserError: Error tokenizing data. C error: Expected 8 fields in line 21922, saw 9`

Is there a way to ignore all commas after the 8th field, rather than having to go through the file and remove excess commas?

Code to read file:

import pandas as pd
df = pd.read_csv('C:\\somepath\\output.csv')

Line that works:

061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,some message

Line that fails:

061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,longer message, with commas
FObersteiner
MikeS159
  • Updated with code and data format. – MikeS159 Feb 07 '18 at 16:26
  • See [pandas #2886](https://github.com/pandas-dev/pandas/issues/2886). This is an issue that hasn't been resolved yet. Unfortunately, I don't think there's a way around preprocessing. – 3novak Feb 07 '18 at 16:28
  • The original data is parsed from XML files. Do you think that putting each column in "quotes" would work? – MikeS159 Feb 07 '18 at 16:30
  • @MikeS159 quotes would not help, but if during parsing you can use a tab or semicolon delimiter between cells, then you can use your simple `pd.read_csv('filename')` with no issues. That would actually be ideal compared to having to deal with it later. – FatihAkici Feb 08 '18 at 16:46
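A minimal sketch of the delimiter idea from the last comment, assuming the XML parsing step can emit each row as a Python list (`rows` and the `output.csv` filename are made up for illustration):

```python
import csv

# hypothetical rows produced by the XML parsing step
rows = [
    ["061AE", "Active", "001", "2017_02_24 15_18_01", "00006", "1", "00013",
     "longer message, with commas"],
]

# write with a semicolon delimiter so commas inside the message field
# no longer collide with the separator
with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f, delimiter=";")
    writer.writerows(rows)

# the file can then be read back cleanly with pd.read_csv("output.csv", sep=";")
```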

4 Answers


You can use the `usecols` parameter of `read_csv` to limit which columns are read in. For example:

import pandas as pd
pd.read_csv(path, usecols=range(8))

if you only want to read the first 8 columns.

Blazina
  • Would that truncate the message, though? I have updated my question with an example, but essentially some of the message columns contain text with a comma in. I want to read the whole message and just ignore that comma as a delimiter. – MikeS159 Feb 07 '18 at 16:25
  • This would simply just read in columns 0-7 of your CSV file, if the long text was column 8 it would not read it in. I think I misunderstood your question! – Blazina Feb 07 '18 at 16:31

You can use `re.sub` to replace the first seven commas with another delimiter, say `'|'`, save the intermediate result in a `StringIO`, then parse that:

import pandas as pd
from io import StringIO
import re

for_pd = StringIO()
with open('MikeS159.csv') as mike:
    for line in mike:
        # replace only the first 7 commas, leaving any in the final field intact
        new_line = re.sub(r',', '|', line.rstrip(), count=7)
        print(new_line, file=for_pd)

for_pd.seek(0)

df = pd.read_csv(for_pd, sep='|', header=None)
print(df)

I put the two lines from your question into a file to get this output.

       0       1  2                    3  4  5   6  \
0  061AE  Active  1  2017_02_24 15_18_01  6  1  13   
1  061AE  Active  1  2017_02_24 15_18_01  6  1  13   

                             7  
0                 some message  
1  longer message, with commas  
Bill Bell
  • This is so neat! Both the `StringIO` and `re` tricks are very good. Thank you! – FatihAkici Feb 07 '18 at 20:26
  • @MikeS159: Best of luck! – Bill Bell Feb 08 '18 at 17:53
  • You're very welcome. But you shouldn't write, 'It worked' as if it's a surprise. :-) – Bill Bell Feb 13 '18 at 16:49
  • Is there a way to replace all commas in a certain number of columns? In my case I have 6 columns, and columns 2-6 contain 14-15 commas. I set the count to 15, which seems to work, but maybe there is a more elegant solution? – Tobitor Nov 26 '20 at 13:28
  • @Tobitor: To be honest, I haven't been doing much work recently (I'm retired). You would be better to post this as a question to get the latest ideas. – Bill Bell Nov 26 '20 at 16:32

You can try this workaround posted on the pandas issues page:

import csv
import pandas as pd

df = pd.read_csv('filename.csv', parse_dates=True, dtype=object, delimiter="\t",
                 quoting=csv.QUOTE_NONE, encoding='utf-8')

You can also preprocess the data, changing the first 7 commas (0th to 6th, both inclusive) to semicolons and leaving the ones after that as commas*, using something like:

to_write = []
with open("sampleCSV.csv", "r") as f:
    for line in f:
        # replace only the first 7 commas with semicolons;
        # any commas in the final field are left alone
        to_write.append(line.replace(",", ";", 7))

You can now read this `to_write` list into a pandas DataFrame like

data = pd.DataFrame(to_write)
data = pd.DataFrame(data[0].str.split(";").values.tolist())

or write it back into a CSV file and read it with pandas using a semicolon delimiter, e.g. `read_csv(csv_path, sep=';')`.
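As a rough sketch of that second route (using an in-memory buffer instead of a real file; the sample lines assume the first 7 commas were already replaced by semicolons):

```python
import io

import pandas as pd

# hypothetical preprocessed lines, first 7 commas already turned into ';'
to_write = [
    "061AE;Active;001;2017_02_24 15_18_01;00006;1;00013;some message",
    "061AE;Active;001;2017_02_24 15_18_01;00006;1;00013;longer message, with commas",
]

# parse on ';' so the comma inside the last field survives intact
df = pd.read_csv(io.StringIO("\n".join(to_write)), sep=";", header=None)
print(df.shape)  # (2, 8)
```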

I kinda drafted this quickly without rigorous testing, but it should give you some ideas to try. Please comment if it does or doesn't help, and I'll edit it.

*Another option is to delete all commas after the 7th and keep using the comma separator. Either way, the point is to differentiate the first 7 delimiters from the subsequent punctuation.
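The footnote's alternative, sketched on a single hypothetical line (dropping every comma after the 7th while keeping commas as the separator):

```python
# hypothetical raw line with a comma inside the last field
line = "061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,longer message, with commas"

parts = line.split(",")
# first 7 fields keep their comma separators; everything after is glued back
# together with the extra commas removed
fixed = ",".join(parts[:7]) + "," + "".join(parts[7:])
```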

FatihAkici

To add to @Tblaz's answer: if you use Google Colab, you can use this solution. In my case the extra comma was in column 24, so I only had to read 23 columns:

import pandas as pd
from google.colab import files
import io

uploaded = files.upload()
x_train = pd.read_csv(io.StringIO(uploaded['x_train.csv'].decode('utf-8')),
                      skiprows=1, usecols=range(23), header=None)
DINA TAKLIT