Here's the code I have, which works perfectly fine on my friend's computer:

#!/usr/bin/python

import pandas as pd

df = pd.read_csv("report.csv")
df = df.drop("Agent Name", axis=1)
df.to_csv("agent_report_updated.csv")

Here's the error I receive on mine:

Traceback (most recent call last):
  File "./agent_calls_report.py", line 10, in <module>
    df = pd.read_csv("report.csv")
  File "/usr/lib/python3.7/site-packages/pandas/io/parsers.py", line 678, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/usr/lib/python3.7/site-packages/pandas/io/parsers.py", line 446, in _read
    data = parser.read(nrows)
  File "/usr/lib/python3.7/site-packages/pandas/io/parsers.py", line 1036, in read
    ret = self._engine.read(nrows)
  File "/usr/lib/python3.7/site-packages/pandas/io/parsers.py", line 1848, in read
    data = self._reader.read(nrows)
  File "pandas/_libs/parsers.pyx", line 876, in pandas._libs.parsers.TextReader.read
  File "pandas/_libs/parsers.pyx", line 891, in pandas._libs.parsers.TextReader._read_low_memory
  File "pandas/_libs/parsers.pyx", line 945, in pandas._libs.parsers.TextReader._read_rows
  File "pandas/_libs/parsers.pyx", line 932, in pandas._libs.parsers.TextReader._tokenize_rows
  File "pandas/_libs/parsers.pyx", line 2112, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 34 fields in line 3, saw 35

Any idea why this would work on one computer and not another? Edit: I've confirmed that we are using the same versions of both Python (3.7.1) and Pandas, the only difference is that he has a Mac while I'm on Linux.

1 Answer

I believe this is an encoding problem.

Try this:

import pandas as pd
df = pd.read_csv("report.csv", encoding='cp1252')
df = df.drop("Agent Name", axis=1)
df.to_csv("agent_report_updated.csv")

There are other encoding options; you can try utf-8 instead of cp1252. Here is a list of the encodings you can use.
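If it turns out to be a malformed row rather than an encoding problem, a quick way to confirm is to count the fields on each line with the standard csv module. A minimal diagnostic sketch, using a tiny inline sample in place of report.csv (the sample data here is invented for illustration):

import csv, io

# Demo data standing in for report.csv: the third line has one extra
# field, which is exactly what the ParserError complains about
# ("Expected 34 fields in line 3, saw 35").
data = "a,b,c\n1,2,3\n4,5,6,7\n"

field_counts = [len(row) for row in csv.reader(io.StringIO(data))]
print(field_counts)  # a row with a different count is the bad line

Running the same loop over the real report.csv (open the file instead of the StringIO) points you straight at the offending line number.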

Aditya Lahiri
  • Appreciate the answer, but it doesn't look like that resolved it. I'm starting to think this is either a bug with Pandas, or something changed in 3.7.1 that's breaking the interaction. I tried this using 3.7.0 and it worked okay. – Some Dude From the Internet Nov 28 '18 at 23:49
  • 1
    I see, did you try applying some of the answers here? https://stackoverflow.com/questions/18039057/python-pandas-error-tokenizing-data – Aditya Lahiri Nov 28 '18 at 23:53
  • I had tried a few, but the comment there about deleting a particular column ended up being the correct answer. Thank you for your help! – Some Dude From the Internet Nov 29 '18 at 00:03
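For later readers: the linked thread also suggests telling pandas to skip malformed rows instead of raising. A hedged sketch of that workaround, using an inline demo CSV in place of report.csv (note on_bad_lines requires pandas >= 1.3; older versions used error_bad_lines=False instead):

import io
import pandas as pd

# Demo CSV where the third line has an extra field;
# on_bad_lines="skip" drops such rows instead of raising ParserError.
data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10\n"
df = pd.read_csv(io.StringIO(data), on_bad_lines="skip")
print(len(df))  # the malformed row is silently skipped

This silently discards data, so it is best used only to get the file loaded while you track down which column or row is actually broken.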