
I want to read the file f in chunks into a dataframe. Here is part of the code that I used.

for i in range(0, maxline, chunksize):
    df = pandas.read_csv(f, sep=',', nrows=chunksize, skiprows=i)
    df.to_sql(member, engine, if_exists='append', index=False, index_label=None, chunksize=chunksize)

I get the error:

pandas.io.common.EmptyDataError: No columns to parse from file

The code works only when chunksize >= maxline (where maxline is the total number of lines in file f). However, in my case, chunksize < maxline.

Please advise a fix.

Geet

1 Answer


I think it is better to use the chunksize parameter in read_csv. Also, use concat with ignore_index=True to avoid duplicate values in the index:

import pandas as pd

chunksize = 5
# read_csv with chunksize returns a TextFileReader, an iterator of DataFrames
TextFileReader = pd.read_csv(f, chunksize=chunksize)

# concatenate all chunks into one DataFrame, rebuilding the index to avoid duplicates
df = pd.concat(TextFileReader, ignore_index=True)

See pandas docs.
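If the goal is to avoid building one huge DataFrame and instead write each chunk to the database as it is read, the chunks can be processed one at a time. A minimal sketch, assuming the member table name and the SQLAlchemy engine from the question:

import pandas as pd

chunksize = 10000  # rows per chunk; tune to available memory
for chunk in pd.read_csv(f, sep=',', chunksize=chunksize):
    # each chunk is an ordinary DataFrame, so it can be appended to the table directly
    chunk.to_sql(member, engine, if_exists='append', index=False)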

jezrael
  • Thanks! Now I get df as a TextFileReader. The next step of my code requires df to be a dataframe. How can I convert the TextFileReader to a dataframe? – Geet Sep 08 '16 at 07:54
  • My actual data is about 85GB. Wouldn't concatenation make the dataframe big? I want to use chunksize to read and write in chunks. Please advise. – Geet Sep 08 '16 at 08:00
  • 1
    Yes, it will be very big. Maybe you can check [question](http://stackoverflow.com/questions/14262433/large-data-work-flows-using-pandas). – jezrael Sep 08 '16 at 08:03
  • That looks very difficult for a novice like me. "df = pandas.read_csv(f, sep=',', nrows=chunksize, skiprows=i)" actually gives a dataframe. Can't this be modified to solve my problem? I have updated the question. Thanks! – Geet Sep 08 '16 at 08:07
  • I used your solution some time ago and got the same error. Unfortunately, I have never used `to_sql`, so I can't help you with it. – jezrael Sep 08 '16 at 08:20