I am trying to read a .csv file called ratings.csv from http://grouplens.org/datasets/movielens/20m/ — the file is 533.4 MB on my computer.

This is what I am writing in a Jupyter notebook:

import pandas as pd
ratings = pd.read_csv('./movielens/ratings.csv', sep=',')

The problem is that the kernel breaks or dies, asks me to restart, and keeps repeating the same thing. There is no error message. Can you please suggest an alternative way of solving this? It is as if my computer lacks the capacity to run this.

This works, but it keeps overwriting:

chunksize = 20000
for ratings in pd.read_csv('./movielens/ratings.csv', chunksize=chunksize):
    ratings.append(ratings)
ratings.head()

Only the last chunk is kept; the earlier ones are discarded.

Developer

2 Answers

You should use the chunksize parameter in read_csv when reading in your dataframe: it returns a TextFileReader object, which you can then pass to pd.concat to concatenate your chunks back into a single DataFrame.

chunksize = 100000
tfr = pd.read_csv('./movielens/ratings.csv', chunksize=chunksize, iterator=True)
df = pd.concat(tfr, ignore_index=True)
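A self-contained demonstration of the same pattern, using a few hypothetical rows in an in-memory buffer as a stand-in for ratings.csv:

```python
import io
import pandas as pd

# Hypothetical sample rows standing in for ratings.csv.
csv_data = io.StringIO(
    "userId,movieId,rating,timestamp\n"
    "1,31,2.5,1260759144\n"
    "1,1029,3.0,1260759179\n"
    "2,10,4.0,835355493\n"
    "2,17,5.0,835355681\n"
)

# Collect each chunk in a list instead of overwriting the loop variable,
# then concatenate once at the end.
chunks = []
for chunk in pd.read_csv(csv_data, chunksize=2):
    chunks.append(chunk)

ratings = pd.concat(chunks, ignore_index=True)
print(len(ratings))  # all rows survive, not just the last chunk
```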

If you just want to process each chunk individually, use:

chunksize = 20000
for chunk in pd.read_csv('./movielens/ratings.csv', 
                         chunksize=chunksize, 
                         iterator=True):
    do_something_with_chunk(chunk)
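For instance, an aggregate such as the mean rating can be computed chunk by chunk without ever holding the full file in memory (sample rows below are hypothetical stand-ins for ratings.csv):

```python
import io
import pandas as pd

# Hypothetical sample rows standing in for ratings.csv.
csv_data = io.StringIO(
    "userId,movieId,rating,timestamp\n"
    "1,31,2.5,1260759144\n"
    "1,1029,3.0,1260759179\n"
    "2,10,4.0,835355493\n"
    "2,17,5.0,835355681\n"
)

# Accumulate a running sum and count; only one chunk is in memory at a time.
total, count = 0.0, 0
for chunk in pd.read_csv(csv_data, chunksize=2):
    total += chunk["rating"].sum()
    count += len(chunk)

mean_rating = total / count
print(mean_rating)
```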
cs95
  • I have tried this; though it's not crashing, the kernel ran for more than 40 minutes without terminating, and I just cancelled it. How long should I expect 20M records to take to read? – Developer Aug 24 '17 at 21:23
  • @Developer Increased chunksize and set iterator=True. Try it again. – cs95 Aug 24 '17 at 21:36
  • Can you please assist with those edits? It is fast, but I have failed to append the data; every time it is overwritten @cOLDsLEEP – Developer Aug 25 '17 at 09:31
  • There is still an issue: now it only takes the first chunk; the other chunks are not recorded. There are 20M rows but that method only keeps 20K, just the first chunk @cOLDsLEEP – Developer Aug 25 '17 at 12:21
  • @Developer I would refer you to this: https://stackoverflow.com/questions/33642951/python-using-pandas-structures-with-large-csviterate-and-chunksize – cs95 Aug 25 '17 at 12:22
  • Also https://stackoverflow.com/questions/25962114/how-to-read-a-6-gb-csv-file-with-pandas – cs95 Aug 25 '17 at 12:22
Try it like this: 1) load with Dask, then 2) convert to pandas.

import pandas as pd
import dask.dataframe as dd
import time

t = time.perf_counter()  # time.clock() was removed in Python 3.8
df_train = dd.read_csv('../data/train.csv')  # lazy, parallel read
df_train = df_train.compute()  # materialize as a pandas DataFrame
print("load train:", time.perf_counter() - t)
Yury Wallet