I have a 320 MB comma-separated (CSV) file. To read it in, I use
pd.read_csv(loggerfile, header=2)
I have 8 GB of RAM (5 GB of which are free), so how can this ever throw an error?
File "C:\Users\me\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 443, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\me\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 235, in _read
return parser.read()
File "C:\Users\me\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 686, in read
ret = self._engine.read(nrows)
File "C:\Users\me\AppData\Local\Continuum\Anaconda\lib\site-packages\pandas\io\parsers.py", line 1130, in read
data = self._reader.read(nrows)
File "parser.pyx", line 727, in pandas.parser.TextReader.read (pandas\parser.c:7146)
File "parser.pyx", line 777, in pandas.parser.TextReader._read_low_memory (pandas\parser.c:7725)
File "parser.pyx", line 1788, in pandas.parser._concatenate_chunks (pandas\parser.c:21033)
MemoryError
EDIT:
Windows 7 Enterprise, 64-bit
Anaconda 2.0.1 x86 <- perhaps x86_64 would be better?
Still, the MemoryError occurs well before my memory cap is reached (as seen in Task Manager), even on a 32-bit machine with 3 GB of RAM.
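For reference, this is the kind of chunked read I could try instead, so the parser never has to hold the whole table at once (just a sketch; the chunk size of 100000 rows is an arbitrary placeholder, and loggerfile is the same path as above):

import pandas as pd

# read the CSV in pieces and stitch them together afterwards
chunks = pd.read_csv(loggerfile, header=2, chunksize=100000)
df = pd.concat(chunks, ignore_index=True)

But I would still like to understand why the straightforward read_csv call fails with so much memory apparently free.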