
I'm trying to load ~2GB of text files (approx 35K files) in my Python script. I'm getting a memory error around a third of the way through on page.read().

cFile_list = []   # accumulates the contents of every file in memory

for f in files:
    page = open(f)
    pageContent = page.read().replace('\n', '')
    page.close()

    cFile_list.append(pageContent)

I've never dealt with objects or processes of this size in Python. I checked some of the other Python MemoryError threads, but I couldn't find anything that fixes my scenario. Hopefully there is something out there that can help me out.

Greg
  • You'll want to read the input in chunks (a sketch of that pattern follows these comments). Take a look at the answer to this question: http://stackoverflow.com/questions/519633/lazy-method-for-reading-big-file-in-python – Kris K. Jun 23 '11 at 16:10
  • If you're using a 64-bit machine, try using a 64-bit Python build. – Wooble Jun 23 '11 at 16:12
  • I don't understand why you are loading the contents of all the files into cFile_list. What exactly do you want to do with the contents? Perhaps you want to save the contents of each file to another corresponding file after replacing '\n's with ''. If that is what you want to do, you can write the contents out to a new file right there in the for loop, and then you won't get any memory error no matter how many files you process. – Pushpak Dagade Jun 23 '11 at 16:57
  • @Kris K. I think it is not the size of any single file that is causing memory problems, but the size of the cFile_list object, which grows enormously on every loop iteration (see my previous comment). So reading in chunks won't help. In fact, the question itself seems vague. – Pushpak Dagade Jun 23 '11 at 17:00
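For reference, the lazy chunked-reading pattern described in the answer Kris K. links to looks roughly like this; the function name, chunk size, and file name below are illustrative, not from the original post:

    def read_in_chunks(file_obj, chunk_size=1024 * 1024):
        """Yield a file's contents piece by piece instead of reading it all at once."""
        while True:
            chunk = file_obj.read(chunk_size)
            if not chunk:
                break
            yield chunk

    with open('some_large_file.txt') as f:
        for chunk in read_in_chunks(f):
            cleaned = chunk.replace('\n', '')  # only chunk_size characters held in memory at a time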

3 Answers

2

You are trying to load too much into memory at once. This can be because of the process size limit (especially on a 32-bit OS), or because you don't have enough RAM.

A 64-bit OS (and 64-bit Python) would be able to do this given enough RAM, but perhaps you can simply change the way your program works so that not every page is in RAM at once.

What is cFile_list used for? Do you really need all the pages in memory at the same time?

John La Rooy
  • cFile_list is a big list of documents. It ends up becoming the training and test set for a Naive Bayes Classifier. What would the alternative be as far as not having everything in memory at the same time? – Greg Jun 23 '11 at 16:57
  • @Greg, can you change your program to loop through the filenames? For each filename, read the file, clean it up, feed it to the classifier, and close it. That way only one file needs to be in RAM at once. – John La Rooy Jun 23 '11 at 23:52
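A rough sketch of that one-file-at-a-time loop, assuming a generator that feeds documents to some classifier object; the classifier.train call is a hypothetical placeholder, not anything from the original post:

    def iter_documents(filenames):
        """Yield one cleaned-up document at a time, so only one file is in RAM at once."""
        for name in filenames:
            with open(name) as page:
                yield page.read().replace('\n', '')

    for document in iter_documents(files):
        classifier.train(document)  # hypothetical training call; substitute your own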
1

Consider using generators, if possible in your case:

file_list = []
for file_ in files:
    # store a generator that yields cleaned lines lazily, not the file's whole contents
    file_list.append(line.replace('\n', '') for line in open(file_))

file_list is now a list of generators, which is more memory-efficient than reading the whole contents of each file into a string. As soon as you need the whole string of a particular file, you can do

string_ = ''.join(file_list[i])

Note, however, that each of these generators can only be consumed once, due to the nature of iterators in Python.
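A quick illustration of that one-shot behaviour, using the first filename from files:

    gen = (line.replace('\n', '') for line in open(files[0]))
    first_pass = ''.join(gen)   # consumes the generator
    second_pass = ''.join(gen)  # empty string: the generator is already exhausted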

See http://www.python.org/dev/peps/pep-0289/ for more details on generators.

jena
  • Ok thanks. I was able to load all the files, but when I try to do the join, I get the following: ValueError: I/O operation on closed file – Greg Jun 23 '11 at 16:55
  • My fault: the files were being closed once execution left the with block's scope, so the generators failed later. I edited the code. Note that you should also ensure that opening the file does not fail. – jena Jun 23 '11 at 17:09
0

Reading a whole file into memory like this is not effective.

A better way is to build an index.

First, build a dictionary with the starting offset of each line (the key is the line number, and the value is the cumulative length of all previous lines):

t = open(file, 'r')
dict_pos = {}   # maps line number -> offset of the start of that line

kolvo = 0       # current line number
length = 0      # cumulative length of all previous lines
for each in t:
    dict_pos[kolvo] = length
    length = length + len(each)
    kolvo = kolvo + 1

and finally, the lookup function:

def give_line(line_number):
    t.seek(dict_pos.get(line_number))   # jump straight to where that line starts
    line = t.readline()
    return line

t.seek(offset) moves the file position directly to the point where the line begins, so the readline() that follows returns your target line. With this approach (jumping straight to the required position instead of running through the whole file) you save a significant amount of time and can handle huge files.
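A brief usage sketch, assuming the index above has already been built for the open file t; line numbers here start at 0, matching how kolvo is incremented:

    print(give_line(100))   # fetch the 101st line without scanning the whole file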

user3810114