2

I have a large directory of text files--approximately 7 GB. I need to load them quickly into Python unicode strings in iPython. I have 15 GB of memory total. (I'm using EC2, so I can buy more memory if absolutely necessary.)

Simply reading the files will be too slow for my purposes. I have tried copying the files to a ramdisk and then loading them from there into iPython. That speeds things up, but iPython crashes (not enough memory left over?). Here is the ramdisk setup:

mount -t tmpfs none /var/ramdisk -o size=7g

Anyone have any ideas? Basically, I'm looking for persistent in-memory Python objects. The iPython requirement precludes using IncPy: http://www.stanford.edu/~pgbovine/incpy.html .

Thanks!

m9389e
  • related thread: http://stackoverflow.com/questions/1268252/python-possible-to-share-in-memory-data-between-2-separate-processes – m9389e Sep 05 '10 at 22:09
  • Do you really need all that data in memory at once? Really? – Donal Fellows Sep 05 '10 at 22:20
  • This is almost certainly the wrong thing to do. Python isn't designed for storing gigs of data. Whether as a few huge objects or millions of single line strings, either will stress the VM in ways it's not designed for. You'll have a hard time getting help with this without explaining why you really need to do this, instead of loading the data into a database backend designed for this sort of storage. – Glenn Maynard Sep 06 '10 at 00:33
  • I could use a database backend. Which one do you recommend? As to whether I need all the data in memory, I don't really know. I'm searching for substrings with the Aho-Corasick algorithm. I just assumed it was good practice to get everything into memory... – m9389e Sep 06 '10 at 01:28

2 Answers

3

There is much that is confusing here, which makes it more difficult to answer this question:

  • The ipython requirement. Why do you need to process such large data files from within ipython instead of a stand-alone script?
  • The tmpfs RAM disk. I read your question as implying that you read all of your input data into memory at once in Python. If that is the case, then python allocates its own buffers to hold all the data anyway, and the tmpfs filesystem only buys you a performance gain if you reload the data from the RAM disk many, many times.
  • Mentioning IncPy. If your performance issues are something you could solve with memoization, why can't you just manually implement memoization for the functions where it would help most? (See the sketch just after this list.)
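
To illustrate that last point, a hand-rolled memoizing decorator is only a few lines. This is just a sketch; the memoize and expensive_search names below are placeholders, not anything from your code:

import functools

def memoize(func):
    """Cache results keyed on the (hashable) positional arguments."""
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def expensive_search(query):
    # placeholder for whichever function dominates your run time
    return query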

So. If you actually need all the data in memory at once -- if your algorithm reprocesses the entire dataset multiple times, for example -- I would suggest looking at the mmap module. That will provide the data in raw bytes instead of unicode objects, which might entail a little more work in your algorithm (operating on the encoded data, for example), but will use a reasonable amount of memory. Reading the data into Python unicode objects all at once will require either 2x or 4x as much RAM as it occupies on disk (assuming the data is UTF-8).
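
A rough sketch of that approach, with the path and the query string made up purely for illustration (note that the search pattern has to be encoded to bytes to match the mapped data):

import mmap

with open('/var/data/corpus.txt', 'rb') as f:
    # Map the whole file read-only; pages are faulted in lazily and stay in
    # the OS page cache, so repeated passes do not go back to disk.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    try:
        pattern = u'some query'.encode('utf-8')
        offset = mm.find(pattern)
        while offset != -1:
            print(offset)
            offset = mm.find(pattern, offset + 1)
    finally:
        mm.close()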

If your algorithm simply does a single linear pass over the data (as does the Aho-Corasick algorithm you mention), then you'd be far better off just reading in a reasonably sized chunk at a time:

import codecs

with codecs.open(inpath, encoding='utf-8') as f:
    # Read and process the decoded text in roughly 8 KB chunks
    # instead of loading the whole file at once.
    data = f.read(8192)
    while data:
        process(data)
        data = f.read(8192)

I hope this at least gets you closer.

llasram
  • Thank you for your patient response. I am trying to write a simplified version of the BLAST algorithm that operates on arbitrary alphabets. (BLAST restricts you to DNA, RNA, etc.) The first step involves locating exact substrings of a query in a large database with an algorithm like Aho-Corasick. The second step involves searching around these substrings for longer strings that are similar to the entire query (with an algorithm like Smith-Waterman). (I am new to these things, as you can tell...) – m9389e Sep 09 '10 at 22:47
  • I chose iPython for its parallel processing capabilities. The second step especially is expensive, and I'm running it on a powerful machine in Amazon EC2. Also, the interactive environment seems important as I explore these things... – m9389e Sep 09 '10 at 22:51
  • I can definitely get my entire database into 17 GB of memory, but the load time has been prohibitively slow for testing. It feels like a mistake to waste so much EC2 time reading from disk. – m9389e Sep 09 '10 at 22:54
  • As far as I understand your requirements, you *definitely* want to use `mmap`. Get your data into a form where the on-disk representation matches the in-memory representation, then use `mmap` to map it into your address space. The kernel will need to read it into memory once as you first access each part of the data, but you have enough RAM that it will stay cached for subsequent reads, even in new processes. Re: ipython, I don't follow what you mean about its parallel processing capabilities, which AFAIK are no different from what you can do in Python normally. – llasram Sep 10 '10 at 10:26
  • Huh -- I hadn't realized IPython provided its own multiprocessing capabilities. Interesting. – llasram Sep 10 '10 at 10:34
  • iPython has some functionality that makes parallel processing very easy: http://ipython.scipy.org/doc/stable/html/parallel/parallel_multiengine.html . I've had better luck with it than rolling my own functions with the multiprocessing module. – m9389e Sep 11 '10 at 19:45
  • llasram: Thank you. I have begun working with mmap, and I think I am converging on a workable setup. – m9389e Sep 11 '10 at 19:47

2

I saw the mention of IncPy and IPython in your question, so let me plug a project of mine that goes a bit in the direction of IncPy, but works with IPython and is well-suited to large data: http://packages.python.org/joblib/

If you are storing your data in numpy arrays (strings can be stored in numpy arrays), joblib can use memmap for intermediate results and be efficient for IO.
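
A minimal sketch of that pattern (the cache directory, the file path, and the load_corpus function are invented for illustration, and the name of the Memory cache-directory argument has varied across joblib versions):

import numpy as np
from joblib import Memory

# Cache function results on disk; mmap_mode='r' hands cached numpy arrays
# back as read-only memory maps instead of reloading them fully into RAM.
memory = Memory('/var/cache/joblib', mmap_mode='r')

@memory.cache
def load_corpus(path):
    # Hypothetical loader: parse the text files once into a numpy string array.
    with open(path) as f:
        return np.array(f.read().splitlines())

corpus = load_corpus('/var/data/corpus.txt')  # later calls reuse the on-disk cache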

Gael Varoquaux