
I have two CSV files, each storing an id and some associated fields, that I need to match up. Currently, in Python 2.4, I load both CSV files into a dictionary of record objects keyed by the record id. I then loop through one dictionary, look up each key in the other, and do some processing.
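Simplified, my current approach looks something like this (the file names, and the assumption that the id is the first column, are made up for illustration):

    import csv

    def load_records(path):
        # Load a whole CSV file into a dict keyed by the record id.
        records = {}
        for row in csv.reader(open(path, 'rb')):
            records[row[0]] = row  # assume the id is the first column
        return records

    left = load_records('file_a.csv')
    right = load_records('file_b.csv')

    for record_id, record in left.iteritems():
        if record_id in right:
            pass  # matched - do some processing here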

This all works well, but only on relatively small CSV files of around 60,000 records. I will soon need to deal with many millions of records, possibly spread across multiple CSV files, and I am concerned about the memory load of the current method.

I was initially thinking about a simple loop over the csv reader, not loading anything into memory at all, but looping through a few million records once for each of the millions of records in the other file is extremely inefficient.

So, any ideas on a good way of doing this? I'm stuck on Python 2.4, I can't really move away from CSV files, and I'd like to avoid using SQL if possible. Thanks.

Edit: As a ballpark figure, I'm looking at up to 20 files of around 200MB each.

Captastic
  • The best approach could very well depend on the number of files and the size of each file. Can you put some ballpark numbers to these parameters? – NPE May 14 '12 at 10:36
  • That would have been handy to add, sorry. I would say a maximum of 200MB per file and maybe a maximum of 20 files. That's a little on the high side but I'd rather be safe than sorry. I'll update the main post. – Captastic May 14 '12 at 10:48
  • I'm not sure if this will help with the size of the data, but I'd create a CSV import utility and then store your data in SQLite database files. You could even have a table that lists the file import path and data for future reference. Being indexed, it might be more efficient than trying to hold the entire thing in memory or re-writing CSV files. – Jay M May 14 '12 at 10:54
  • I think this is probably the best bet; I was not aware that you could do SQL without a server etc. I'll have to look into it. Do you know if there is an SQLite module for Python 2.4? – Captastic May 14 '12 at 12:20

1 Answer


What are the reasons you want to avoid SQL?

You really want to switch to using a database of some kind. I suggest SQLite to start with; it's baked into Python as the sqlite3 module. It has no other dependencies, uses a plain file (or RAM) for data storage (no network/server setup required), and it's dead easy to get started with.
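For example, getting a connection is a one-liner (sqlite3 ships with Python 2.5+; on 2.4 the pysqlite2 package exposes the same API, see the comments below):

    import sqlite3  # on Python 2.4: from pysqlite2 import dbapi2 as sqlite3

    conn = sqlite3.connect('records.db')   # data stored in a plain file
    # conn = sqlite3.connect(':memory:')   # or held entirely in RAM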

The reasons you want to switch to a database include:

  • Much less code to write. Instead of having to write loops to look for specific elements, you can just write SELECT queries.
  • The database knows how to optimise queries in ways you haven't even thought about. It will typically be much, much faster than any pseudo-database you roll in Python.
  • You can do more complex queries. You can select rows which meet certain criteria (SELECT * FROM table WHERE...), correlate records from one table with records from another table (SELECT * FROM table1 JOIN table2...), and so forth. A sketch of this follows below.
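As a rough sketch of how the matching task from the question could look, assuming a simple two-column layout (the table schema and file names here are my own invention, not taken from the actual data):

    import csv
    import sqlite3  # on Python 2.4: from pysqlite2 import dbapi2 as sqlite3

    conn = sqlite3.connect('records.db')
    conn.execute('CREATE TABLE a (id TEXT PRIMARY KEY, data TEXT)')
    conn.execute('CREATE TABLE b (id TEXT PRIMARY KEY, data TEXT)')

    def load(path, table):
        # executemany() streams rows from the csv reader straight into
        # SQLite without ever holding the whole file in memory.
        rows = csv.reader(open(path, 'rb'))
        conn.executemany('INSERT INTO %s VALUES (?, ?)' % table, rows)

    load('file_a.csv', 'a')
    load('file_b.csv', 'b')
    conn.commit()

    # The JOIN replaces the hand-written matching loop; with the ids
    # indexed as primary keys this stays fast even at millions of rows.
    for a_id, a_data, b_data in conn.execute(
            'SELECT a.id, a.data, b.data FROM a JOIN b ON a.id = b.id'):
        pass  # do some processing here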
Li-aung Yip
  • Ah right, thanks. I wasn't aware that there was a way of doing an SQL DB without faffing about with networks and servers. This is probably the best route for me. Do you know of a module that works on Python 2.4? sqlite3 is part of 2.5 and I'm stuck with 2.4, I'm afraid. – Captastic May 14 '12 at 12:17
  • @Captastic: See http://stackoverflow.com/questions/789030/how-can-i-import-the-sqlite3-module-into-python-2-4 – Li-aung Yip May 14 '12 at 12:22