I have two CSV files which store an id and some associated fields that I need to match. Currently, in Python 2.4, I load the CSV files into a dictionary of record objects, with the record id as the dictionary key. I then loop through one dictionary, look up the matching keys in the other, and do some processing.
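For reference, here is a minimal sketch of the current in-memory approach (the file names, the 'id' header field, and the processing step are just placeholders):

```python
import csv

def load_records(path, key_field='id'):
    # Build a dict mapping record id -> full row for one CSV file.
    # Assumes the first row is a header containing the key field.
    reader = csv.reader(open(path, 'rb'))
    header = reader.next()
    key_index = header.index(key_field)
    records = {}
    for row in reader:
        records[row[key_index]] = row
    return records

left = load_records('a.csv')
right = load_records('b.csv')

# Loop through one dict and match ids against the other.
for record_id, left_row in left.iteritems():
    if record_id in right:
        right_row = right[record_id]
        # ... do some processing on the matched pair ...
```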
This all works well, but only on relatively small CSV files with around 60,000 records. I will soon need to deal with many millions of records, possibly spread across multiple CSV files, and I am concerned about the memory load of the current method.
I was initially thinking about a simple loop over the csv reader, not loading the files into memory at all, but looping through a few million records once for each of the millions of records in the other file is extremely inefficient.
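For completeness, the fully streaming version I was considering would look roughly like the following (again with placeholder file names, and assuming the id is the first column); it rescans the second file once per record of the first, which is exactly what I want to avoid:

```python
import csv

# Streaming nested loop: nothing is held in memory, but file B is
# re-read from the start for every single record in file A.
for a_row in csv.reader(open('a.csv', 'rb')):
    for b_row in csv.reader(open('b.csv', 'rb')):
        if b_row[0] == a_row[0]:
            # ... do some processing on the matched pair ...
            break
```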
So, any ideas on a good way of doing this? I'm stuck on Python 2.4, I can't really move away from CSV files, and I'd like to avoid using SQL if possible. Thanks
Edit: As a ballpark figure, I'm looking at up to twenty 200 MB files.