I'm trying to read specific columns from a large CSV file (>1 GB), add a couple of new columns, and then write it out again.
When I do this the conventional way, the process runs out of memory:
import pandas as pd

cols = ['Event Time', 'User ID', 'Advertiser ID', 'Ad ID', 'Rendering ID',
        'Creative Version', 'Placement ID', 'Country Code',
        'Browser/Platform ID', 'Browser/Platform Version', 'Operating System ID']

# read only the columns I need, then add the two new (empty) columns
df = pd.read_csv(file_name, sep=',', error_bad_lines=False, usecols=cols)
df.insert(7, 'Creative Size ID', '')
df.insert(3, 'Buy ID', '')

# write everything back out
df.to_csv(file_name, sep=',', encoding='utf-8', index=False)
Is there a way to do this more efficiently?
I've tried reading in chunks (iterator=True, chunksize=1000),
but then when it comes time to write the CSV it seems I need all the data in memory again, unless df.to_csv can write chunk by chunk. Is that possible?
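For reference, this is roughly the chunked approach I have in mind (just a sketch: the output file name out_name and the chunk size are placeholders, and I'm not sure that appending with mode='a' is the right way to write incrementally):

import pandas as pd

cols = ['Event Time', 'User ID', 'Advertiser ID', 'Ad ID', 'Rendering ID',
        'Creative Version', 'Placement ID', 'Country Code',
        'Browser/Platform ID', 'Browser/Platform Version', 'Operating System ID']

out_name = 'output.csv'  # placeholder: write to a new file rather than overwriting in place

# read the CSV in chunks of 1000 rows instead of all at once
reader = pd.read_csv(file_name, sep=',', usecols=cols,
                     error_bad_lines=False, chunksize=1000)

for i, chunk in enumerate(reader):
    # add the new columns to each chunk
    chunk.insert(7, 'Creative Size ID', '')
    chunk.insert(3, 'Buy ID', '')
    # append each chunk to the output file; only the first chunk writes the header
    chunk.to_csv(out_name, sep=',', encoding='utf-8', index=False,
                 mode='w' if i == 0 else 'a', header=(i == 0))

Is something along these lines the recommended way, or is there a better approach?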