I have a large CSV file (about a million records). I want to process it and write each record into a DB.
Since loading the complete file into RAM makes no sense, I need to read the file in chunks (or some other better way).
So, I wrote this code:
import csv
with open('/home/praful/Desktop/a.csv') as csvfile:
    config_file = csv.reader(csvfile, delimiter=',', quotechar='|')
    print config_file
    for row in config_file:
        print row
I guess it loads everything into memory first and then processes it.
Looking at this thread and many others, I didn't see any difference between the OP's code and the solution. Kindly advise: is this the only method for efficient processing of CSV files?
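For the DB side, this is roughly what I had in mind: iterate over the reader row by row and insert in batches instead of one record at a time. This is only a sketch using sqlite3 with a made-up `records` table and two placeholder columns; my real target DB and schema are different.

import csv
import sqlite3

BATCH_SIZE = 1000  # flush to the DB every 1000 rows

# Placeholder DB and schema -- swap in the real connection and table.
conn = sqlite3.connect('records.db')
conn.execute('CREATE TABLE IF NOT EXISTS records (col_a TEXT, col_b TEXT)')

with open('/home/praful/Desktop/a.csv') as csvfile:
    reader = csv.reader(csvfile, delimiter=',', quotechar='|')
    batch = []
    for row in reader:  # the reader yields one row at a time
        batch.append((row[0], row[1]))
        if len(batch) >= BATCH_SIZE:
            conn.executemany('INSERT INTO records VALUES (?, ?)', batch)
            conn.commit()
            batch = []
    if batch:  # flush the last partial batch
        conn.executemany('INSERT INTO records VALUES (?, ?)', batch)
        conn.commit()

conn.close()

Is batching like this the right direction, or is there a better approach?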