I am trying to load a 200 MB CSV file into a SQL database using SQLAlchemy. Each line has about 30 columns, of which I use only 8 in the code below. However, the code runs really slowly! Is there a way to improve this? I would like to use map/list comprehensions or other techniques. As you can tell, I am a newbie. Thanks for your help.
for ddata in dread:
    record = DailyData()
    record.set_campaign_params(pdata)  # pdata is assigned in the previous step
    record.set_daily_data(ddata)       # class method that uses only 8 of the 30 items in the list
    session.add(record)
    session.commit()  # writing to the SQL database
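
I have read that committing once per batch instead of once per row might help. Below is a minimal sketch of what I have in mind, assuming DailyData, session, pdata, and dread are set up exactly as above and that BATCH_SIZE is just an arbitrary number I picked for illustration. Is this the right direction?

BATCH_SIZE = 1000  # arbitrary; flush this many rows per commit

for i, ddata in enumerate(dread, start=1):
    record = DailyData()
    record.set_campaign_params(pdata)
    record.set_daily_data(ddata)
    session.add(record)
    if i % BATCH_SIZE == 0:
        session.commit()  # write the current batch to the database

session.commit()  # commit any remaining records after the loop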