I have a pandas DataFrame called customer_df with about 11,000 rows. I am building a SQL query to insert the DataFrame's values into a table customer_details on Postgres.
My current code is:

insert_string = 'insert into customer_details (col1,col2,col3,col4) values '
for i in range(len(customer_df)):
    insert_string += "('{0}','{1}','{2}','{3}'),".format(
        customer_df.iloc[i]['customer'], customer_df.iloc[i]['customerid'],
        customer_df.iloc[i]['loc'], customer_df.iloc[i]['pin'])
insert_string = insert_string.rstrip(',')  # drop the trailing comma after the last tuple
upload_to_postgres(insert_string)
This ultimately produces a string like

insert into customer_details (col1,col2,col3,col4) values ('John',23,'KA',560021),('Ram',67,'AP',642918),(values of cust 3) .... (values of cust 11k)

which is then sent to Postgres via upload_to_postgres.
Building this string takes around 30 seconds to a minute. Is there a better-optimized way to reduce that time?
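For context, most of the time likely goes into growing one big string with += and into the per-row .iloc lookups, both of which get slow as the frame grows. Below is a minimal sketch of the same string construction done with a generator and ','.join instead; it uses a tiny sample DataFrame in place of the real 11k-row customer_df, and prints the query instead of calling upload_to_postgres (both are stand-ins for the asker's actual setup).

```python
import pandas as pd

# Tiny sample standing in for the real 11k-row customer_df.
customer_df = pd.DataFrame({
    'customer': ['John', 'Ram'],
    'customerid': [23, 67],
    'loc': ['KA', 'AP'],
    'pin': [560021, 642918],
})

# Format each "(...)" tuple once and join them all at the end, instead of
# repeatedly concatenating onto one growing string and fetching every
# value through .iloc.
rows = (
    "('{}','{}','{}','{}')".format(c, cid, loc, pin)
    for c, cid, loc, pin in zip(customer_df['customer'],
                                customer_df['customerid'],
                                customer_df['loc'],
                                customer_df['pin'])
)
insert_string = ('insert into customer_details (col1,col2,col3,col4) values '
                 + ','.join(rows))
print(insert_string)
```

Note this is only about the string-building cost; for inserts of this size, parameterized batch APIs (e.g. a driver's executemany-style helper or pandas' to_sql) are the usual alternatives to hand-built SQL strings, and they also avoid quoting/injection pitfalls.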