I have a pandas dataframe of approximately 300,000 rows (~20 MB) that I want to write to a SQL Server database.
I have the following code, but it is very slow to execute. Is there a faster way to do this?
import pandas
import sqlalchemy

# Connect to the BIWorkArea database on the rea-eqx-dwpb server via pyodbc
engine = sqlalchemy.create_engine(
    'mssql+pyodbc://rea-eqx-dwpb/BIWorkArea?driver=SQL+Server')

# Write the dataframe, replacing the table if it already exists
df.to_sql(name='LeadGen Imps&Clicks', con=engine, schema='BIWorkArea',
          if_exists='replace', index=False)
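
For reference, one option I've seen suggested is enabling pyodbc's fast_executemany on the engine, so rows are sent to the server in batched bulk inserts rather than one round trip per row. A minimal sketch, assuming SQLAlchemy 1.3+ and a Microsoft ODBC driver; the 'ODBC Driver 17 for SQL Server' name and the chunksize value are assumptions to adjust for the actual environment:

import sqlalchemy

# fast_executemany batches the INSERTs at the pyodbc level
# (assumes SQLAlchemy 1.3+ and a Microsoft ODBC driver is installed)
engine = sqlalchemy.create_engine(
    'mssql+pyodbc://rea-eqx-dwpb/BIWorkArea?driver=ODBC+Driver+17+for+SQL+Server',
    fast_executemany=True)

# df is the ~300,000-row dataframe from above; chunksize caps how many
# rows are buffered per batch so memory use stays bounded
df.to_sql(name='LeadGen Imps&Clicks', con=engine, schema='BIWorkArea',
          if_exists='replace', index=False, chunksize=10000)

I haven't benchmarked this on my setup yet, so I'd be interested to hear whether this is the right approach or if something else (e.g. bulk-loading from a file) is preferred for this data size.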