I have a pandas DataFrame that I want to write to a SQL database.
dfmodwh
date subkey amount age
09/12 0012 12.8 18
09/13 0009 15.0 20
There is an existing table in the SQL warehouse with the same column names. The table is called dim.h2oresults.
I tried
import pyodbc
from sqlalchemy import create_engine, MetaData, Table, select

conn = pyodbc.connect('dsn=azure_warehouse_dev;'
                      'Trusted_Connection=yes;')
dfmodwh.to_sql(name='dim.h2oresults', con=conn, index=False, if_exists='append')
But this just gives me an execution error. Is there a way to write to the table through pyodbc instead of SQLAlchemy, so that when there is new data in dfmodwh each day it keeps appending rather than overwriting?
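For reference, the kind of plain-DB-API append I have in mind would look roughly like the sketch below. It uses sqlite3 as a stand-in connection purely so the snippet is self-contained; pyodbc cursors expose the same DB-API `cursor()`/`executemany()` calls and `?` placeholders, so against the warehouse the `connect` line would be the `dsn=azure_warehouse_dev` call above and the INSERT would target dim.h2oresults (the table/column names here just mirror my example data).

```python
import sqlite3  # stand-in for pyodbc; both follow the DB-API with '?' parameter markers
import pandas as pd

# Example data matching dfmodwh above
dfmodwh = pd.DataFrame({
    "date": ["09/12", "09/13"],
    "subkey": ["0012", "0009"],
    "amount": [12.8, 15.0],
    "age": [18, 20],
})

# With pyodbc this would be:
#   conn = pyodbc.connect('dsn=azure_warehouse_dev;Trusted_Connection=yes;')
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE h2oresults (date TEXT, subkey TEXT, amount REAL, age INTEGER)"
)

# Convert the DataFrame to a list of plain tuples, one per row
rows = list(dfmodwh.itertuples(index=False, name=None))

# executemany runs one INSERT per row: re-running this block appends new
# rows and never drops or truncates the existing table
cur = conn.cursor()
cur.executemany(
    "INSERT INTO h2oresults (date, subkey, amount, age) VALUES (?, ?, ?, ?)",
    rows,
)
conn.commit()
```

With pyodbc specifically, setting `cur.fast_executemany = True` before the `executemany` call batches the inserts and is much faster for daily loads.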