I am migrating from using pyodbc directly to SQLAlchemy, since that is the recommended approach for pandas. With plain pyodbc I could bulk insert all rows of my DataFrame into a SQL Server table with a single cursor.executemany(query, df.values.tolist()) call.
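Roughly what the old pyodbc version looked like, as a sketch (the connection string, table, and column names here are made up):

import pandas as pd
import pyodbc

df = pd.DataFrame({"col_a": [1, 2], "col_b": ["x", "y"]})  # example data

con = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"
)
cursor = con.cursor()
# qmark-style placeholders, one per column
query = "INSERT INTO my_table (col_a, col_b) VALUES (?, ?)"
# executemany binds one parameter row per inner list
cursor.executemany(query, df.values.tolist())
con.commit()

After migrating, this is what I currently have: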
def insert_into_table(self, con: sqlalchemy.Connection, table: str, df: pd.DataFrame):
    # Create the insert statement
    query = f"INSERT INTO {table} (" + ",".join(df.columns) + ") VALUES (" + ",".join(["?"] * len(df.columns)) + ")"
    # Replace nan with None to prevent insert errors
    df = df.replace({np.nan: None})
    # Insert all rows
    con.execute(statement=sqlalchemy.text(query), parameters=df.to_dict(orient="records"))
    # Commit the transaction
    con.commit()
    return
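I call the function roughly like this (the engine URL, table name, and instance name are placeholders):

import sqlalchemy

engine = sqlalchemy.create_engine("mssql+pyodbc://user:password@my_dsn")  # made-up DSN URL
with engine.connect() as con:
    loader.insert_into_table(con, "my_table", df)  # loader is an instance of my class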
The error returned by the con.execute call is:

sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('Expected 31 parameters, supplied 0', 'HY000')
The generated query is correct, and below is an example of the df.to_dict(orient="records") output:
I am not sure what else is missing... any help is appreciated. Thanks.
I am expecting con.execute() to insert all DataFrame rows into the SQL Server table.
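For reference, this is the documented executemany pattern from the SQLAlchemy 2.0 tutorial that I am expecting con.execute() to follow (the table, columns, and values below come from that tutorial example, not from my schema):

import sqlalchemy

engine = sqlalchemy.create_engine("sqlite://")  # throwaway in-memory database for the sketch
with engine.connect() as con:
    con.execute(sqlalchemy.text("CREATE TABLE some_table (x int, y int)"))
    # passing a list of dicts makes execute() insert one row per dict
    con.execute(
        sqlalchemy.text("INSERT INTO some_table (x, y) VALUES (:x, :y)"),
        [{"x": 11, "y": 12}, {"x": 13, "y": 14}],
    )
    con.commit()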