
I'm attempting to bulk insert a CSV into a table in SQL Server. The catch is that the data doesn't match the columns of the destination table: the destination table has several audit columns that are not found in the source file. The solution I found for this is to insert into a view instead. The code is pretty simple:

from sqlalchemy import create_engine
engine = create_engine('mssql+pyodbc://[DNS]')
conn = engine.connect()
sql = "BULK INSERT [table view] FROM '[source file path]' WITH (FIELDTERMINATOR = ',',ROWTERMINATOR = '\n')"
conn.execute(sql)
conn.close()

When I run the SQL statement inside of SSMS it works perfectly. When I try to execute it from inside a Python script, the script runs but no data winds up in the table. What am I missing?

Update: It turns out bulk inserting into a normal table doesn't work either.

Bob Wakefield

3 Answers


Before closing the connection, you need to call commit(), or the SQL actions will be rolled back when the connection is closed.

conn.commit()
conn.close()
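
For completeness, here is a minimal sketch of this "commit as you go" pattern, assuming SQLAlchemy 2.0 (or a 1.4 engine created with future=True), where plain SQL strings also need to be wrapped in text(). The DSN, view name, and file path are placeholders taken from the question:

from sqlalchemy import create_engine, text

engine = create_engine('mssql+pyodbc://[DSN]')

# '\\n' sends the literal characters \n to SQL Server, matching the SSMS statement
sql = ("BULK INSERT [table view] FROM '[source file path]' "
       "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')")

# execute and commit explicitly before the connection closes
with engine.connect() as conn:
    conn.execute(text(sql))
    conn.commit()  # without this, the work is rolled back on close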
Zusukar

It turns out that instead of using SQLAlchemy, I had to use pypyodbc. Not sure why this worked and the other way didn't. Example code can be found here: How to Speed up with Bulk Insert to MS Server from Python with Pyodbc from CSV
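
For reference, a minimal sketch of the same statement issued through pypyodbc (pyodbc exposes the same API). The DSN, view name, and file path are placeholders, and Trusted_Connection is just one example of an authentication option; an explicit commit() is still required because the driver does not autocommit by default:

import pypyodbc

sql = ("BULK INSERT [table view] FROM '[source file path]' "
       "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')")

# connect via an ODBC DSN
conn = pypyodbc.connect('DSN=[DSN name];Trusted_Connection=yes;')
cursor = conn.cursor()
cursor.execute(sql)
conn.commit()   # persist the bulk insert
cursor.close()
conn.close()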

Bob Wakefield

This works for me after checking the SQLAlchemy transactions reference. I don't explicitly call conn.commit(), because:

The block managed by each .begin() method has the behavior such that the transaction is committed when the block completes.

with engine.begin() as conn:
  conn.execute(sql_bulk_insert)
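
Filling in the pieces from the question, a self-contained sketch of this pattern might look like the following, assuming SQLAlchemy 1.4+/2.0 where raw SQL strings should be wrapped in text(); the DSN, view name, and file path are placeholders:

from sqlalchemy import create_engine, text

engine = create_engine('mssql+pyodbc://[DSN]')

sql_bulk_insert = text(
    "BULK INSERT [table view] FROM '[source file path]' "
    "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')"
)

# the transaction opened by begin() is committed when the block exits
# without an exception, and rolled back otherwise
with engine.begin() as conn:
    conn.execute(sql_bulk_insert)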
Nuno Silva