In Python, I have a process that selects data from one database (Redshift, via psycopg2) and then inserts that data into SQL Server (via pyodbc). I chose a read/write approach rather than read / flat file / load because the row count is around 100,000 per day; it seemed easier to simply connect and insert. However, the insert process is slow, taking several minutes.
Is there a better way to insert data into SQL Server with pyodbc? Here is my current code:
select_cursor.execute(output_query)

done = False
rowcount = 0
while not done:
    # pull the Redshift results in 10,000-row batches
    rows = select_cursor.fetchmany(10000)
    insert_list = []
    if rows == []:
        done = True
        break
    for row in rows:
        rowcount += 1
        insert_params = (
            row[0],
            row[1],
            row[2]
        )
        insert_list.append(insert_params)
    # a new SQL Server connection is opened for every batch
    insert_cnxn = pyodbc.connect('''Connection Information''')
    insert_cursor = insert_cnxn.cursor()
    insert_cursor.executemany("""
        INSERT INTO Destination (AccountNumber, OrderDate, Value)
        VALUES (?, ?, ?)
        """, insert_list)
    insert_cursor.commit()
    insert_cursor.close()
    insert_cnxn.close()

select_cursor.close()
select_cnxn.close()
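Would pyodbc's fast_executemany flag (available in pyodbc 4.0.19+) be the right fix here? My understanding of how it would slot in, with the connection opened once rather than per batch, is something like this (a sketch only; same placeholder connection string, table, and select_cursor as above):

import pyodbc

# connect once, outside the batch loop
insert_cnxn = pyodbc.connect('''Connection Information''')
insert_cursor = insert_cnxn.cursor()
insert_cursor.fast_executemany = True  # packs each batch's parameters into a single ODBC call

select_cursor.execute(output_query)
while True:
    rows = select_cursor.fetchmany(10000)
    if not rows:
        break
    insert_cursor.executemany(
        "INSERT INTO Destination (AccountNumber, OrderDate, Value) VALUES (?, ?, ?)",
        [(row[0], row[1], row[2]) for row in rows]
    )
insert_cnxn.commit()  # single commit at the end

insert_cursor.close()
insert_cnxn.close()

Or is there a different approach entirely that people use for this volume of rows?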