
I have a CSV with about a million rows and I want to upload it to a SQL Server database.

In the past, I've usually uploaded CSVs with code that looks something like this:

import pandas as pd
import pyodbc

df = pd.read_csv('my_file.csv')  # placeholder path for my actual CSV

conn = pyodbc.connect('Driver={ODBC Driver 11 for SQL Server};'
                      'SERVER=Server Name;'
                      'Database=Database Name;'
                      'UID=User ID;'
                      'PWD=Password;')
cursor = conn.cursor()

# Inserting data into the SQL table one row at a time
for index, row in df.iterrows():
    cursor.execute("INSERT INTO [Table Name] ([A], [B], [C]) VALUES (?, ?, ?)",
                   row['A'], row['B'], row['C'])
conn.commit()
cursor.close()
conn.close()

This takes an extremely long time to upload because it inserts the data row by row.

Is it possible for me to upload the CSV in a single transaction?
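For context, this is roughly what I'm imagining, based on pyodbc's fast_executemany with a single executemany call and one commit. The CSV path is a placeholder, the connection string and table/column names are the same ones as above, and I'm not sure this is the right approach:

import pandas as pd
import pyodbc

df = pd.read_csv('my_file.csv')  # placeholder path

conn = pyodbc.connect('Driver={ODBC Driver 11 for SQL Server};'
                      'SERVER=Server Name;'
                      'Database=Database Name;'
                      'UID=User ID;'
                      'PWD=Password;')
cursor = conn.cursor()
cursor.fast_executemany = True  # batch the parameters instead of one round trip per row

# Build a list of plain tuples and send them in one executemany call,
# so the whole load is committed as a single transaction
rows = list(df[['A', 'B', 'C']].itertuples(index=False, name=None))
cursor.executemany("INSERT INTO [Table Name] ([A], [B], [C]) VALUES (?, ?, ?)", rows)
conn.commit()

cursor.close()
conn.close()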

Cauder
  • Is it possible that fast execute takes down my server? I want to be conscientious about not overloading it with the number of requests I send – Cauder Jun 10 '22 at 22:19
  • But yeah you can mark this as a dup, I think that is basically my problem – Cauder Jun 10 '22 at 22:20
  • That's the old way to do it. You should consider df.to_sql (a rough sketch follows these comments): https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html – ASH Jun 11 '22 at 13:44
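
Along the lines of that df.to_sql suggestion, a minimal sketch, assuming SQLAlchemy as the engine layer; the connection details, table name, and CSV path below are placeholders:

import pandas as pd
from sqlalchemy import create_engine
from urllib.parse import quote_plus

# Reuse the raw ODBC connection string through SQLAlchemy's pyodbc dialect
params = quote_plus('Driver={ODBC Driver 11 for SQL Server};'
                    'SERVER=Server Name;'
                    'Database=Database Name;'
                    'UID=User ID;'
                    'PWD=Password;')
engine = create_engine(f'mssql+pyodbc:///?odbc_connect={params}')

df = pd.read_csv('my_file.csv')  # placeholder path

# Append the DataFrame to the existing table; chunksize bounds each round trip
df.to_sql('Table Name', engine, if_exists='append', index=False, chunksize=10000)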

0 Answers