
I am looking for a better way of adding a large volume of rows to a table.
I am not concerned with the syntax of executing the SQL statement from Python, as the library I am using formats that differently than standard.
Is there a way I could run this solely in SQL Server without Python at all?

table1: colA, colB, colC, colD, ...

for i in range(1, 10):
    for x in range(10, 30001):
        query = "INSERT INTO table1 (colA, colB) VALUES (%s, %s)"
        val = (i, x)
        cursor.execute(query, val)  # execute the statement with val as parameters (one round trip per row)

I have tried the above example and it just takes too long due to the number of entries (roughly 270,000 single-row INSERTs).
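
For the "solely in SQL Server" part: because the values here are just two integer ranges, the whole load can be expressed as one set-based `INSERT ... SELECT`, so the server generates every (i, x) combination itself in a single statement. Below is a minimal sketch, assuming pyodbc and a placeholder connection string; `GENERATE_SERIES` needs SQL Server 2022+ (a numbers/tally table works the same way on older versions), and the T-SQL string can equally be run on its own in SSMS with no Python involved.

import pyodbc  # assumed driver; any client that can submit raw T-SQL works

# Placeholder connection string -- adjust server/database/authentication.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)

# One set-based statement: the server builds the 9 x 29,991 combinations itself,
# so there is a single round trip instead of one INSERT per row.
# GENERATE_SERIES requires SQL Server 2022+; older versions can use a numbers/tally table.
set_based_insert = """
INSERT INTO table1 (colA, colB)
SELECT a.value, b.value
FROM GENERATE_SERIES(1, 9) AS a
CROSS JOIN GENERATE_SERIES(10, 30000) AS b;
"""

with conn:  # pyodbc's connection context manager commits on success
    conn.execute(set_based_insert)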

  • No, there is no way you can speed this up as-is. You need to aggregate the entries somehow, whether in a list or otherwise – roganjosh Aug 06 '23 at 15:38
  • If you have aggregated entries then there are a number of things you can do. But on a per-transaction basis... Good luck :/ – roganjosh Aug 06 '23 at 15:39
  • I think your `range` is probably a poor stand-in for what you're doing – roganjosh Aug 06 '23 at 15:42
  • Use `executemany`. Alternatively use some sort of `BULK INSERT` solution (which pyodbc does not support directly) using either `bcp.exe` or the `BULK INSERT` command from a file, or using `SqlBulkCopy` from C# or Powershell – Charlieface Aug 06 '23 at 15:42
  • You could convert your data to json and then use openjson on the "other side", it's pretty fast. Executemany is good too – siggemannen Aug 06 '23 at 17:39
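
Below is a minimal sketch of the `executemany` route the comments suggest, assuming pyodbc (which uses `?` placeholders rather than `%s`) and a placeholder connection string; with `fast_executemany` enabled, pyodbc sends the parameter sets as bulk arrays instead of one round trip per row.

import pyodbc  # assumed driver; the connection string below is a placeholder

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.fast_executemany = True  # batch the parameters instead of sending row by row

# Build every (i, x) pair up front, then hand the whole list to one executemany call.
rows = [(i, x) for i in range(1, 10) for x in range(10, 30001)]

cursor.executemany(
    "INSERT INTO table1 (colA, colB) VALUES (?, ?)",  # pyodbc uses ? placeholders
    rows,
)
conn.commit()

For still larger loads, the `BULK INSERT`/`bcp` and `OPENJSON` approaches mentioned in the comments avoid pushing individual parameter rows through the driver at all.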

0 Answers