
I'm using pyodbc with an application that regularly requires me to insert >1000 rows, which I currently do one row at a time. This tends to take >30 minutes to finish, and I was wondering if there is a faster method that could do it in under 1 minute. I know you can put multiple value lists in a single INSERT statement, but according to this (Multiple INSERT statements vs. single INSERT with multiple VALUES) that could possibly be even slower. The code currently looks like this.

def Insert_X(X_info):
    columns = ', '.join(X_info.keys())
    placeholders = ', '.join('?' * len(X_info))
    columns = columns.replace("'", "")
    values = list(X_info.values())
    query_string = f"INSERT INTO X ({columns}) VALUES ({placeholders});"
    with conn.cursor() as cursor:
        cursor.execute(query_string, values)

With Insert_X being called >1000 times.
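One approach worth trying is to collect all the row dicts first and send them in a single `executemany` call with pyodbc's `fast_executemany` flag enabled, which batches the parameters into far fewer round trips. A minimal sketch, assuming every dict in `rows` has the same keys and `conn` is an open pyodbc connection as in the question (`insert_x_bulk` and `build_insert_sql` are hypothetical names, not part of the original code):

```python
def build_insert_sql(table, columns):
    # Build a parameterized INSERT; values go in as bound parameters,
    # only the column names (dict keys) are interpolated.
    placeholders = ', '.join('?' * len(columns))
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders});"

def insert_x_bulk(conn, rows):
    # rows: list of dicts that all share the same keys (an assumption).
    columns = list(rows[0].keys())
    sql = build_insert_sql("X", columns)
    params = [tuple(row[c] for c in columns) for row in rows]
    with conn.cursor() as cursor:
        # fast_executemany makes pyodbc send the parameter array in bulk
        # instead of issuing one round trip per row.
        cursor.fast_executemany = True
        cursor.executemany(sql, params)
```

With this shape you call `insert_x_bulk(conn, all_rows)` once instead of calling `Insert_X` 1000+ times, so the per-statement network latency is paid once rather than per row.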

  • can you provide some sample code? – gold_cy Apr 14 '21 at 03:25
  • @Charlieface thank you for the comment. I was wondering what you meant by parameterization, as I was already using bound parameters in the above code. – John kongtcheu Apr 14 '21 at 04:18
  • Consider using a Table-Valued parameter or bulk-insert, I don't know if pyodbc supports either of these, but they are both very fast. 30 min is an age for 1000 rows whichever way you cut it, I wonder if you have some other problem (slow CPU or HDD?) – Charlieface Apr 14 '21 at 08:05

0 Answers