How do I perform a bulk insert with only some of the columns getting data from a CSV file?
My code currently looks like this (apologies for the rough pseudocode):
import csv
import pyodbc

conn = pyodbc.connect("my_connection_string")  # placeholder connection string
cursor = conn.cursor()

with open("some_csv_file", "r", newline="") as csv_file:
    reader = csv.reader(csv_file)  # parse each line into a list of fields
    for row in reader:
        column1_data = row[0]
        column2_data = row[1]
        column3_data = row[2]
        # how do I bulk insert this? Every value is the same except the data loaded from the CSV
        cursor.execute(
            "INSERT INTO some_table(Code, column1, column2, column3, "
            "some_other_column, some_other_column2) VALUES (?, ?, ?, ?, ?, ?)",
            'xy', column1_data, column2_data, column3_data, 'abc', 123)

conn.commit()
conn.close()
I have seen other answers pointing towards pyodbc's executemany, but I'm struggling to figure out how to load the CSV data for just the columns that change while keeping the constant values for the rest.
Thanks