I've created a pandas DataFrame (40k rows, 5 columns) in Python that I'd like to insert back into a SQL Server table.
Typically, in SQL I'd issue a 'select * into myTable from dataTable'
statement to do the insert, but the data sitting in a pandas DataFrame obviously complicates this.
I'm not fundamentally opposed to using SQLAlchemy (though I'd rather avoid another download and install), but I'd prefer to do this natively in Python; I'm connecting to SQL Server using pyodbc.
Is there a straightforward way to do this that avoids looping (i.e., inserting row by row)?
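
For reference, here's a sketch of the row-by-row pattern I'm trying to avoid. The column names, table name, and connection string below are all placeholders standing in for my real setup:

```python
import numpy as np
import pandas as pd
import pyodbc

# Placeholder stand-in for my real data: 40k rows, 5 columns
df = pd.DataFrame(
    np.random.rand(40000, 5),
    columns=["col_a", "col_b", "col_c", "col_d", "col_e"],
)

# Placeholder connection details
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myServer;DATABASE=myDatabase;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# One INSERT (and one round trip) per row -- this is the loop I'd like to avoid
for row in df.itertuples(index=False):
    cursor.execute(
        "INSERT INTO myTable (col_a, col_b, col_c, col_d, col_e) "
        "VALUES (?, ?, ?, ?, ?)",
        row.col_a, row.col_b, row.col_c, row.col_d, row.col_e,
    )
conn.commit()
conn.close()
```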