I have a data frame (roughly 1,000,000 rows and 15 columns) in my local R environment, and I am trying to upload it to a SQL server:
```r
library(odbc)
library(DBI)
library(RODBC)

dbWriteTable(my_connection, SQL("my_table"), my_table)
```
But I am not able to upload the entire table to the server; when I run this code, I get the error:
```
Error in result_insert_dataframe(rs@prt, values): nanodbc/nanodbc.cpp:1587 : HY008 : Operation canceled
```
However, I noticed that I am able to successfully upload "parts" of this table to the server:
```r
# this works fine
my_table_1 <- my_table[1:1000, ]
dbWriteTable(my_connection, SQL("my_table_1"), my_table_1)
```
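Since partial uploads work, one workaround I have been considering is writing the table in fixed-size chunks and appending each chunk to the same server table. This is only a sketch; it assumes `my_connection` and `my_table` exist as above, and the chunk size of 10,000 is an arbitrary guess:

```r
library(DBI)

# Sketch: upload my_table in chunks of 10,000 rows, appending each chunk
# to the same table and reporting any chunk that fails.
chunk_size <- 10000
n <- nrow(my_table)

for (s in seq(1, n, by = chunk_size)) {
  e <- min(s + chunk_size - 1, n)
  tryCatch(
    dbWriteTable(my_connection, SQL("my_table"), my_table[s:e, ],
                 append = TRUE),
    error = function(err) {
      message("Chunk ", s, "-", e, " failed: ", conditionMessage(err))
    }
  )
}
```

This would at least tell me which chunk(s) trigger the error, even if it does not pinpoint individual rows.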
This leads me to believe that somewhere in "my_table" there are rows which the server is not accepting, but I am not sure how to correctly identify them.
- Is there some way to find out which rows are causing the problems, or is there a way to "trick"/"force" the server into accepting these rows?
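To narrow down a problematic row, I was thinking of bisecting over row ranges: write each half of a range to a throwaway table and recurse into whichever half fails. This is only a sketch, assuming the failure is deterministic and caused by specific rows (the table name `my_table_test` is a placeholder):

```r
library(DBI)

# Sketch: returns TRUE if the given rows can be written to a scratch table.
can_write <- function(rows) {
  ok <- TRUE
  tryCatch(
    dbWriteTable(my_connection, SQL("my_table_test"), my_table[rows, ],
                 overwrite = TRUE),
    error = function(err) ok <<- FALSE
  )
  ok
}

# Sketch: binary search for the first failing row in lo:hi.
find_bad_row <- function(lo, hi) {
  if (lo == hi) return(lo)
  mid <- (lo + hi) %/% 2
  if (!can_write(lo:mid)) find_bad_row(lo, mid) else find_bad_row(mid + 1, hi)
}
```

I am not sure this is valid if the error is actually a timeout rather than a data problem, which is why I am asking.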
Here is a summary (`str()` output) of the data frame in my R global environment:
- var1: chr
- var2: chr
- var3: chr
- var4: chr
- var5: chr
- var6: chr
- var7: chr
- var8: chr
- var9: chr
- var10: chr
- var11: chr
- var12: num
- var13: num
- var14: num
- var15: chr
And here is a summary of the column types for the first 100 rows of this file, which were successfully uploaded to the SQL server:
- var1: CHARACTER VARYING (255)
- var2: CHARACTER VARYING (255)
- var3: CHARACTER VARYING (255)
- var4: CHARACTER VARYING (255)
- var5: CHARACTER VARYING (255)
- var6: CHARACTER VARYING (255)
- var7: CHARACTER VARYING (255)
- var8: CHARACTER VARYING (255)
- var9: CHARACTER VARYING (255)
- var10: CHARACTER VARYING (255)
- var11: CHARACTER VARYING (255)
- var12: DOUBLE PRECISION
- var13: DOUBLE PRECISION
- var14: DOUBLE PRECISION
- var15: CHARACTER VARYING (255)
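Given that all the character columns on the server are limited to 255 characters, one thing I plan to check is whether any value in my character columns is longer than that. A sketch in base R (assuming byte length is what matters for the server's limit):

```r
# Sketch: for each character column, find rows whose values exceed 255 bytes,
# since such values would not fit a VARCHAR(255) column.
char_cols <- names(my_table)[sapply(my_table, is.character)]
too_long <- lapply(char_cols, function(col) {
  which(nchar(my_table[[col]], type = "bytes") > 255)
})
names(too_long) <- char_cols
too_long[lengths(too_long) > 0]  # columns that contain offending rows
```

I do not know whether this is actually the cause of the error, so any guidance here would be appreciated.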
Based on these summaries, is there some way to find out which rows are causing the problems, or is there a way to force the server to accept them?
Thank you!