I have this data set:
var_1 = rnorm(1000,1000,1000)
var_2 = rnorm(1000,1000,1000)
var_3 = rnorm(1000,1000,1000)
sample_data = data.frame(var_1, var_2, var_3)
I broke this data set into groups of 100 rows, thus creating 10 mini datasets:
list_of_dfs <- split(
  sample_data,
  (seq(nrow(sample_data)) - 1) %/% 100
)
table_names <- paste0("sample_", 1:10)
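For what it's worth, a quick sanity check that this split behaves as intended (re-creating the data here with default rnorm parameters just for brevity):

```r
# re-create the data and split it into blocks of 100 rows
sample_data <- data.frame(var_1 = rnorm(1000), var_2 = rnorm(1000), var_3 = rnorm(1000))
list_of_dfs <- split(sample_data, (seq(nrow(sample_data)) - 1) %/% 100)

length(list_of_dfs)                    # 10 mini datasets
all(sapply(list_of_dfs, nrow) == 100)  # TRUE - each has 100 rows
```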
I now want to upload these 10 mini datasets to an SQL server:
library(odbc)
library(DBI)
library(purrr)
# establish connection (e.g. connection <- dbConnect(odbc::odbc(), ...))
map2(
  table_names,
  list_of_dfs,
  function(x, y) dbWriteTable(connection, x, y)
)
The problem is that one of these mini datasets (e.g. sample_6) is not accepted by the SQL server and raises this error:
Error in result_insert_dataframe(rs@prt, values): nanodbc/nanodbc.cpp:1587 : HY008 : Operation canceled
This means that "sample_1", "sample_2", "sample_3", "sample_4", and "sample_5" were all successfully uploaded - but since "sample_6" was rejected, "sample_7", "sample_8", "sample_9", and "sample_10" were never attempted.
- Is there a way to "override" this error and ensure that if one of these "sample_i" datasets is rejected, the computer will skip it and attempt to upload the remaining datasets?
If I were to do this manually, I could just "force" R to skip over the problem data set. For example, imagine that "sample_2" were causing the problem:
dbWriteTable(my_connection, SQL("sample_1"), sample_1)
dbWriteTable(my_connection, SQL("sample_2"), sample_2)
Error in result_insert_dataframe(rs@prt, values): nanodbc/nanodbc.cpp:1587 : HY008 : Operation canceled
dbWriteTable(my_connection, SQL("sample_3"), sample_3)
In the above code, "sample_1" and "sample_3" are successfully uploaded even though "sample_2" was causing a problem.
- Is it possible to override these errors when "bulk-uploading" the datasets?
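To illustrate the behaviour I'm after, here is a sketch using tryCatch around a stand-in function (write_one and its simulated failure on "sample_2" are made up for illustration - in the real code it would be dbWriteTable against a live connection):

```r
library(purrr)

# stand-in for dbWriteTable(): fails on "sample_2" to simulate the HY008 error
write_one <- function(name, df) {
  if (name == "sample_2") stop("Operation canceled")
  paste(name, "uploaded")
}

# wrap each upload in tryCatch so one failure doesn't stop the rest
results <- map2(
  paste0("sample_", 1:3),
  list(data.frame(x = 1), data.frame(x = 2), data.frame(x = 3)),
  function(x, y) tryCatch(write_one(x, y), error = function(e) NA)
)

# "sample_2" fails and returns NA; "sample_1" and "sample_3" still succeed
results
```

(purrr also ships possibly() and safely(), which wrap a function the same way and might be a tidier fit with map2.)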
Thank you!