I am currently trying to download a large database using RPostgres; however, when I write the results to CSVs in batches, the same data is written over and over (always the first 1M records). Here is the code I am currently using:
library(RPostgres)

wrds <- dbConnect(Postgres(),
                  host = 'wrds-pgdata.wharton.upenn.edu',
                  port = 9737,
                  dbname = 'wrds',
                  sslmode = 'require',
                  user = 'username')

res <- dbSendQuery(wrds, "SELECT VARNAMES FROM DATABASE WHERE CONDITIONS")

i <- 0
while (!dbHasCompleted(res)) {
  i <- i + 1
  chunk <- dbFetch(res, 1000000)  # fetch 1,000,000 rows at a time
  filename <- paste("path\\EuropeanSMEs_", i)
  write.csv(data, filename, row.names = FALSE)
}

dbClearResult(res)
dbDisconnect(wrds)
How do I amend the code so that it fetches and writes a new set of 1M rows on every iteration?
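For reference, dbFetch on a result created with dbSendQuery is expected to return the next batch of rows on each call, so the repetition most likely comes from write.csv(data, ...) writing a separate data object rather than the chunk just fetched. A minimal sketch of the loop with that one substitution (plus a paste0-built filename so each file gets a distinct name and a .csv extension, which the current paste call does not produce) would be:

i <- 0
while (!dbHasCompleted(res)) {
  i <- i + 1
  chunk <- dbFetch(res, n = 1000000)               # each call returns the NEXT 1M rows
  filename <- paste0("path\\EuropeanSMEs_", i, ".csv")
  write.csv(chunk, filename, row.names = FALSE)    # write the chunk that was just fetched
}
dbClearResult(res)
dbDisconnect(wrds)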