Took me a while to figure this out. So, I am answering my own question.
You have a .csv file that you want to load fast with the sqldf
package, but a few annoying fields trip up your usual code. Example:
1001, Amy,9:43:00, 99.2
1002,"Ben,Jr",9:43:00, 99.2
1003,"Ben,Sr",9:44:00, 99.3
This code only works on *nix systems.
library(sqldf)
system("echo '1001, Amy,9:43:00, 99.2\n1002,\"Ben,Jr\",9:43:00, 99.2\n1003,\"Ben,Sr\",9:44:00, 99.3' > temp.csv")
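As an aside, not every echo implementation handles embedded newlines the same way. A printf-based variant of the same shell command (which you could wrap in system() just as above) is more predictable, since POSIX printf reapplies '%s\n' to each argument in turn:

```shell
# Write the same sample file with printf: each quoted row becomes
# one line of temp.csv, and the redirect creates the file, so no
# separate touch is needed.
printf '%s\n' '1001, Amy,9:43:00, 99.2' \
              '1002,"Ben,Jr",9:43:00, 99.2' \
              '1003,"Ben,Sr",9:44:00, 99.3' > temp.csv
```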
If you try to read it with
x <- read.csv.sql("temp.csv", header=FALSE)
R complains
Error in try({ :
RS-DBI driver: (RS_sqlite_import: ./temp.csv line 2 expected 4 columns of data but found 5)
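The count makes sense: the sqlite import splits line 2 on every comma, and the comma inside "Ben,Jr" is not treated specially. A quick check with a POSIX awk shows the naive field count:

```shell
# Naive comma splitting sees five fields on line 2, because the
# comma inside "Ben,Jr" is counted like any other delimiter.
echo '1002,"Ben,Jr",9:43:00, 99.2' | awk -F, '{print NF}'
# prints: 5
```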
The sqldf-FAQ.13 solution doesn't work either:
x <- read.csv.sql("temp.csv", filter = "tr -d '\"' ", header=FALSE)
Again, R complains
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings, :
line 1 did not have 5 elements
In fact, the filter only removes the double quotes; the embedded commas remain, so the offending lines still split into five fields.
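You can see this by running the filter by hand on the troublesome line:

```shell
# tr strips the quotes but leaves the comma inside "Ben,Jr" in place,
# so the cleaned line still has five comma-separated fields.
echo '1002,"Ben,Jr",9:43:00, 99.2' | tr -d '"'
# prints: 1002,Ben,Jr,9:43:00, 99.2
```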
So, how to proceed?