I have a question on removing leading and trailing blanks in a data.frame or data.table.
I have working solutions but I'm trying to speed up my code.
Here is some sample data:
number_strings <- paste(" ",seq(from=1, to=100000, by=1)," ",sep="")
data <- as.data.frame(matrix(number_strings,nrow=length(number_strings),ncol=10),stringsAsFactors=FALSE)
colnames(data) <- paste("Col",seq(from=1, to=ncol(data), by=1),sep="")
Here are some columns I would like to trim:
odd_columns <- paste("Col",seq(from=1, to=ncol(data), by=2),sep="")
Here are the three options I have so far:
f_trim_for <- function(x,cols){
  # trim() is not a base R function (e.g. gdata::trim)
  for(i in 1:length(cols))
  {
    x[,cols[i]] <- trim(x[,cols[i]])
  }
  return(x)
}
system.time(data1 <- f_trim_for(data,odd_columns))
f_gsub_for <- function(x,cols){
for(i in 1:length(cols))
{
x[,cols[i]] <- gsub("^\\s+|\\s+$", "", x[,cols[i]], perl = TRUE)
}
return(x)
}
system.time(data2 <- f_gsub_for(data,odd_columns))
f_trim_dt <- function(x,cols){
  # requires library(data.table)
  data.table(x)[, (cols) := trim(.SD), .SDcols = cols]
}
system.time(data3 <- f_trim_dt(data,odd_columns))
Here are the times:
             user  system  elapsed
f_trim_for   1.50    0.08     1.92
f_gsub_for   0.75    0.00     0.74
f_trim_dt    0.81    0.00     1.17
My question: Are there other ways I'm not thinking about that could be faster?
The reason I ask is that my actual data has 1.5 million rows and 110 columns, so speed is a major issue.
I tried some other options, but they don't work:
f_gsub_dt <- function(x,cols){
  # doesn't work: gsub() coerces .SD (a list of columns) to a character vector,
  # so the columns are not trimmed element-wise
  data.table(x)[, (cols) := gsub("^\\s+|\\s+$", "", .SD, perl = TRUE), .SDcols = cols]
}
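If the problem really is the coercion of .SD, then applying gsub() per column with lapply() should be the fix. Here is a sketch of that variant (the name f_gsub_lapply_dt is just mine, and I have not benchmarked it against the versions above):
library(data.table)
f_gsub_lapply_dt <- function(x,cols){
  # apply gsub() to each selected column individually
  data.table(x)[, (cols) := lapply(.SD, function(v) gsub("^\\s+|\\s+$", "", v, perl = TRUE)), .SDcols = cols]
}
system.time(data4 <- f_gsub_lapply_dt(data,odd_columns))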
f_set_dt <- function(x,cols){
  for (j in cols)
  {
    # doesn't work: set()'s second argument should be row indices (or NULL for all rows),
    # not the column values, and gsub() is applied to the column name j
    # instead of the column x[[j]]
    set(x,x[[j]],j,gsub("^\\s+|\\s+$", "", j, perl = TRUE))
  }
  return(x)
}
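My best guess at a corrected set() version is below: i = NULL updates all rows, and gsub() is applied to the column values x[[j]] rather than to the name. It assumes x is already a data.table (here converted with as.data.table()), so the columns are updated by reference; again, I have not timed it on the full data:
library(data.table)
f_set_dt_fixed <- function(x,cols){
  for (j in cols)
  {
    # update column j by reference, all rows (i = NULL)
    set(x, i = NULL, j = j, value = gsub("^\\s+|\\s+$", "", x[[j]], perl = TRUE))
  }
  return(x)
}
system.time(data5 <- f_set_dt_fixed(as.data.table(data),odd_columns))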