
I have a file with ~40 million rows that I need to split on the first comma delimiter.

The following, using the stringr function str_split_fixed, works but is very slow.

library(data.table)
library(stringr)

df1 <- data.frame(id = 1:1000, letter1 = rep(letters[sample(1:25,1000, replace = T)], 40))
df1$combCol1 <- paste(df1$id, ',',df1$letter1, sep = '')
df1$combCol2 <- paste(df1$combCol1, ',', df1$combCol1, sep = '')

st1 <- str_split_fixed(df1$combCol2, ',', 2)

Any suggestions for a faster way to do this?

screechOwl

1 Answer


Update

The stri_split_fixed function in more recent versions of "stringi" has a simplify argument that can be set to TRUE to return a matrix. Thus, the updated solution would be:

stri_split_fixed(df1$combCol2, ",", 2, simplify = TRUE)
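As a minimal, self-contained sketch of what that call does (toy input vector, not the 40-million-row data): n = 2 means everything after the first comma stays together in the second column, and simplify = TRUE gives a character matrix directly.

```r
library(stringi)

# Toy column in the same "id,letter,id,letter" shape as df1$combCol2
x <- c("1,a,1,a", "2,b,2,b")

# n = 2 splits on the first comma only; simplify = TRUE returns a matrix
parts <- stri_split_fixed(x, ",", 2, simplify = TRUE)
parts[, 1]  # "1" "2"
parts[, 2]  # "a,1,a" "b,2,b"
```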

Original answer (with updated benchmarks)

If you are comfortable with the "stringr" syntax and don't want to veer too far from it, but you also want to benefit from a speed boost, try the "stringi" package instead:

library(stringr)
library(stringi)
system.time(temp1 <- str_split_fixed(df1$combCol2, ',', 2))
#    user  system elapsed 
#    3.25    0.00    3.25 
system.time(temp2a <- do.call(rbind, stri_split_fixed(df1$combCol2, ",", 2)))
#    user  system elapsed 
#    0.04    0.00    0.05 
system.time(temp2b <- stri_split_fixed(df1$combCol2, ",", 2, simplify = TRUE))
#    user  system elapsed 
#    0.01    0.00    0.01

Most of the "stringr" functions have "stringi" parallels, but as this example shows, the "stringi" output (before the simplify argument was available) required one extra step of binding the list elements together to produce a matrix instead of a list.
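To illustrate that extra step on a toy vector: without simplify, stri_split_fixed returns a list of character vectors, and do.call(rbind, ...) is what stacks them into the same matrix shape str_split_fixed produces directly.

```r
library(stringi)

x <- c("1,a,1,a", "2,b,2,b")

# Without simplify, stri_split_fixed returns a list of character vectors...
pieces <- stri_split_fixed(x, ",", 2)

# ...so binding the rows is the extra step that yields the matrix
m <- do.call(rbind, pieces)
dim(m)  # 2 2
```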


Here's how it compares with @RichardScriven's suggestion in the comments:

fun1a <- function() do.call(rbind, stri_split_fixed(df1$combCol2, ",", 2))
fun1b <- function() stri_split_fixed(df1$combCol2, ",", 2, simplify = TRUE)
fun2 <- function() {
  do.call(rbind, regmatches(df1$combCol2, regexpr(",", df1$combCol2), 
                            invert = TRUE))
} 

library(microbenchmark)
microbenchmark(fun1a(), fun1b(), fun2(), times = 10)
# Unit: milliseconds
#     expr       min        lq      mean    median        uq       max neval
#  fun1a()  42.72647  46.35848  59.56948  51.94796  69.29920  98.46330    10
#  fun1b()  17.55183  18.59337  20.09049  18.84907  22.09419  26.85343    10
#   fun2() 370.82055 404.23115 434.62582 439.54923 476.02889 480.97912    10
A5C1D2H2I1M1N2O1R2T1
  • Could [this](https://github.com/Rexamine/stringi/issues/105) new function help? It's 10x faster than `simplify2array` and it is able to convert matrices from lists of vectors of nonequal lengths. Maybe we should add a `simplify` argument to `stri_split` and `stri_extract` to do such an output-to-matrix conversion (by default=FALSE for backward-compatibility)? With the new `stri_list2matrix` function I get 4x speedup w.r.t. `do.call`. – gagolews Oct 23 '14 at 21:51
  • I'd say heck yes, that helps. Might be the new `do.call(rbind, ...)` – Rich Scriven Oct 23 '14 at 21:54
  • @RichardScriven: All right, [work in progress](https://github.com/Rexamine/stringi/issues/106) – gagolews Oct 23 '14 at 22:03