
This question is related to an earlier question and akrun's answer to it.

I have wide data with nested columns that I'm converting to long format. The data are in the following partially long format:

  id   var value
  1 diag1     m
  1 diag2     h
  1 diag3     k
  1 diag4     r
  1 diag5     c
  1 diag6     f
  1 poa1      s
  1 poa2      f

and I would like to get them in the following true long format:

  id diag number value
  1 diag      1     m
  1 diag      2     h
  1 diag      3     k
  1 diag      4     r
  1 diag      5     c
  1 diag      6     f
  1 poa       1     s
  1 poa       2     f

The following code achieves this for a smaller number of rows, but my data are a bit more complex (15-digit id, 5-digit value), and I have 634 million rows.

For my data, it takes about 3 seconds for 100 rows, and crashes on anything over 1,000 rows.

Here is some sample, reproducible code with timings:

library(tidyr)
set.seed(10)
n <- 100
diags <- paste("diag", 1:25, sep = "")
poas <- paste("poa", 1:25, sep = "")
var <- c(diags, poas)

# 50 ids, n rows each; var and value recycle to fill the frame
dat <- data.frame(id = rep(1:50, each = n), var = rep(var, 5),
                  value = letters[sample(1:25, 25 * n, replace = TRUE)])

# split var into its text and numeric parts
datlong <- dat %>%
  extract(var, c('diag', 'number'), '([a-z]+)([0-9]+)')
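(For reference, timings like those in the table below can be gathered by wrapping the conversion in `system.time`; this is an assumption about how the numbers were produced, not stated in the post:)

system.time(
  dat %>% extract(var, c('diag', 'number'), '([a-z]+)([0-9]+)')
)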

n      user    system  elapsed 
10^2   0.011   0.006   0.026
10^3   0.041   0.010   0.066
10^4   0.366   0.055   0.421
10^5   3.969   0.445   4.984 
10^6   40.777  13.840  60.969 

My real dataframe looks like this:

str(realdata)
'data.frame':   634358112 obs. of  3 variables:
 $ visitId: Factor w/ 12457767 levels "---------_1981-07-28",..: 8333565 5970358 158415 5610904 3422522 10322908 10973353 10921570 919501 4639482 ...
 $ var    : Factor w/ 48 levels "odiag1","odiag2",..: 1 1 1 1 1 1 1 1 1 1 ...
 $ value  : chr  "42732" "0389" "20280" "9971" ...

I've tried converting the value field to a factor as well, with similar results.

Is there a more efficient way of getting this done?

UPDATE: Results with `separate`, as suggested by @Richard

n      user    system  elapsed 
10^2   0.010   0.001   0.010 
10^3   0.081   0.003   0.084
10^4   0.797   0.011   0.811 
10^5   9.703   0.854  11.041 
10^6   138.401 6.301 146.613

Results with data.table (`tstrsplit`), as suggested by akrun

n      user    system  elapsed 
10^2   0.018   0.001   0.019  
10^3   0.074   0.002   0.076
10^4   0.598   0.024   0.619 
10^5   6.478   0.348   6.781 
10^6   73.581   2.661  75.749

Results with `fread`, as suggested by akrun

n      user    system  elapsed 
10^2   0.019   0.001   0.019  
10^3   0.065   0.003   0.067 
10^4   0.547   0.011   0.547 
10^5   5.321   0.164   5.446  
10^6   52.362   1.363  53.312 
– ano
  • I think using `strsplit` might be faster, i.e. `strsplit(df1$var, '(?<=[^0-9])(?=[0-9])', perl=TRUE)` – akrun Dec 11 '15 at 18:37
  • What about `separate(dat, var, c("diag", "number"), sep = "(?<=[^0-9])(?=[0-9])", perl = TRUE)`? Seems to get you there. – Rich Scriven Dec 11 '15 at 18:41
  • @jeremycg I don't need to split the `id` or `value` fields, which are fixed length, but the `var` field, which is not a fixed length. – ano Dec 11 '15 at 18:52
  • Is your `var` always `odiag` or `poa`? Then you could do `dat %>% extract(var, c('diag', 'number'), '(odiag|poa)(.)')`, which is mildly faster than the alternatives. – jeremycg Dec 11 '15 at 19:59
  • @jeremycg Yes, my data is always `odiag` or `poa`... I had tried to generalize the question in case others would find it useful. Will try your approach. – ano Dec 12 '15 at 01:13
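(A note on the pattern in jeremycg's comment: `(.)` captures only one character, so a two-digit suffix such as `odiag12` would lose a digit. A sketch with a multi-digit capture, assuming the prefixes really are only `odiag` and `poa`:)

dat %>% extract(var, c('diag', 'number'), '^(odiag|poa)([0-9]+)$')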

3 Answers


We can try `tstrsplit` from data.table:

library(data.table)  # v1.9.6+
# split 'var' at the letter/digit boundary into two new columns
setDT(df1)[, c('diag', 'number') := tstrsplit(var,
             '(?<=[^0-9])(?=[0-9])', perl = TRUE)]
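`tstrsplit` is a transposed `strsplit`: it returns one vector per piece, which `:=` then assigns as columns in a single pass. A quick check on toy input (note that `strsplit` wants character input, so a factor column may need `as.character()` first):

tstrsplit(c('diag1', 'poa12'), '(?<=[^0-9])(?=[0-9])', perl = TRUE)
#> [[1]]
#> [1] "diag" "poa"
#>
#> [[2]]
#> [1] "1"  "12"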

Or create a delimiter between the character and numeric parts, then read the result with `fread`:

# insert a comma before the trailing digits, then parse as CSV
fread(paste(sub('(\\d+)$', ',\\1', df1$var), collapse='\n'),
      col.names=c('diag', 'number'))
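This returns a two-column table; one sketch of binding it back onto the original data (an illustration, not from the original answer):

parsed <- fread(paste(sub('(\\d+)$', ',\\1', as.character(df1$var)),
                      collapse = '\n'),
                col.names = c('diag', 'number'))
df1 <- cbind(df1, parsed)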
– akrun

I'd tackle this in two steps. Once you have the sample data:

library(tidyr)
library(dplyr)
n <- 1e5
vars <- paste0(c("diag", "poa"), rep(1:25, each = 2))

dat <- data_frame(
  id = rep(1:50, each = n / 50), 
  var = rep(vars, length = n), 
  value = letters[sample(25, n, replace = TRUE)]
)

Extract the unique variable names, and use your original approach:

labels <- dat %>% 
  select(var) %>% 
  distinct() %>% 
  extract(var, c('diag', 'number'), '([a-z]+)([0-9]+)', remove = FALSE)
labels
#> Source: local data frame [50 x 3]
#> 
#>      var  diag number
#>    (chr) (chr)  (chr)
#> 1  diag1  diag      1
#> 2   poa1   poa      1
#> 3  diag2  diag      2
#> 4   poa2   poa      2
#> 5  diag3  diag      3
#> 6   poa3   poa      3
#> 7  diag4  diag      4
#> 8   poa4   poa      4
#> 9  diag5  diag      5
#> 10  poa5   poa      5
#> ..   ...   ...    ...

Then use a join to add that back to the original dataset:

dat <- dat %>% 
  left_join(labels) %>% 
  select(-var)
#> Joining by: "var"
dat
#> Source: local data frame [100,000 x 4]
#> 
#>       id value  diag number
#>    (int) (chr) (chr)  (chr)
#> 1      1     h  diag      1
#> 2      1     s   poa      1
#> 3      1     x  diag      2
#> 4      1     q   poa      2
#> 5      1     x  diag      3
#> 6      1     e   poa      3
#> 7      1     t  diag      4
#> 8      1     b   poa      4
#> 9      1     n  diag      5
#> 10     1     t   poa      5
#> ..   ...   ...   ...    ...
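At 634 million rows, the copy made by `left_join` may itself be the bottleneck; the same lookup idea can be expressed as a data.table update join, which adds the columns by reference. A sketch, assuming data.table v1.9.6+ and that `var` has relatively few distinct values:

library(data.table)
setDT(dat)
labels <- unique(dat[, .(var)])
labels[, c('diag', 'number') := tstrsplit(as.character(var),
             '(?<=[^0-9])(?=[0-9])', perl = TRUE)]
# update join: fills diag/number by reference, without copying dat
dat[labels, on = 'var', `:=`(diag = i.diag, number = i.number)]
dat[, var := NULL]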
– hadley
  • I'm not sure why, but the above code (and all the examples other people have provided as well) works well on the sample data above, but hangs my R session when I use my real data over 1,000 rows. Things work fine for a few hundred rows, but >1k and it hangs. Since my labels are predictable, for this situation I've just generated a dataframe of labels, skipping @hadley's first step, and then the join only takes 16 seconds for 200 million rows. – ano Dec 14 '15 at 19:13

Here's a way to do a little preprocessing and speed up the actual conversion. This way we run `strsplit` only once per factor level, then use a lookup to get the values for each row.

It's slower at low row counts, but around 6 times faster at 5 * 10^5 rows.

I'm assuming the column var is a factor. If not, try

dat$var <- as.factor(dat$var)

First, split the levels of the factor:

# parse each level once: the text part for diag, the numeric part for number
diag <- sapply(levels(dat$var), function(x)
  strsplit(x, '(?<=[^0-9])(?=[0-9])', perl = TRUE)[[1]][[1]])
number <- as.numeric(sapply(levels(dat$var), function(x)
  strsplit(x, '(?<=[^0-9])(?=[0-9])', perl = TRUE)[[1]][[2]]))

Then grab the correct one for each row, using the factor's underlying integer codes as an index:

dat$number <- number[as.numeric(dat$var)]
dat$diag <- diag[as.numeric(dat$var)]
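A variant of the same idea: since only the levels need parsing, a vectorised `sub` over `levels(dat$var)` avoids the `sapply` loop entirely. A sketch, assuming every level is letters followed by digits:

lev <- levels(dat$var)
# one regex pass over the 48 levels, then index by the factor codes
dat$diag   <- sub('[0-9]+$', '', lev)[as.integer(dat$var)]
dat$number <- as.numeric(sub('^[a-z]+', '', lev))[as.integer(dat$var)]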

Here's a benchmark for 5 * 10^5 rows:

set.seed(10)
n <- 10000
diags <- paste("diag", 1:25, sep = "")
poas <- paste("poa", 1:25, sep = "")
var <- c(diags, poas)

dat <- data.frame(id = rep(1:50, each = n), var = rep(var, 5),
                  value = letters[sample(1:25, 25 * n, replace = TRUE)])

library(tidyr)  # for extract() in the comparison
microbenchmark::microbenchmark(
  factors = {
    diag <- sapply(levels(dat$var), function(x)
      strsplit(x, '(?<=[^0-9])(?=[0-9])', perl = TRUE)[[1]][[1]])
    number <- as.numeric(sapply(levels(dat$var), function(x)
      strsplit(x, '(?<=[^0-9])(?=[0-9])', perl = TRUE)[[1]][[2]]))
    dat$number <- number[as.numeric(dat$var)]
    dat$diag <- diag[as.numeric(dat$var)]
  },
  extract = {
    dat %>% extract(var, c('diag', 'number'), '([a-z]+)([0-9]+)')
  }
)
Unit: milliseconds
    expr       min        lq     mean    median       uq       max neval cld
 factors  51.70709  67.46106 110.5191  77.67737 169.0687  304.3777   100  a 
 extract 599.76868 635.70298 702.1213 660.78699 748.7519 1111.4843   100   b
– jeremycg