
I have a data.frame like this:

DqStr <- "Group   q        Dq       SD.Dq
1 -3.0 0.7351 0.0067
1 -2.5 0.6995 0.0078
1 -2.0 0.6538 0.0093
2 -3.0 0.7203 0.0081
2 -2.5 0.6829 0.0094
2 -2.0 0.6350 0.0112"
Dq1 <- read.table(textConnection(DqStr), header=TRUE)

I would like to randomize group membership, but only among rows with the same value of Dq1$q. This is what I have so far:

g <- unique(Dq1$q)
Dq2 <- data.frame()
for (n in g) {
  Dqq <- Dq1[Dq1$q == n, ]          # rows with this value of q
  Dqq$Group <- sample(Dqq$Group)    # shuffle Group within this subset
  Dq2 <- rbind(Dq2, Dqq)
}

The same could also be done with plyr:

library(plyr)
ddply(Dq1, .(q), function(x) {
  x$Group <- sample(x$Group)
  data.frame(x)
})

As I have to repeat this thousands of times, I wonder whether there is a better (faster) way to do it.
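
For context, the reshuffle sits inside an outer loop roughly like this (just a sketch: the number of repetitions and the statistic computed on each permuted data set are placeholders):

n_reps <- 1000                            # placeholder number of repetitions
perm_stats <- replicate(n_reps, {
  Dq2 <- Dq1
  for (n in unique(Dq2$q)) {              # shuffle Group within each value of q
    idx <- Dq2$q == n
    Dq2$Group[idx] <- sample(Dq2$Group[idx])
  }
  mean(Dq2$Dq[Dq2$Group == 1])            # placeholder statistic on the permuted data
})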

Leosar

2 Answers


If I'm understanding your question correctly, this data.table solution will also work:

library(data.table)
Dq1 <- as.data.table(Dq1)
Dq1[, Group := sample(Group), by = q]

Adding to the benchmark in Robert's answer below:

library(plyr)
library(data.table)

your_code <- function() { g <- unique(Dq1$q); Dq2 <- data.frame(); for (n in g) { Dqq <- Dq1[Dq1$q == n, ]; Dqq$Group <- sample(Dqq$Group); Dq2 <- rbind(Dq2, Dqq) } }
plyr_code <- function() { ddply(Dq1, .(q), function(x) { x$Group <- sample(x$Group); data.frame(x) }) }
base_code <- function() { Dq1$Group <- with(Dq1, ave(Group, q, FUN = sample)) }
data.table_code <- function() { Dq1 <- as.data.table(Dq1); Dq1[, Group := sample(Group), by = q] }

library(microbenchmark)
microbenchmark(your_code(), plyr_code(), base_code(), data.table_code())

Results:

    Unit: milliseconds
              expr      min       lq   median       uq      max neval
       your_code() 6.290822 6.771324 6.848123 6.966648 9.639748   100
       plyr_code() 3.124676 3.307456 3.356095 3.455422 4.564390   100
       base_code() 1.168874 1.301224 1.326055 1.348327 2.269652   100
 data.table_code() 1.124844 1.157866 1.180649 1.209577 1.419750   100

For a data set this small, data.table is not clearly superior. But if you have many rows (and if you use fread to read in your data as a data.table to start with), you'll see significant speedups over plyr, and some speedups over base R. So don't take this benchmark too seriously.
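
For example, a sketch of reading a bigger file straight into a data.table (the file name is a placeholder, and the columns are assumed to match the structure above):

library(data.table)
big <- fread("big_data.txt")             # hypothetical file; fread returns a data.table directly
big[, Group := sample(Group), by = q]    # the same within-q shuffle, no conversion step needed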

Edit: changed to use as.data.table() instead of data.table(), per Arun's comment.
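
Arun also notes that setDT() converts a data.frame to a data.table by reference; a minimal sketch of that variant:

library(data.table)
setDT(Dq1)                               # converts Dq1 to a data.table in place, by reference
Dq1[, Group := sample(Group), by = q]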

Frank
  • I've never used data.table; it seems to be a very interesting package. My datasets are not so big, so I think the base code solution is the best! Thanks! – Leosar Aug 01 '14 at 22:51
  • If the base code solution is the best for your situation, I encourage you to accept that answer rather than this one. – Frank Aug 01 '14 at 23:20
  • Your answer has all the options and I think it's more complete. – Leosar Aug 02 '14 at 02:22
  • +1 A small note: I suggest using `as.data.table()` instead of `data.table()`, wherever specific S3 methods are available. Also `setDT` exists to convert a data.frame to a data.table by reference. At this granularity, you're measuring just the time to convert to `data.table`. Benchmarking at `usec` is not that useful. – Arun Aug 12 '14 at 19:37

With base R, you could use ave:

Dq1$Group <- with(Dq1, ave(Group, q, FUN = sample))
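
To confirm that the shuffle stays within each q, a quick sketch is to compare the q-by-Group counts before and after:

before <- with(Dq1, table(q, Group))
Dq1$Group <- with(Dq1, ave(Group, q, FUN = sample))
identical(before, with(Dq1, table(q, Group)))   # TRUE: each q keeps the same mix of Group labels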

How fast is it?

library(plyr)

your_code <- function() { g <- unique(Dq1$q); Dq2 <- data.frame(); for (n in g) { Dqq <- Dq1[Dq1$q == n, ]; Dqq$Group <- sample(Dqq$Group); Dq2 <- rbind(Dq2, Dqq) } }
plyr_code <- function() { ddply(Dq1, .(q), function(x) { x$Group <- sample(x$Group); data.frame(x) }) }
base_code <- function() { Dq1$Group <- with(Dq1, ave(Group, q, FUN = sample)) }

library(microbenchmark)
microbenchmark(your_code(), plyr_code(), base_code())

Results:

 Unit: microseconds
         expr      min        lq    median        uq      max neval
  your_code()  745.592  855.3770  897.8580  956.0490 2981.026   100
  plyr_code() 2054.471 2186.2665 2259.6075 2530.7875 4771.403   100
  base_code()  216.323  239.0185  260.6925  282.8625  681.794   100
Robert Krzyzanowski