
I have a data frame with 2 million rows and 15 columns. I want to group by 3 of these columns with ddply (all 3 are factors, and there are 780,000 unique combinations of these factors), and get the weighted mean of 3 columns (with weights defined by my data set). The following is reasonably quick:

system.time(a2 <- aggregate(cbind(col1,col2,col3) ~ fac1 + fac2 + fac3, data=aggdf, FUN=mean))
   user  system elapsed 
 91.358   4.747 115.727 

The problem is that I want to use weighted.mean instead of mean to calculate my aggregate columns.

If I try the following ddply on the same data frame (note, I cast to immutable), the following does not finish after 20 minutes:

x <- ddply(idata.frame(aggdf), 
       c("fac1","fac2","fac3"), 
       summarise, 
       w=sum(w), 
       col1=weighted.mean(col1, w), 
       col2=weighted.mean(col2, w),
       col3=weighted.mean(col3, w))

This operation seems to be CPU hungry, but not very RAM-intensive.

EDIT: So I ended up writing this little function, which "cheats" a bit by exploiting a property of the weighted mean: it does one multiplication and one division on the whole object rather than on each slice.

weighted_mean_cols <- function(df, bycols, aggcols, weightcol) {
    # Scale each column to be averaged by its weight...
    df[,aggcols] <- df[,aggcols]*df[,weightcol]
    # ...sum the scaled columns and the weights within each group...
    df <- aggregate(df[,c(weightcol, aggcols)], by=as.list(df[,bycols]), sum)
    # ...then divide by the summed weights to recover the weighted means.
    df[,aggcols] <- df[,aggcols]/df[,weightcol]
    df
}

When I run it as:

a2 <- weighted_mean_cols(aggdf, c("fac1","fac2","fac3"), c("col1","col2","col3"),"w")

I get good performance and somewhat reusable, elegant code.
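
As a quick sanity check (a sketch, not part of the original post; `g` and `sub` are throwaway names), the result for any one factor combination should match weighted.mean computed directly on that slice:

g   <- lapply(a2[1, c("fac1","fac2","fac3")], as.character)
sub <- aggdf[as.character(aggdf$fac1) == g$fac1 &
             as.character(aggdf$fac2) == g$fac2 &
             as.character(aggdf$fac3) == g$fac3, ]
# Should be TRUE (up to floating-point tolerance): both sides are sum(col1*w)/sum(w).
all.equal(a2$col1[1], weighted.mean(sub$col1, sub$w))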

evanrsparks
  • There are heaps and heaps of plyr optimization tips in [this question](http://stackoverflow.com/questions/3685492/r-speeding-up-group-by-operations). Also, don't forget that you can run `ddply` in parallel by linking it to the `foreach` package. – Matt Parker Mar 09 '11 at 19:16
  • Have seen that - tried the tricks I liked, not the ones I didn't. Instead, I went with the above edit which uses base R, remains fairly flexible, and executes quickly (still under 2 minutes). Would still love an explanation for why this is slow in ddply - love the syntax and parallelism features! – evanrsparks Mar 09 '11 at 20:54
  • `ddply` is so slow because it works with data frames, which are unfortunately rather slow. The faster approaches work directly with vectors, which are much much faster – hadley Mar 10 '11 at 14:01
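
A rough illustration of that last point (a sketch with made-up variable names, not from the thread itself; absolute timings will vary by machine, only the ratio matters):

x   <- rnorm(1e6)
df  <- data.frame(x = x)
idx <- sample(1e6, 100)

# Slicing a plain vector many times is cheap...
system.time(for (i in 1:1e4) mean(x[idx]))
# ...while slicing a data frame pays method-dispatch and bookkeeping overhead on every
# call, which is roughly the per-group cost ddply incurs 780,000 times here.
system.time(for (i in 1:1e4) mean(df[idx, "x"]))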

2 Answers


Though ddply is hard to beat for elegance and ease of code, I find that for big data, tapply is much faster. In your case, I would use something like

do.call("cbind", list((w <- tapply(..)), tapply(..)))

Sorry for the dots and possibly faulty understanding of the question, but I am in a bit of a rush and must catch a bus in about minus five minutes!
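
For what it's worth, a fleshed-out version of that sketch (assuming the column names from the question; `grp` and `wm` are just illustrative names) computes group-wise weighted sums with tapply and then divides by the group-wise sums of the weights:

grp <- interaction(aggdf$fac1, aggdf$fac2, aggdf$fac3, drop = TRUE)
# Summed weights per group, then each column's weighted sums divided by them.
w   <- tapply(aggdf$w, grp, sum)
wm  <- do.call("cbind", lapply(c("col1","col2","col3"), function(nm)
  tapply(aggdf[[nm]] * aggdf$w, grp, sum) / w))
colnames(wm) <- c("col1","col2","col3")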

crayola

If you're going to use your edit, why not use rowsum and save yourself a few minutes of execution time?

# Simulate data shaped like the question's: 3 numeric columns to average,
# 3 grouping columns, and a weight column.
nr <- 2e6
nc <- 3
aggdf <- data.frame(matrix(rnorm(nr*nc),nr,nc),
                    matrix(sample(100,nr*nc,TRUE),nr,nc), rnorm(nr))
colnames(aggdf) <- c("col1","col2","col3","fac1","fac2","fac3","w")

# rowsum gives the weighted sums (and the summed weights) per group in one pass;
# dividing by the summed weights then yields the weighted means.
system.time({
aggsums <- rowsum(data.frame(aggdf[,c("col1","col2","col3")]*aggdf$w,w=aggdf$w), 
  interaction(aggdf[,c("fac1","fac2","fac3")]))
agg_wtd_mean <- aggsums[,1:3]/aggsums[,4]
})
#   user  system elapsed 
#  16.21    0.77   16.99 
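
One follow-up note (a sketch, not part of the original answer): rowsum keys its result by the rownames that interaction() produces, so if you need fac1/fac2/fac3 back as separate columns, something like the following recovers them, assuming no factor level itself contains a "." (`keys` is just an illustrative name):

# Split the "fac1.fac2.fac3" rownames back into their three components.
keys <- do.call("rbind", strsplit(rownames(agg_wtd_mean), ".", fixed = TRUE))
agg_wtd_mean <- data.frame(fac1 = keys[,1], fac2 = keys[,2], fac3 = keys[,3],
                           agg_wtd_mean)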
Joshua Ulrich
  • If you up-vote this answer, be sure to up-vote [Marek's answer](http://stackoverflow.com/questions/3685492/r-speeding-up-group-by-operations/3686241#3686241) too... – Joshua Ulrich Mar 09 '11 at 21:57
  • This is nice and efficient, thank you! My real aim here is to get an idea of what makes that operation so slow in ddply, but I guess I could do some profiling and figure out why. – evanrsparks Mar 09 '11 at 22:41