One of the design patterns I use over and over is performing a "group by" or "split-apply-combine" (SAC) on a data frame and then joining the aggregated data back to the original rows. This is useful, for example, when calculating each county's deviation from the state mean in a data frame containing many states and counties. My aggregate calculation is rarely just a simple mean, but a mean makes a good example. I often solve this problem the following way:
require(plyr)
set.seed(1)
## set up some example data
group1 <- rep(1:3, 4)
group2 <- sample(c("A", "B", "C"), 12, replace = TRUE)
values <- rnorm(12)
df <- data.frame(group1, group2, values)
## got some data, so let's aggregate: one mean per level of group1
group1Mean <- ddply(df, "group1", function(x)
  data.frame(meanValue = mean(x$values)))
## join the per-group means back onto the original rows
df <- merge(df, group1Mean)
df
This produces the original data with the group means joined onto every row:
> df
   group1 group2   values meanValue
1       1      A  0.48743 -0.121033
2       1      A -0.04493 -0.121033
3       1      C -0.62124 -0.121033
4       1      C -0.30539 -0.121033
5       2      A  1.51178  0.004804
6       2      B  0.73832  0.004804
7       2      A -0.01619  0.004804
8       2      B -2.21470  0.004804
9       3      B  1.12493  0.758598
10      3      C  0.38984  0.758598
11      3      B  0.57578  0.758598
12      3      A  0.94384  0.758598
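With the group mean joined onto every row, the county-versus-state calculation mentioned at the top reduces to a single vectorized subtraction (using the column names from the example above):

## each observation's deviation from its group mean
df$deviation <- df$values - df$meanValue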
This works, but are there alternative ways of doing this that improve readability, performance, or both?
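For comparison, one alternative I'm aware of is base R's ave(), which computes a grouped statistic and returns it already aligned with the input rows, so no merge step is needed; a minimal sketch (ave() applies mean by default):

## base R: compute the same meanValue column without the merge
df$meanValue <- ave(df$values, df$group1)

The data.table package (assuming it's installed) can do the same with an in-place grouped assignment:

library(data.table)
dt <- as.data.table(df)
## := assigns meanValue computed per group, no separate merge step
dt[, meanValue := mean(values), by = group1]

I don't have a feel for whether these are more idiomatic or faster on large data, which is part of what I'm asking.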