
In R, I have a table with Location, sample_year and count. So,

Location sample_year count  
A        1995        1
A        1995        1  
A        2000        3  
B        2000        1  
B        2000        1  
B        2000        5

I want a summary table that groups by both the 'Location' and 'sample_year' columns and sums 'count' for each unique combination, instead of grouping by just a single column. So, the end result should be:

Location sample_year sum_count
A        1995        2
A        2000        3
B        2000        7

I could merge the columns into a new column to create a unique Location-sample_year key, but this is not a clean solution, especially if I need to scale it up to three columns at some point. There must be a better approach.

Brian Tompsett - 汤莱恩
DeLongTime
    This is not really a tapply() question; perhaps edit the title to be more general? You can get the sum with tapply, but you still need to clean up the aggregated columns: `tapply(x$count, paste(x$Location, x$sample_year), sum)` – mdsumner Mar 07 '11 at 05:43

3 Answers


You can use aggregate with a formula.

First the data:

x <- read.table(textConnection("Location sample_year count  
A        1995        1
A        1995        1  
A        2000        3  
B        2000        1  
B        2000        1  
B        2000        5"), header = TRUE)

Aggregate using sum with a formula specifying the grouping:

aggregate(count ~ Location+sample_year, data = x, sum)
    Location sample_year count
1        A        1995     2
2        A        2000     3
3        B        2000     7
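
The formula interface also scales cleanly to the three-column case the question anticipates. A minimal sketch, assuming a hypothetical third grouping column `species` (not part of the original data):

```r
# Hypothetical data with a third grouping column, "species"
x3 <- data.frame(
  Location    = c("A", "A", "B", "B"),
  sample_year = c(1995, 1995, 2000, 2000),
  species     = c("s1", "s2", "s1", "s1"),
  count       = c(1, 2, 3, 4)
)

# Just add the extra column on the right-hand side of the formula
aggregate(count ~ Location + sample_year + species, data = x3, sum)
```

Each additional grouping variable is simply another term on the right-hand side; no column-pasting is needed.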
mdsumner
  • +1 Neat one-liner... It's always better to use functions available in base R distribution. – aL3xa Mar 07 '11 at 05:47
  • I would not say "always", there are easier to use tools like reshape (and plyr, and stringr, raster, ...) that can allow you to do more in smarter ways, I just haven't made the effort to learn and I get by with the older stuff because I just learnt that way. If I knew the new stuff I would suggest it as well, I voted on your answer! – mdsumner Mar 07 '11 at 05:54
  • IMHO, `reshape` pays off only in case of complex aggregation. `plyr` is another cool package that's really worth learning. – aL3xa Mar 07 '11 at 07:36

Or with reshape package:

library(reshape)
md <- melt(x, measure.vars = "count")
cast(md, Location + sample_year ~ variable, sum)
  Location sample_year count
1        A        1995     2
2        A        2000     3
3        B        2000     7

EDIT:

I used the object x from @mdsumner's answer. Anyway, I recommend sticking with his answer, since it doesn't depend on external packages (the aggregate function comes bundled with R, unless you detach the stats package...). And, BTW, it's faster than the reshape solution.

aL3xa

Or with plyr (using x from @mdsumner)

library(plyr)
ddply(x, .(Location,sample_year), summarise, count = sum(count))
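
For reference, a self-contained version of the above (re-creating x rather than borrowing it), which gives the same three-row summary as the aggregate answer:

```r
library(plyr)

# Re-create the question's data
x <- data.frame(
  Location    = c("A", "A", "A", "B", "B", "B"),
  sample_year = c(1995, 1995, 2000, 2000, 2000, 2000),
  count       = c(1, 1, 3, 1, 1, 5)
)

# Group by Location and sample_year, summing count within each group
ddply(x, .(Location, sample_year), summarise, count = sum(count))
```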
Chase