151

I have a dataframe and I would like to count the number of rows within each group. I regularly use the aggregate function to sum data as follows:

df2 <- aggregate(x ~ Year + Month, data = df1, sum)

Now, I would like to count observations but can't seem to find the proper argument for FUN. Intuitively, I thought it would be as follows:

df2 <- aggregate(x ~ Year + Month, data = df1, count)

But, no such luck.

Any ideas?


Some toy data:

set.seed(2)
df1 <- data.frame(x = 1:20,
                  Year = sample(2012:2014, 20, replace = TRUE),
                  Month = sample(month.abb[1:3], 20, replace = TRUE))
Jaap
MikeTP

18 Answers

100

The tidyverse/dplyr way:

library(dplyr)
df1 %>% count(Year, Month)
Gregor Thomas
geotheory
  • Is there a way to aggregate a variable and do counting too (like 2 functions in aggregation: mean + count)? I need to get the mean of a column and the number of rows for the same value in other column – sop May 15 '15 at 14:06
  • 2
    I'd `cbind` the results of `aggregate(Sepal.Length ~ Species, iris, mean)` and `aggregate(Sepal.Length ~ Species, iris, length)` – geotheory May 16 '15 at 22:28
  • I have done it, but it seems that I get 2 times each column except the one that is aggregated; so I have done a merge on them and it seems to be ok – sop May 18 '15 at 07:20
  • 12
    I don't know but this could be useful as well... `df %>% group_by(group, variable) %>% mutate(count = n())` – Manoj Kumar Dec 14 '16 at 17:57
  • 3
    Yes dplyr is best practice now. – geotheory Dec 15 '16 at 02:07
  • 4
    I'm a daily dplyr user but still wouldn't call it necessarily best practice, more like common personal preference – camille Sep 10 '21 at 21:10
  • You are perfectly right - dplyr isn't best for all cases, e.g. data.table or poorman might be preferable. And what does 'best practice' mean anyway? – geotheory Sep 12 '21 at 00:33
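The mean-plus-count request in the comments above can also be handled in a single `aggregate()` call by returning a named vector from `FUN`. A minimal sketch, using the built-in `iris` dataset as in the comments (the `do.call(data.frame, …)` step is needed because `aggregate()` stores the two values in a matrix column):

```r
# Return both the mean and the count from one FUN
res <- aggregate(Sepal.Length ~ Species, data = iris,
                 FUN = function(v) c(mean = mean(v), n = length(v)))

# Flatten the matrix column into ordinary columns:
# Species, Sepal.Length.mean, Sepal.Length.n
res <- do.call(data.frame, res)
res
```

This avoids the `cbind`/`merge` of two separate `aggregate()` calls.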
83

Following @Joshua's suggestion, here's one way you might count the number of observations in your df dataframe where Year == 2007 and Month == "Nov" (assuming they are columns):

nrow(df[df$Year == 2007 & df$Month == "Nov", ])

and with aggregate, following @GregSnow:

aggregate(x ~ Year + Month, data = df, FUN = length)
Ben
59

The dplyr package does this with the count/tally commands, or with the n() function:

First, some data:

df <- data.frame(x = rep(1:6, rep(c(1, 2, 3), 2)), year = 1993:2004, month = c(1, 1:11))

Now the count:

library(dplyr)
count(df, year, month)
#piping
df %>% count(year, month)

We can also use a slightly longer version with piping and the n() function:

df %>% 
  group_by(year, month) %>%
  summarise(number = n())

or the tally function:

df %>% 
  group_by(year, month) %>%
  tally()
smci
jeremycg
43

An old question without a data.table solution. So here goes...

Using .N

library(data.table)
DT <- data.table(df)
DT[, .N, by = list(year, month)]
mnel
  • 4
    standard nowadays to use `.()` instead of `list()` and `setDT()` to convert a data.frame to data.table. So in one step `setDT(df)[, .N, by = .(year, month)]`. – s_baldur Sep 27 '19 at 11:33
25

The simple option to use with aggregate is the length function, which will give you the length of the vector in the subset. A slightly more robust alternative is function(x) sum( !is.na(x) ), which ignores missing values.
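As a sketch on the question's toy data (recreated here), both variants give the per-group row counts; the sum(!is.na(x)) version simply skips missing values of x:

```r
set.seed(2)
df1 <- data.frame(x = 1:20,
                  Year = sample(2012:2014, 20, replace = TRUE),
                  Month = sample(month.abb[1:3], 20, replace = TRUE))

# Count all rows per Year/Month group
aggregate(x ~ Year + Month, data = df1, FUN = length)

# Same, but robust to NAs in x
aggregate(x ~ Year + Month, data = df1, FUN = function(x) sum(!is.na(x)))
```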

Greg Snow
22

Create a new variable Count with a value of 1 for each row:

df1["Count"] <- 1

Then aggregate dataframe, summing by the Count column:

df2 <- aggregate(df1[c("Count")], by=list(Year=df1$Year, Month=df1$Month), FUN=sum, na.rm=TRUE)
thelatemail
L Tyrone
  • Just to note that if you are using the default, non-formula method for `aggregate`, there is no need to rename each variable in `by=` like `list(year=df1$year)` etc. A `data.frame` is a `list` already so `aggregate(df1[c("Count")], by=df1[c("Year", "Month")], FUN=sum, na.rm=TRUE)` will work. – thelatemail Jul 17 '19 at 22:27
19

An alternative to the aggregate() function in this case would be table() with as.data.frame(), which would also indicate which combinations of Year and Month are associated with zero occurrences:

df<-data.frame(x=rep(1:6,rep(c(1,2,3),2)),year=1993:2004,month=c(1,1:11))

myAns<-as.data.frame(table(df[,c("year","month")]))

And without the zero-occurring combinations:

myAns[which(myAns$Freq>0),]
BenBarnes
8

If you want to include 0 counts for month-years that are missing in the data, you can use a little table magic.

data.frame(with(df1, table(Year, Month)))

For example, the toy data.frame in the question, df1, contains no observations of January 2014.

df1
    x Year Month
1   1 2012   Feb
2   2 2014   Feb
3   3 2013   Mar
4   4 2012   Jan
5   5 2014   Feb
6   6 2014   Feb
7   7 2012   Jan
8   8 2014   Feb
9   9 2013   Mar
10 10 2013   Jan
11 11 2013   Jan
12 12 2012   Jan
13 13 2014   Mar
14 14 2012   Mar
15 15 2013   Feb
16 16 2014   Feb
17 17 2014   Mar
18 18 2012   Jan
19 19 2013   Mar
20 20 2012   Jan

The base R aggregate function does not return an observation for January 2014.

aggregate(x ~ Year + Month, data = df1, FUN = length)
  Year Month x
1 2012   Feb 1
2 2013   Feb 1
3 2014   Feb 5
4 2012   Jan 5
5 2013   Jan 2
6 2012   Mar 1
7 2013   Mar 3
8 2014   Mar 2

If you would like an observation of this month-year with 0 as the count, then the above code will return a data.frame with counts for all month-year combinations:

data.frame(with(df1, table(Year, Month)))
  Year Month Freq
1 2012   Feb    1
2 2013   Feb    1
3 2014   Feb    5
4 2012   Jan    5
5 2013   Jan    2
6 2014   Jan    0
7 2012   Mar    1
8 2013   Mar    3
9 2014   Mar    2
lmo
5

A solution using sqldf package:

library(sqldf)
sqldf("SELECT Year, Month, COUNT(*) as Freq
       FROM df1
       GROUP BY Year, Month")
M--
5

Using collapse package in R

library(collapse)
library(magrittr)
df %>% 
    fgroup_by(year, month) %>%
    fsummarise(number = fnobs(x))
akrun
4

For my aggregations I usually end up wanting to see the mean and "how big is this group" (a.k.a. length). So this is my handy snippet for those occasions:

agg.mean <- aggregate(columnToMean ~ columnToAggregateOn1*columnToAggregateOn2, yourDataFrame, FUN="mean")
agg.count <- aggregate(columnToMean ~ columnToAggregateOn1*columnToAggregateOn2, yourDataFrame, FUN="length")
aggcount <- agg.count$columnToMean
agg <- cbind(aggcount, agg.mean)
maze
3
library(tidyverse)

df_1 %>%
  group_by(Year, Month) %>%
  summarise(count= n()) 
woody70
1

Considering @Ben's answer, R would throw an error if df1 does not contain an x column. But this can be solved elegantly with paste:

aggregate(paste(Year, Month) ~ Year + Month, data = df1, FUN = NROW)

Similarly, it can be generalized if more than two variables are used in grouping:

aggregate(paste(Year, Month, Day) ~ Year + Month + Day, data = df1, FUN = NROW)
paudan
0

You can use the by function, as in by(df1$Year, df1$Month, count) (count here comes from the plyr package), which will produce a list of the needed aggregation.

The output will look like,

df1$Month: Feb
     x freq
1 2012    1
2 2013    1
3 2014    5
--------------------------------------------------------------- 
df1$Month: Jan
     x freq
1 2012    5
2 2013    2
--------------------------------------------------------------- 
df1$Month: Mar
     x freq
1 2012    1
2 2013    3
3 2014    2
helcode
0

There are plenty of wonderful answers here already, but I wanted to throw in one more option for those wanting to add a new column to the original dataset that contains the number of times that row's group is repeated.

df1$counts <- sapply(X = paste(df1$Year, df1$Month), 
                     FUN = function(x) { sum(paste(df1$Year, df1$Month) == x) })

The same could be accomplished by combining any of the above answers with the merge() function.
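As an illustrative sketch of that merge() route on the question's toy data: compute the per-group counts with aggregate(), rename the count column, and merge the result back onto the original rows.

```r
set.seed(2)
df1 <- data.frame(x = 1:20,
                  Year = sample(2012:2014, 20, replace = TRUE),
                  Month = sample(month.abb[1:3], 20, replace = TRUE))

# Per-group counts via aggregate, then attach them to every original row
counts <- aggregate(x ~ Year + Month, data = df1, FUN = length)
names(counts)[names(counts) == "x"] <- "counts"
df1 <- merge(df1, counts, by = c("Year", "Month"))
```

Note that merge() reorders the rows by the grouping columns; keep a row id first if the original order matters.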

filups21
0

If you're trying the aggregate solutions above and you get the error:

invalid type (list) for variable

because you're using date or datetime stamps, try using as.character on one or both of the grouping variables:

aggregate(x ~ as.character(Year) + Month, data = df, FUN = length)

0

I usually use the table function:

df <- data.frame(a = rep(1:8, rep(c(1, 2, 3, 4), 2)),
                 year = rep(2011:2020, 2),
                 month = rep(c(1, 3:10), length.out = 20))

new_data <- as.data.frame(table(df[,c("year","month")]))

Ibrahimli
0

Two very fast collapse options are GRPN and fcount. fcount is a fast version of dplyr::count and uses the same syntax. You can use add = TRUE to add it as a column (mutate-like):

library(collapse)
fcount(df1, Year, Month) #or df1 %>% fcount(Year, Month)

#   Year Month N
# 1 2012   Feb 4
# 2 2014   Jan 3
# 3 2013   Mar 2
# 4 2013   Feb 2
# 5 2012   Jan 2
# 6 2012   Mar 2
# 7 2013   Jan 1
# 8 2014   Feb 3
# 9 2014   Mar 1

GRPN is closer to collapse's original syntax. First, group the data with GRP, then use GRPN. By default, GRPN creates an expanded vector that matches the original data (in dplyr, this would be equivalent to using mutate). Use expand = FALSE to output the summarized vector.

library(collapse)
GRPN(GRP(df1, .c(Year, Month)), expand = FALSE)

Microbenchmark with a 100,000 x 3 data frame and 4997 different groups. collapse::fcount is much faster than any other option.

library(collapse)
library(dplyr)
library(data.table)
library(microbenchmark)

set.seed(1)
df <- data.frame(x = gl(1000, 100),
           y = rbinom(100000, 4, .5),
           z = runif(100000))
dt <- df

mb <- 
  microbenchmark(
  aggregate = aggregate(z ~ x + y, data = df, FUN = length),
  count = count(df, x, y),
  data.table = setDT(dt)[, .N, by = .(x, y)],
  'collapse::fnobs' = df %>% fgroup_by(x, y) %>% fsummarise(number = fnobs(z)),
  'collapse::GRPN' = GRPN(GRP(df, .c(x, y)), expand = FALSE),
  'collapse::fcount' = fcount(df, x, y)
)

# Unit: milliseconds
#             expr      min        lq       mean    median        uq      max neval
#        aggregate 159.5459 203.87385 227.787186 223.93050 246.36025 335.0302   100
#            count  55.1765  63.83560  74.715889  73.60195  79.20170 196.8888   100
#       data.table   8.4483  15.57120  18.308277  18.10790  20.65460  31.2666   100
#  collapse::fnobs   3.3325   4.16145   5.695979   5.18225   6.27720  22.7697   100
#   collapse::GRPN   3.0254   3.80890   4.844727   4.59445   5.50995  13.6649   100
# collapse::fcount   1.2222   1.57395   3.087526   1.89540   2.47955  22.5756   100
Maël