
I have a data.table that is quite large. I would like to do some aggregation on only select rows but using all of the data (i.e. not just the select rows). Here is an example:

library(data.table)
library(lubridate)
dt = data.table(
    date = seq.Date(as.Date("2017-01-01"), as.Date("2017-12-31"), by = "1 day")
)

dt$day = day(dt$date)
dt$value = rnorm(nrow(dt))

What I would like is to take a 30 day rolling average. Normally this can be accomplished by the following:

library(RcppRoll)
ma30 = dt[, roll_mean(value, 30, fill = NA, align = "right"), by = day]

However, in this case I am only concerned with the rolling average when day is equal to 15. Is there some way to write the above statement such that I can take the average of all previous 30 days but only at the 15th of each month? In other words I want to use 365 data points but only make 12 calculations (or 11 since the first would be NA anyway).

Thanks in advance.

fsumathguy

2 Answers


Two possible approaches:

# option 1:
dt[, roll_mn := roll_mean(value, 30, fill = NA, align = "right") * NA^(day != 15)]

# option 2:
dt[, roll_mn := ifelse(day == 15, roll_mean(value, 30, fill = NA, align = "right"), NA)]
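Option 1 works because R defines `NA^0` as 1, while `NA^1` is `NA`. The logical vector `day != 15` is coerced to 0/1 in the exponent, so the multiplication keeps the rolling mean exactly where `day == 15` and turns everything else into `NA`. A minimal illustration:

```r
x   <- c(1.5, 2.5, 3.5)
day <- c(14, 15, 16)

NA^(day != 15)       # NA   1  NA  -- because NA^0 == 1
x * NA^(day != 15)   # NA 2.5  NA  -- values survive only where day == 15
```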

you get:

> dt[1:100]
           date day        value    roll_mn
  1: 2017-01-01   1 -0.422983983         NA
  2: 2017-01-02   2 -1.549878162         NA
....
 13: 2017-01-13  13  0.712481269         NA
 14: 2017-01-14  14 -0.445772094         NA
 15: 2017-01-15  15  0.248979648         NA
 16: 2017-01-16  16 -1.074193951         NA
 17: 2017-01-17  17 -1.827261716         NA
....
 44: 2017-02-13  13  1.054362321         NA
 45: 2017-02-14  14 -0.148639594         NA
 46: 2017-02-15  15  1.018076577 -0.1322037
 47: 2017-02-16  16 -0.721586512         NA
 48: 2017-02-17  17 -0.778778137         NA
....
 72: 2017-03-13  13  0.565180699         NA
 73: 2017-03-14  14 -0.006097837         NA
 74: 2017-03-15  15 -0.438781066  0.1109928
 75: 2017-03-16  16  0.688891096         NA
 76: 2017-03-17  17 -0.499419195         NA
....
 99: 2017-04-09   9 -0.657354771         NA
100: 2017-04-10  10  0.922903744         NA

A benchmark on a larger dataset (including the non-equi join option mentioned by @Frank in the comments):

# create benchmark dataset
set.seed(2018)
dt <- data.table(date = seq.Date(as.Date("0-01-01"), as.Date("2017-12-31"), by = "1 day"))
dt[, `:=` (day = day(date), value = rnorm(nrow(dt)))]

# benchmark
> system.time(dt[, v1 := roll_mean(value, 30, fill = NA, align = "right") * NA^(day != 15)])
   user  system elapsed 
  0.011   0.000   0.011 
> system.time(dt[, v2 := ifelse(day == 15, roll_mean(value, 30, fill = NA, align = "right"), NA)])
   user  system elapsed 
  0.034   0.005   0.039 
> system.time(dt[day == 15, v3 := dt[.SD[, .(d_dn = date - 30, d_up = date)], on=.(date > d_dn, date <= d_up), mean(value), by=.EACHI]$V1])
   user  system elapsed 
  0.043   0.001   0.044 

Warning: the non-equi join approach will also give a value for the first row where day == 15 (2017-01-15), whereas the roll_mean-based options return NA there because the 30-day window is incomplete.
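If that spurious first value matters, the incomplete window can be blanked out after the join. A self-contained sketch on the small example data; the `first_date` guard at the end is my addition, not part of the original answer (note the window covers the 30 days up to and including each 15th, so it is only complete when at least 29 earlier days exist):

```r
library(data.table)

# rebuild the small example data (mday() is data.table's day-of-month helper)
set.seed(2018)
dt <- data.table(date = seq.Date(as.Date("2017-01-01"), as.Date("2017-12-31"), by = "1 day"))
dt[, `:=`(day = mday(date), value = rnorm(nrow(dt)))]

# non-equi join from the benchmark: mean over the 30 days up to and including each 15th
dt[day == 15,
   v3 := dt[.SD[, .(d_dn = date - 30, d_up = date)],
            on = .(date > d_dn, date <= d_up), mean(value), by = .EACHI]$V1]

# guard (my addition): blank out 15ths whose window starts before the data does,
# matching the NA the roll_mean options produce for the first month
first_date <- min(dt$date)
dt[day == 15 & date < first_date + 29, v3 := NA]
```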


Used data:

set.seed(2018)
dt <- data.table(date = seq.Date(as.Date("2017-01-01"), as.Date("2017-12-31"), by = "1 day"))
dt[, `:=` (day = day(date), value = rnorm(nrow(dt)))]
Jaap
  • Does this actually compute all of them and then set all the day != 15 to NA? It looks like it but I have never seen this NA^ usage before. I am trying to avoid doing unnecessary computations. – fsumathguy Jul 13 '18 at 14:03
  • @fsumathguy That's true, added another option. – Jaap Jul 13 '18 at 14:10
  • This is perfect. Thank you. So simple in hindsight. – fsumathguy Jul 13 '18 at 14:15
  • @fsumathguy Although it might feel contradictory, the first option is about 3x faster. – Jaap Jul 13 '18 at 14:18
  • That is extremely counterintuitive. How could an if statement be so burdensome? – fsumathguy Jul 13 '18 at 15:02
  • 1
    Both options compute them all, right? (I guess you know https://stackoverflow.com/questions/16275149/does-ifelse-really-calculate-both-of-its-vectors-every-time-is-it-slow ) I think you should consider a non-equi join, perhaps: `dt[day == 15, v := dt[.SD[, .(d_dn = date - 30, d_up = date)], on=.(date > d_dn, date <= d_up), mean(value), by=.EACHI]$V1]` Looking at `dt[c(46, 74)]` I see it matches your values, but haven't benchmarked or anything. – Frank Jul 13 '18 at 17:20
  • 1
    @fsumathguy An interesting discussion, mostly on `NaN^0`, but also on `NA^0`, can be found here: [Why does NaN^0 == 1](https://stackoverflow.com/questions/17863619/why-does-nan0-1) – Henrik Jul 13 '18 at 21:42

In dplyr you can use case_when to check whether the day is equal to 15 and, only then, keep the rolling mean.

library(dplyr)
library(RcppRoll)
dt %>% mutate(roll_mn = case_when(day == 15 ~ roll_mean(value, 30, align = "right", fill = NA)))
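Note that, like ifelse, case_when evaluates the rolling mean for every row before masking. If only the twelve 15th-of-month rows are wanted, a filter can follow; a self-contained sketch (the data rebuild and `filter` step are my additions):

```r
library(data.table)
library(dplyr)
library(RcppRoll)

# rebuild the example data
set.seed(2018)
dt <- data.table(date = seq.Date(as.Date("2017-01-01"), as.Date("2017-12-31"), by = "1 day"))
dt[, `:=`(day = mday(date), value = rnorm(nrow(dt)))]

result <- dt %>%
  mutate(roll_mn = case_when(day == 15 ~ roll_mean(value, 30, align = "right", fill = NA))) %>%
  filter(day == 15)   # 12 rows; the first (2017-01-15) is NA, the rest carry the rolling mean
```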
jasbner