
I have a time series with a 5-minute resolution and I would like to aggregate (average) all values within 30-minute intervals for each category (X/Y/Z).

I have found many threads here on SO, but none of them aggregates half-hourly, and I have no idea how to combine the half-hourly aggregation with a second grouping column. Furthermore, I would appreciate a data.table solution because of its superior performance: the original tables are several million rows long with thousands of categories.

My data looks like this:

+---------------------+------+------------+
|      Timestamp      | DUID | Meter_Prod |
+---------------------+------+------------+
| 2018-03-01 00:00:00 | X    |          1 |
| 2018-03-01 00:00:00 | Y    |          2 |
| 2018-03-01 00:00:00 | Z    |          3 |
| 2018-03-01 00:05:00 | X    |          1 |
| 2018-03-01 00:05:00 | Y    |          2 |
| 2018-03-01 00:05:00 | Z    |          3 |
| ...                 |      |            |
| 2018-03-01 00:55:00 | X    |          1 |
| 2018-03-01 00:55:00 | Y    |          2 |
| 2018-03-01 00:55:00 | Z    |          3 |
+---------------------+------+------------+

I would like to get this:

+---------------------+------+--------------------+
|      Timestamp      | DUID | Meter_Prod_Average |
+---------------------+------+--------------------+
| 2018-03-01 00:00:00 | X    |                  1 |
| 2018-03-01 00:00:00 | Y    |                  2 |
| 2018-03-01 00:00:00 | Z    |                  3 |
| 2018-03-01 00:30:00 | X    |                  1 |
| 2018-03-01 00:30:00 | Y    |                  2 |
| 2018-03-01 00:30:00 | Z    |                  3 |
+---------------------+------+--------------------+

An example data frame is uploaded here: https://pastebin.com/4bESGTKH
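
If the pastebin link goes stale, a toy data.table of the same shape (using the placeholder values from the table above, not the real dump) can be built like this:

library(data.table)
dt <- CJ(Timestamp = seq(as.POSIXct('2018-03-01 00:00:00'),
                         as.POSIXct('2018-03-01 00:55:00'), by = 5*60),
         DUID = c('X', 'Y', 'Z'))                 # cross join: every 5 minutes x DUID
dt[, Meter_Prod := c(X = 1, Y = 2, Z = 3)[DUID]]  # constant placeholder values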

UDE_Student
  • There are some related posts: https://stackoverflow.com/questions/39987875/r-aggregate-by-date-every-30min-mean, https://stackoverflow.com/questions/40252525/how-to-aggregate-every-30-minutes-in-r and https://stackoverflow.com/questions/27594959/grouping-every-n-minutes-with-dplyr – Ronak Shah Mar 12 '19 at 10:11
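
A minimal sketch of the flooring idea from those posts, assuming `dt` is a data.table with a POSIXct `Timestamp` column (base R `cut()` does the binning, and `as.POSIXct()` turns the bin labels back into timestamps):

dt[, .(Meter_Prod_Average = mean(Meter_Prod)),
   by = .(Timestamp = as.POSIXct(cut(Timestamp, breaks = '30 min')), DUID)]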

1 Answer


You could try a data.table rolling join. Create a new data.table with the time intervals you want:

library(data.table)  # dt is a data.table whose Timestamp column is POSIXct

head(dt)
#             Timestamp DUID Meter_Prod
# 1 2018-03-01 00:00:00    X         15
# 2 2018-03-01 00:00:00    Y        122
# 3 2018-03-01 00:00:00    Z          6
# 4 2018-03-01 00:05:00    X         15
# 5 2018-03-01 00:05:00    Y        122
# 6 2018-03-01 00:05:00    Z          6

start_time <- as.POSIXct('2018-03-01 00:00:00')
# Half-hourly grid; this column is overwritten during the join
new_time <- data.table(Timestamp = seq.POSIXt(start_time, max(dt$Timestamp), by = 30*60))
new_time[, ts := Timestamp]  # Duplicate column, preserved for grouping

The new table needs this dummy copy (ts) of the timestamps because the join column Timestamp is overwritten by dt's timestamps during the rolling join, while ts survives for grouping.

new_dt <- new_time[dt, on = 'Timestamp', roll = +Inf]  # Rolling join: each reading gets the most recent grid time
new_dt[, .(Meter_Prod_Average = mean(Meter_Prod)), by = .(ts, DUID)]  # Average within each half hour per DUID
#                       ts DUID Meter_Prod_Average
#   1: 2018-03-01 00:00:00    X                 15
#   2: 2018-03-01 00:00:00    Y                122
#   3: 2018-03-01 00:00:00    Z                  6
#   4: 2018-03-01 00:30:00    X                 15
#   5: 2018-03-01 00:30:00    Y                122
#  ---
# 140: 2018-03-01 23:00:00    Y                122
# 141: 2018-03-01 23:00:00    Z                  6
# 142: 2018-03-01 23:30:00    X               2696
# 143: 2018-03-01 23:30:00    Y                122
# 144: 2018-03-01 23:30:00    Z                  6

By changing the `roll` argument of the join you can vary the position of the timestamp with respect to the interval.
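
For example, roll = -Inf rolls the other way and assigns each reading to the next grid time instead of the most recent one, labelling each interval by its end rather than its start (a sketch with the same tables; note that readings after the last grid point come back with an NA ts):

new_dt_end <- new_time[dt, on = 'Timestamp', roll = -Inf]  # next observation carried backward
new_dt_end[, .(Meter_Prod_Average = mean(Meter_Prod)), by = .(ts, DUID)]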

Rohit
  • Hm, interestingly, if I convert my dump to a data.table, only the first 15 rows end up in the data.table. Sorry, this is a bit off-topic. – UDE_Student Mar 12 '19 at 10:54
  • @UDE_Student Could you `unlist()` `DUID` before converting to data.table, unless there's a good reason to keep a list-column of character vectors instead of a single character vector? – Aurèle Mar 12 '19 at 11:17
  • @Aurèle Yes, you are right; I unlisted it, but the data.table is nevertheless still 11 rows long. – UDE_Student Mar 12 '19 at 11:34
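
For reference, a minimal sketch of the `unlist()` suggestion above, on a hypothetical frame standing in for the dump (the dump's actual structure isn't shown, so this is only the general idea):

library(data.table)
df <- data.frame(Timestamp = as.POSIXct('2018-03-01 00:00:00') + 0:2 * 300)
df$DUID <- list('X', 'Y', 'Z')   # list-column of 1-element character vectors
df$DUID <- unlist(df$DUID)       # flatten to a plain character vector
dt <- as.data.table(df)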