
I was trying to figure out a way to use dplyr to count, for each row, the number of occurrences of the same id from that row's time up to 1 hour ahead. I tried using a for loop, but it doesn't give me the desired result. I went through Stack Overflow and tried various methods, but to no avail. Any advice or help is greatly appreciated. Thanks

Dataset: https://drive.google.com/file/d/1U186SeBWYyTnJVgUPmow7yknr6K9vu8i/view?usp=sharing

  id           date_time count
1  1 2019-12-27 00:00:00    NA
2  2 2019-12-27 00:00:00    NA
3  2 2019-12-27 00:55:00    NA
4  2 2019-12-27 01:00:00    NA
5  2 2019-12-28 01:00:00    NA
6  3 2019-12-27 22:00:00    NA
7  3 2019-12-27 22:31:00    NA
8  3 2019-12-28 14:32:00    NA

Desired Output

  id           date_time count
1  1 2019-12-27 00:00:00    1     #Count = 1 since there are no other cases 1 hour ahead but itself; only 1 case of id=1
2  2 2019-12-27 00:00:00    3     #Count = 3 as there are 3 cases from 00:00 to 01:00 on 27/12
3  2 2019-12-27 00:55:00    2     #Count = 2 as there are 2 cases from 00:55 to 01:55 on 27/12
4  2 2019-12-27 01:00:00    1     #Count = 1 as only itself from 01:00 to 02:00 on 27/12
5  2 2019-12-28 01:00:00    1     #Count = 1 as only itself from 01:00 to 02:00 on 28/12
6  3 2019-12-27 22:00:00    2
7  3 2019-12-27 22:31:00    1
8  3 2019-12-28 14:32:00    1

My code (I'm stuck):

library(tidyverse)

data <- read.csv('test.csv')
data$date_time <- as.POSIXct(data$date_time)
data$count <- NA

data %>% 
  group_by(id) %>%
  arrange(date_time, .by_group=TRUE)

#Doesn't give the desired output: the loop counts across all ids,
#ignoring the grouping by id
for (i in 1:nrow(data)){
  data$count[i] <- nrow(data[data$date_time <= data$date_time[i] + 60*60 &
                               data$date_time >= data$date_time[i], ])
}

6 Answers


If the OP is only looking for a tidyverse solution, I am happy to delete this.

Here is an approach using a data.table non-equi join:

DT[, onehrlater := date_time + 60*60] 
DT[, count :=
  DT[DT, on=.(id, date_time>=date_time, date_time<=onehrlater),
    by=.EACHI, .N]$N
]

How to read this:

1) DT[, onehrlater := date_time + 60*60] creates a new column of POSIX date time that is one hour later. := updates the original dataset by reference.

2) DT[DT, on=.(id, date_time>=date_time, date_time<=onehrlater), ...] performs a self non-equi join: all rows with i) the same id, ii) a date_time at or after this row's date_time, and iii) a date_time at or before this row's date_time plus one hour are joined to this row.

3) by=.EACHI, .N returns the count of matching rows for each row of the join. $N accesses that count, and DT[, count := ...] updates the original dataset by reference.
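
To see what the join produces before it is assigned back, you can run it on its own (a sketch, using the DT defined under data: below, after onehrlater has been added):

DT[DT, on=.(id, date_time>=date_time, date_time<=onehrlater),
  by=.EACHI, .N]

The N column of this result is what count := assigns back to DT.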

output:

   id           date_time          onehrlater count
1:  1 2019-12-27 00:00:00 2019-12-27 01:00:00     1
2:  2 2019-12-27 00:00:00 2019-12-27 01:00:00     3
3:  2 2019-12-27 00:55:00 2019-12-27 01:55:00     2
4:  2 2019-12-27 01:00:00 2019-12-27 02:00:00     1
5:  2 2019-12-28 01:00:00 2019-12-28 02:00:00     1
6:  3 2019-12-27 22:00:00 2019-12-27 23:00:00     2
7:  3 2019-12-27 22:31:00 2019-12-27 23:31:00     1
8:  3 2019-12-28 14:32:00 2019-12-28 15:32:00     1

data:

library(data.table)
DT <- fread("id           date_time 
1 2019-12-27T00:00:00
2 2019-12-27T00:00:00
2 2019-12-27T00:55:00
2 2019-12-27T01:00:00
2 2019-12-28T01:00:00
3 2019-12-27T22:00:00
3 2019-12-27T22:31:00
3 2019-12-28T14:32:00")
DT[, date_time := as.POSIXct(date_time, format="%Y-%m-%dT%T")]
chinsoon12

The question can be solved using a non-equi self join (in data.table speak). Unfortunately, this is not yet available with dplyr, AFAIK.

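If you want to stay in dplyr, one workaround is to emulate the non-equi join with an equi join on id followed by a filter. This is only a sketch: it materializes every within-id pair of rows, so it is fine for small data but scales poorly:

library(dplyr)

dat %>%
  inner_join(dat, by = "id", suffix = c("", ".y")) %>%
  filter(date_time.y >= date_time,
         date_time.y <= date_time + 3600) %>%
  count(id, date_time, name = "count")
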
Here is an implementation using SQL (sqldf passes POSIXct to SQLite as numeric seconds, which is why date_time + 60*60 works):

library(sqldf)
sqldf("
select d2.id, d2.date_time, count(d1.date_time) as count 
  from dat as d1, dat as d2
  where d1.id = d2.id and d1.date_time between d2.date_time and (d2.date_time + 60*60)
  group by d2.id, d2.date_time")
  id           date_time count
1  1 2019-12-27 00:00:00     1
2  2 2019-12-27 00:00:00     3
3  2 2019-12-27 00:55:00     2
4  2 2019-12-27 01:00:00     1
5  2 2019-12-28 01:00:00     1
6  3 2019-12-27 22:00:00     2
7  3 2019-12-27 22:31:00     1
8  3 2019-12-28 14:32:00     1

Data

# reading directly from google drive, see https://stackoverflow.com/a/33142446/3817004
dat <- data.table::fread(
  "https://drive.google.com/uc?id=1U186SeBWYyTnJVgUPmow7yknr6K9vu8i&export=download")[
    , date_time := anytime::anytime(date_time)]
Uwe
  • Normally one writes such queries as a self left join with a complex ON clause: `sqldf("select a.id, a.date_time, count(*) as count from dat a left join dat b on a.id = b.id and b.date_time between a.date_time and a.date_time + 60*60 group by 1, 2")` – G. Grothendieck Jun 14 '20 at 12:17

fuzzyjoin might be helpful here. You can create a time range for each row of data (setting end_time to 3600 seconds, i.e. 1 hour, after each date_time). Then you can do a fuzzy join of the data with itself, where a date_time falling within this range is counted as within the hour.

library(tidyverse)
library(fuzzyjoin)

df %>%
  mutate(row_id = row_number(),
         end_time = date_time + 3600) %>%
  fuzzy_inner_join(df, 
                  by = c("id", "date_time" = "date_time", "end_time" = "date_time"), 
                  match_fun = list(`==`, `<=`, `>=`)) %>%
  group_by(row_id) %>%
  summarise(id = first(id.x),
            date_time = first(date_time.x),
            count = n())

Output

# A tibble: 8 x 4
  row_id    id date_time           count
   <int> <int> <dttm>              <int>
1      1     1 2019-12-27 00:00:00     1
2      2     2 2019-12-27 00:00:00     3
3      3     2 2019-12-27 00:55:00     2
4      4     2 2019-12-27 01:00:00     1
5      5     2 2019-12-28 01:00:00     1
6      6     3 2019-12-27 22:00:00     2
7      7     3 2019-12-27 22:31:00     1
8      8     3 2019-12-28 14:32:00     1
Ben

I'd probably just write a little helper function here and use the split-lapply-bind method rather than group_by (note that stitching the counts back on with mutate relies on the rows already being ordered by id, as they are here):

f <- function(x) {
  # for each row i, count the rows whose date_time falls
  # within [date_time[i], date_time[i] + 60 minutes]
  sapply(1:nrow(x), function(i) {
    y <- as.numeric(difftime(x$date_time, x$date_time[i], units = "mins"))
    sum(y >= 0 & y <= 60)
  })
}

df %>% mutate(count = do.call(c, df %>% split(df$id) %>% lapply(f)))
#>   id           date_time count
#> 1  1 2019-12-27 00:00:00     1
#> 2  2 2019-12-27 00:00:00     3
#> 3  2 2019-12-27 00:55:00     2
#> 4  2 2019-12-27 01:00:00     1
#> 5  2 2019-12-28 01:00:00     1
#> 6  3 2019-12-27 22:00:00     2
#> 7  3 2019-12-27 22:31:00     1
#> 8  3 2019-12-28 14:32:00     1
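
For comparison, a group_by version of the same idea, with the helper's window logic inlined into mutate (a minimal sketch):

library(dplyr)

df %>%
  group_by(id) %>%
  mutate(count = sapply(date_time, function(t) sum(date_time >= t & date_time <= t + 3600))) %>%
  ungroup()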
Allan Cameron

I've split the data by id, and then for each row I've calculated how many of the date-times at or after it fall within 1 hour of it:

library(tidyverse)

my_data <- tribble(
  ~id,   ~date_time, 
  1, "2019-12-27 00:00:00",
  2, "2019-12-27 00:00:00",    
  2, "2019-12-27 00:55:00",    
  2, "2019-12-27 01:00:00",   
  2, "2019-12-28 01:00:00",    
  3, "2019-12-27 22:00:10",    
  3, "2019-12-27 22:31:00",    
  3, "2019-12-28 14:32:00"    
)

my_data <- my_data %>%
  mutate(
    date_time = lubridate::ymd_hms(date_time)
  ) %>%
  split(.$id) %>%
  # hours elapsed since the first row of this id (cumulative, not successive)
  map(~.x %>% mutate(diff = as.numeric(difftime(date_time, first(date_time), units = "hours"))))

counts <- my_data %>%
  map(function(id_data) 
    map_dbl(seq_len(nrow(id_data)),
        ~{
          start_diff <- id_data %>% 
            slice(.x) %>%
            pluck("diff")

          id_data[.x:nrow(id_data),] %>%
            filter(diff - start_diff <= 1) %>%  # within 1 hour, inclusive
            nrow()
        }
    )
  )

my_data <- my_data %>%
  map2(counts, ~.x %>% mutate(counts = .y)) %>%
  bind_rows() %>%
  select(-diff)
det

You just need to tweak the logic of your loop:

res <- data.frame() # empty df for results

for(i in unique(data$id)){
  tmp <- data[data$id == i,]  # logic is on the id level

  for(r in 1:nrow(tmp)){
    # count the rows of this id falling in the 1-hour window starting at row r
    tmp$count[r] <- sum(tmp$date_time >= tmp$date_time[r] &
                          tmp$date_time <= tmp$date_time[r] + 3600)
  }

  res <- rbind(res, tmp) # populate results
}

this yields:

> res
  id           date_time count
1  1 2019-12-27 00:00:00     1
2  2 2019-12-27 00:00:00     3
3  2 2019-12-27 00:55:00     2
4  2 2019-12-27 01:00:00     1
5  2 2019-12-28 01:00:00     1
6  3 2019-12-27 22:00:00     2
7  3 2019-12-27 22:31:00     1
8  3 2019-12-28 14:32:00     1
Hack-R