I have a data frame (da) where each row has a timestamp in ascending order (the interval between consecutive timestamps is random). I want to keep the rows of da whose time falls within any of the intervals defined by two other vectors (first.times and second.times). That is, I go down first.times and second.times iteratively and check whether da has any times within each interval (min = first.times, max = second.times); those rows I keep, and the rest I drop.

The only way I've figured out how to do this is with a for loop, but it can take a while. Here's the code with some example data:
#Set start and end dates
date1 <- as.POSIXct(strptime('1970-01-01 00:00', format = '%Y-%m-%d %H:%M'))
date2 <- as.POSIXct(strptime('1970-01-05 23:00', format = '%Y-%m-%d %H:%M'))
#Interpolate 250000 dates in between (dates are set to random intervals)
dates <- c(date1 + cumsum(c(0, round(runif(250000, 20, 200)))), date2)
#Set up dataframe
da <- data.frame(dates = dates,
                 a = round(runif(1, 1, 10)),
                 b = rep(c('Hi', 'There', 'Everyone'), length.out = length(dates)))
head(da); dim(da)
#Set up vectors of time
first.times <- seq(date1,      #First time in sequence is date1
                   date2,      #Last time in sequence is date2
                   by = 13*60) #Interval of 13 minutes between each time (13 min * 60 sec)
second.times <- first.times + 5*60 #Second time is 5 min * 60 seconds later
head(first.times); length(first.times)
head(second.times); length(second.times)
#Loop to obtain rows
subsetted.dates <- da[0,]
system.time(for(i in 1:length(first.times)){
  subsetted.dates <- rbind(subsetted.dates,
                           da[da$dates >= first.times[i] & da$dates < second.times[i], ])
})
   user  system elapsed 
  2.590   0.825   3.520
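I suspect collecting the subsets in a list and binding them once at the end would at least avoid growing the data frame on every pass, something like the sketch below (it still scans da once per interval, so I'm not sure it helps much on the real data):

#Same subsetting, but collect the pieces and bind once at the end
pieces <- lapply(seq_along(first.times), function(i) {
  da[da$dates >= first.times[i] & da$dates < second.times[i], ]
})
subsetted.dates <- do.call(rbind, pieces)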
I was wondering if there is a more efficient, faster way of doing what I did in the for loop. It runs pretty fast with this example dataset, but with my actual dataset each iteration can take 45 seconds, and with 1000 iterations to run, this can take a while!
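I've also wondered whether something like findInterval could replace the loop entirely, since the windows here never overlap (each 5-minute window starts 13 minutes after the previous one starts). A rough sketch of what I have in mind, with subsetted.dates2 just being my name for the result:

#Rough sketch with findInterval; assumes the first.times/second.times windows never overlap
idx <- findInterval(da$dates, first.times)                 #last window start at or before each date (0 if none)
keep <- idx >= 1 & da$dates < second.times[pmax(idx, 1)]   #is the date within 5 min of that start?
subsetted.dates2 <- da[keep, ]

But I'm not sure that's the right (or fastest) way to go about it.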
Any help will go a long way!
Thanks!