
I have a data frame (da) where each row has a timestamp in ascending order (the intervals between timestamps are random).

I want to keep the rows of da whose times fall within the intervals defined by two other vectors (first.times and second.times). So I'd go down first.times and second.times iteratively and check whether da has times within each interval (min = first.times, max = second.times); those rows I keep, and the rest I don't.

The only way I've figured out how to do it is with a for loop, but it can take a while. Here's the code with some example data:

#Set start and end dates
date1 <- as.POSIXct(strptime('1970-01-01 00:00', format = '%Y-%m-%d %H:%M'))
date2 <- as.POSIXct(strptime('1970-01-05 23:00', format = '%Y-%m-%d %H:%M'))

#Interpolate 250000 dates in between (dates are set to random intervals)
dates <- c(date1 + cumsum(c(0, round(runif(250000, 20, 200)))), date2)

#Set up dataframe
da <- data.frame(dates = dates,
                 a = round(runif(1, 1, 10)),
                 b = rep(c('Hi', 'There', 'Everyone'), length.out = length(dates)))
head(da); dim(da)

#Set up vectors of time
first.times <- seq(date1,      #First time in sequence is date1
                   date2,      #Last time in sequence is date2
                   by = 13*60) #Interval of 13 minutes between each time (13 min * 60 sec)

second.times <- first.times + 5*60 #Second time is 5 min * 60 seconds later
head(first.times); length(first.times)
head(second.times); length(second.times)

#Loop to obtain rows
subsetted.dates <- da[0,]
system.time(for(i in 1:length(first.times)){
  subsetted.dates <- rbind(subsetted.dates, da[da$dates >= first.times[i] & da$dates < second.times[i],])
})
 user  system elapsed 
2.590   0.825   3.520 

I was wondering if there is a more efficient and faster way of doing what I did in the for loop. It runs fairly fast on this example dataset, but on my actual dataset each iteration can take 45 seconds, and with 1000 iterations to run, this adds up!

Any help will go a long way!

Thanks!

Lalochezia

2 Answers


Never use rbind or cbind within a loop! This leads to excessive copying in memory. See Patrick Burns' The R Inferno: Circle 2 - Growing Objects. Instead, build a list of data frames to rbind once outside the loop.

Since you iterate element-wise over equal-length vectors, consider mapply or its list wrapper, Map:

df_list <- Map(function(f, s) da[da$dates >= f & da$dates < s,],
               first.times, second.times)

# EQUIVALENT CALL
df_list <- mapply(function(f, s) da[da$dates >= f & da$dates < s,],
                  first.times, second.times, SIMPLIFY=FALSE)

You can even add the first and second times into each subset as columns, using transform:

df_list <- Map(function(f, s) transform(da[da$dates >= f & da$dates < s,], 
                                        first_time = f, second_time = s),
               first.times, second.times)

From there, use any of a host of solutions to row-bind the list of data frames:

# BASE
final_df <- do.call(rbind, df_list)

# PLYR
final_df <- plyr::rbind.fill(df_list)

# DPLYR
final_df <- dplyr::bind_rows(df_list)

# DATA TABLE
final_df <- data.table::rbindlist(df_list)

Check benchmark examples here: Convert a list of data frames into one data frame
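As a quick self-contained sanity check of the Map-then-bind pattern (the toy data and variable names below are made up purely for illustration, not the asker's dataset):

```r
# Toy data: 10 rows, one every 100 seconds
base <- as.POSIXct("1970-01-01", tz = "UTC")
toy  <- data.frame(dates = base + seq(0, 900, by = 100), a = 1:10)

# Two intervals, each 200 seconds wide: [0, 200) and [500, 700)
toy_first  <- base + c(0, 500)
toy_second <- toy_first + 200

# One subset per interval, bound together once at the end
df_list  <- Map(function(f, s) toy[toy$dates >= f & toy$dates < s, ],
                toy_first, toy_second)
final_df <- do.call(rbind, df_list)
final_df$a  # rows falling in the two windows: 1 2 6 7
```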

Parfait
  • Thanks Parfait, that's exactly what I needed. I was first introduced to for loops and I'm stuck in a loop (hah!) I can't seem to break out of. Did it take you a while to figure out how to use the apply() functions properly/dynamically? Thanks for the book as well. – Lalochezia Nov 15 '18 at 17:00
  • First, there is nothing wrong with `for` loops. You could still use the method to build a list of data frames (but `rbind` once outside loop). Second, the [apply family *are* loops](https://stackoverflow.com/questions/28983292/is-the-apply-family-really-not-vectorized) but more compact versions that return objects. – Parfait Nov 15 '18 at 19:04
  • And yes, indeed, the apply family did take some time to grasp but taught me too the elegance of the R language and R's object model of the vector: no scalars in R (only a vector of one element); matrix (vector with dim attribute); data frames (list of equal length vectors), etc. – Parfait Nov 15 '18 at 19:08

Comparing to the original setup ...

> subsetted.dates <- da[0,]
> system.time(for(i in 1:length(first.times)){
+   subsetted.dates <- rbind(subsetted.dates, da[da$dates >= first.times[i] & da$dates < second.times[i],])
+ })
   user  system elapsed 
   3.97    0.35    4.33 

... it is possible to get a slight performance improvement using lapply:

> system.time({
+   subsetted.dates <- lapply(1:length(first.times),function(i) da[da$dates >= first.times[i] & da$dates < second.times[i],])
+   subsetted.dates <- do.call(rbind,subsetted.dates)
+ })
   user  system elapsed 
   3.37    0.26    3.75 

Changing the algorithm a bit: if you first build an index of the matching rows using a smaller set of data (just the dates vector) and then apply it once, that leads to even better performance:

> system.time({
+   da_dates <- da$dates
+   da_inds <- lapply(1:length(first.times),function(i) which(da_dates >= first.times[i] & da_dates < second.times[i]))
+   subsetted.dates <- da[unlist(da_inds),]
+ })
   user  system elapsed 
   2.60    0.31    2.94 

Assuming that the time intervals can be put in time order (in this case they already were) and that they do not overlap, the problem becomes even faster:

system.time({ 
  da_date_back_order <- order(da$dates)
  da_sorted_dates <- sort(da$dates)
  da_selected_dates <- rep(FALSE, length(da_sorted_dates))
  j <- 1
  for (i in 1:length(da_sorted_dates)) {
    if (da_sorted_dates[i] >= first.times[j] & da_sorted_dates[i] < second.times[j]) {
      da_selected_dates[i] <- TRUE
    } else if (da_sorted_dates[i] >= second.times[j]) {
      j <- j + 1
      if (j > length(second.times)) {
        break
      }
    }
  }
  subsetted.dates <- da[da_date_back_order[da_selected_dates],]
})

user  system elapsed 
0.98    0.00    1.01 

And if you allow sorting the original da dataset, then the solution is even faster:

system.time({
  da <- da[order(da$dates),]
  da_sorted_dates <- da$dates
  da_selected_dates <- rep(FALSE,length(da_sorted_dates))
  j <- 1
  for (i in 1:length(da_sorted_dates)) {
    if (da_sorted_dates[i] >= first.times[j] & da_sorted_dates[i] < second.times[j]) {
      da_selected_dates[i] <- TRUE
    } else if (da_sorted_dates[i] >= second.times[j]) {
      j = j + 1
      if (j > length(second.times)) {
        break
      }
    }
  }
  subsetted.dates <- da[da_selected_dates,]
})

user  system elapsed 
0.63    0.00    0.63 
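For intuition, here is the same single-pass interval scan on a tiny sorted toy dataset (made up for illustration), cross-checked against a direct per-interval subset:

```r
# Tiny sorted toy dataset: 20 timestamps at 50-second spacing
base <- as.POSIXct("1970-01-01", tz = "UTC")
toy  <- data.frame(dates = base + seq(0, 950, by = 50), a = 1:20)

# Two ordered, non-overlapping intervals: [0, 100) and [400, 500) seconds
toy_first  <- base + c(0, 400)
toy_second <- toy_first + 100

# Single-pass scan: walk the sorted dates once, advancing the interval
# pointer j whenever the current date passes the end of interval j
selected <- rep(FALSE, nrow(toy))
j <- 1
for (i in seq_len(nrow(toy))) {
  if (toy$dates[i] >= toy_first[j] & toy$dates[i] < toy_second[j]) {
    selected[i] <- TRUE
  } else if (toy$dates[i] >= toy_second[j]) {
    j <- j + 1
    if (j > length(toy_second)) break
  }
}
scan_result <- toy[selected, ]

# Cross-check against a direct per-interval subset
direct <- do.call(rbind, Map(function(f, s) toy[toy$dates >= f & toy$dates < s, ],
                             toy_first, toy_second))
identical(scan_result$a, direct$a)  # TRUE
```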
Heikki