
I need to delete rows containing NA, but only if they are leading or trailing, i.e. if they appear before the first or after the last non-NA value of a variable. This is very similar to: How to find (not replace) leading NAs, gaps, and final NAs in data.table columns by category, and: How delete leading and trailing rows by condition in R?

But I need to do this process grouped by the variable "ID". I am going to impute the NAs in between in a later step.

The same should apply to the trailing NAs.

The initial data.frame looks like this:

df1 <- data.frame(ID = rep(c("C1001", "C1008", "C1009", "C1012"), each = 17),
                  Year = rep(1996:2012, 4),
                  x1 = floor(runif(68, 20, 75)),
                  x2 = floor(runif(68, 1, 100)))

# Introduce leading / trailing NAs

df1[1:5,3]<-NA
df1[18:23,4]<-NA
df1[35:42,4]<-NA
df1[49:51,3]<-NA
df1[66:68,3]<-NA


# Introduce "gap" NAs
set.seed(123)
df1$x1[rbinom(68,1,0.1)==1]<-NA
df1$x2[rbinom(68,1,0.1)==1]<-NA

The output is quite long. This makes it possible to properly distinguish "gap" NAs from leading/trailing NAs.

head(df1,10)

      ID Year x1 x2
1  C1001 1996 NA 40
2  C1001 1997 NA 88
3  C1001 1998 NA 37
4  C1001 1999 NA 29
5  C1001 2000 NA 17
6  C1001 2001 42 18
7  C1001 2002 20 48
8  C1001 2003 30 26
9  C1001 2004 66 22
10 C1001 2005 32 67

The output should get rid of the leading NAs (see rows 1:5 above) by group of ID, or rows 18:23 in the example below:

df1[18:28,]

      ID Year x1 x2
18 C1008 1996 33 NA
19 C1008 1997 26 NA
20 C1008 1998 NA NA
21 C1008 1999 51 NA
22 C1008 2000 31 NA
23 C1008 2001 44 NA
24 C1008 2002 NA 56
25 C1008 2003 47 70
26 C1008 2004 39 91
27 C1008 2005 55 62
28 C1008 2006 40 43

The final output should look like this (depending on the random NAs thrown in of course!):

      ID Year x1 x2
6  C1001 2001 42 18
7  C1001 2002 20 48
8  C1001 2003 30 26
9  C1001 2004 66 22
10 C1001 2005 32 67
11 C1001 2006 NA  5
12 C1001 2007 24 70
13 C1001 2008 33 35
14 C1001 2009 60 41
15 C1001 2010 66 82
16 C1001 2011 47 91
17 C1001 2012 41 28
24 C1008 2002 NA 56
25 C1008 2003 47 70
26 C1008 2004 39 91
27 C1008 2005 55 62
28 C1008 2006 40 43
29 C1008 2007 39 54
30 C1008 2008 49  6
31 C1008 2009 NA 26
32 C1008 2010 NA 40
33 C1008 2011 42 20
34 C1008 2012 34 83
44 C1009 2005 51 96
45 C1009 2006 66 96
46 C1009 2007 37 NA
47 C1009 2008 58 26
48 C1009 2009 34 22
52 C1012 1996 51 78
53 C1012 1997 70 17
54 C1012 1998 69 41
55 C1012 1999 35 47
56 C1012 2000 37 86
57 C1012 2001 74 92
58 C1012 2002 54 NA
59 C1012 2003 71 67
60 C1012 2004 45 95
61 C1012 2005 42 52
62 C1012 2006 56 58
63 C1012 2007 28 34
64 C1012 2008 51 35
65 C1012 2009 33  2

Thanks a bunch!

Juan

3 Answers


Here's a solution that relies on rleid to remove only the leading and trailing NAs:

library(data.table)
dt <- as.data.table(df1)

dt[,
   .SD[!(rleid(x1) %in% c(1, max(rleid(x1))) & is.na(x1)) &
         !(rleid(x2) %in% c(1, max(rleid(x2))) & is.na(x2))],
   by = ID
   ]
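
To see why the run ids isolate the boundary NAs, here is rleid() applied to a small made-up vector (not the question's data): consecutive equal values, including consecutive NAs, share a run id, so leading NAs always fall in run 1 and trailing NAs in the last run, while a gap NA lands in an interior run.

library(data.table)

x <- c(NA, NA, 1, 2, NA, 3, NA)  # two leading NAs, one gap NA, one trailing NA
rleid(x)
# [1] 1 1 2 3 4 5 6
!(rleid(x) %in% c(1, max(rleid(x))) & is.na(x))
# [1] FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE  -> the gap NA (position 5) is kept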

To do this for multiple columns automatically, assuming that they all start with x, you can do this:

dt[dt[, Reduce('&',
               lapply(.SD, function(x) !(rleid(x) %in% c(1, max(rleid(x))) & is.na(x)))),
      by = ID,
      .SDcols = grep('x', names(dt))]$V1
   ]
# or using .SD as before

dt[,
   .SD[Reduce('&', lapply(.SD, function(x) !(rleid(x) %in% c(1, max(rleid(x))) & is.na(x)))),
       .SDcols = grep('x', names(dt))],
   by = ID
   ]
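
For clarity, the Reduce('&', lapply(...)) pattern works as follows: lapply() yields one logical keep-vector per column, and Reduce('&') combines them element-wise, so a row is kept only if every column votes to keep it. A minimal sketch with made-up vectors:

keep_x1 <- c(FALSE, TRUE, TRUE)   # hypothetical keep-vector for x1
keep_x2 <- c(TRUE, TRUE, FALSE)   # hypothetical keep-vector for x2
Reduce('&', list(keep_x1, keep_x2))
# [1] FALSE  TRUE FALSE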

Or the same idea with dplyr:

library(dplyr)
library(data.table)

df1%>%
  group_by(ID)%>%
  filter_at(vars(starts_with('x')), all_vars(!(is.na(.) & rleid(.) %in% c(1, max(rleid(.))))))

Results in 42 rows:

# A tibble: 42 x 4
# Groups:   ID [4]
   ID     Year    x1    x2
   <fct> <int> <dbl> <dbl>
 1 C1001  2001    25    54
 2 C1001  2002    28    50
 3 C1001  2003    35    94
 4 C1001  2004    52    34
 5 C1001  2005    60    47
 6 C1001  2006    NA     9
 7 C1001  2007    67    86
 8 C1001  2008    58    40
 9 C1001  2009    61    73
10 C1001  2010    28    18
# ... with 32 more rows
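
As a side note, filter_at() has since been superseded in dplyr; under dplyr >= 1.0.4 the same filter can be sketched with if_all() (still borrowing rleid() from data.table):

library(dplyr)       # >= 1.0.4 for if_all()
library(data.table)  # for rleid()

df1 %>%
  group_by(ID) %>%
  filter(if_all(starts_with('x'),
                ~ !(is.na(.x) & rleid(.x) %in% c(1, max(rleid(.x))))))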
Cole
  • Thanks Cole! I prefer the solution without adding any extra columns as the original data set has about 50 of them. Which brings me to my follow-up question: Do you know a way to apply the above to all but the first two columns? I have a list of the column names maybe that would help? – Juan Oct 26 '19 at 12:13
  • @Juan see edit for a revised ```data.table``` solution. Note this provides 42 rows per your example. Using ```rowSums()``` causes an issue, specifically for row 24 since there is a string of NAs between ```x1``` and ```x2```. – Cole Oct 26 '19 at 12:57
  • @Juan can you clarify whether the expected output is 42 rows or 41 rows? The other answer provides 41 vs. 42 in this one. – Cole Oct 26 '19 at 13:41
  • Looked into it and I believe 42 is better. But I don't think there is a 100% correct answer. – Juan Oct 26 '19 at 13:55

Here is an approach that uses filter_at() to identify leading NA values with cumsum(), and trailing ones with the same idea applied to the reversed vector.

library(dplyr)

df1 %>%
  group_by(ID) %>%
  filter_at(vars(-ID, -Year), all_vars(pmin(cumsum(!is.na(.)), rev(cumsum(!is.na(rev(.))))) != 0))

# A tibble: 42 x 4
# Groups:   ID [4]
   ID     Year    x1    x2
   <fct> <int> <dbl> <dbl>
 1 C1001  2001    42    18
 2 C1001  2002    20    48
 3 C1001  2003    30    26
 4 C1001  2004    66    22
 5 C1001  2005    32    67
 6 C1001  2006    NA     5
 7 C1001  2007    24    70
 8 C1001  2008    33    35
 9 C1001  2009    60    41
10 C1001  2010    66    82
# ... with 32 more rows
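
To see the mechanics, here is the filter condition on a small made-up vector: the forward cumsum() stays 0 while the values are still leading NAs, the reversed cumsum() is 0 once they are trailing, and pmin() of the two is therefore 0 exactly on the rows to drop.

x <- c(NA, NA, 1, NA, 2, NA)
cumsum(!is.na(x))                # 0 0 1 1 2 2
rev(cumsum(!is.na(rev(x))))      # 2 2 2 1 1 0
pmin(cumsum(!is.na(x)), rev(cumsum(!is.na(rev(x))))) != 0
# [1] FALSE FALSE  TRUE  TRUE  TRUE FALSE  -> the gap NA (position 4) is kept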
Ritchie Sacramento

Another option using data.table: for each ID, find the first and last non-NA row of every column, intersect those windows across columns (latest start, earliest end), and keep the rows in between via a non-equi join:

f4 <- function(DT) {
    setindex(DT, ID)
    DT[, rn := .I]  # global row number
    uid <- DT[,.(ID=unique(ID), V=TRUE)]
    rows <- rbindlist(lapply(cols, function(x) {
        # per ID: row number of the first (S) and last (E) non-NA value of column x
        merge(
            DT[, V := !is.na(get(x))][uid, on=c("ID", "V"), mult="first", .(ID, S=rn)],
            DT[uid, on=c("ID", "V"), mult="last", .(ID, E=rn)],
            by="ID")
    }))[, .(S=max(S), E=min(E)) , ID]  # intersect across columns: latest start, earliest end
    # keep each ID's rows inside its [S, E] window via a non-equi join
    DT[rows, on=.(ID, rn>=S, rn<=E), .SD]
}
f4(df1)

output:

       ID Year V1 V2 rn     V
 1: C1001 2001 45 70  6  TRUE
 2: C1001 2002 74 78  6  TRUE
 3: C1001 2003 48  9  6  TRUE
 4: C1001 2004 27 32  6  TRUE
 5: C1001 2005 39  3  6  TRUE
 6: C1001 2006 NA 89  6  TRUE
 7: C1001 2007 22  2  6  TRUE
 8: C1001 2008 56 12  6  TRUE
 9: C1001 2009 29 34  6  TRUE
10: C1001 2010 30 53  6  TRUE
11: C1001 2011 61 46  6  TRUE
12: C1001 2012 23 42  6  TRUE
13: C1008 2002 NA 95 24  TRUE
14: C1008 2003 71 64 24  TRUE
15: C1008 2004 41 92 24  TRUE
16: C1008 2005 45 28 24  TRUE
17: C1008 2006 74 59 24  TRUE
18: C1008 2007 45 16 24  TRUE
19: C1008 2008 57 64 24  TRUE
20: C1008 2009 NA 35 24  TRUE
21: C1008 2010 NA  2 24  TRUE
22: C1008 2011 32 27 24  TRUE
23: C1008 2012 69 41 24  TRUE
24: C1009 2005 30 24 44  TRUE
25: C1009 2006 43 49 44  TRUE
26: C1009 2007 50 NA 44 FALSE
27: C1009 2008 28 72 44  TRUE
28: C1009 2009 43 20 44  TRUE
29: C1012 1996 36 73 52  TRUE
30: C1012 1997 52  4 52  TRUE
31: C1012 1998 67 14 52  TRUE
32: C1012 1999 39 59 52  TRUE
33: C1012 2000 56 12 52  TRUE
34: C1012 2001 25 92 52  TRUE
35: C1012 2002 26 NA 52 FALSE
36: C1012 2003 73 11 52  TRUE
37: C1012 2004 39 50 52  TRUE
38: C1012 2005 65 89 52  TRUE
39: C1012 2006 70 21 52  TRUE
40: C1012 2007 54 86 52  TRUE
41: C1012 2008 37 70 52  TRUE
42: C1012 2009 66 22 52  TRUE
       ID Year V1 V2 rn     V
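
Note that the helper columns rn and V carry through into the result, since .SD still contains them at join time (rn shows the window start S because non-equi join columns take their values from i, and V reflects only the last column processed). If they are unwanted, something like the following drops them afterwards (res is a hypothetical name for the stored result):

res <- f4(df1)
res[, c("rn", "V") := NULL]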

data:

library(data.table)
df1 <- data.frame(ID=(rep(c("C1001","C1008","C1009","C1012"),each=17)),
    Year=(rep(c(1996:2012),4)), V1=(floor(runif(68,20,75))), V2=
        (floor(runif(68,1,100))))
df1[1:5,3]<-NA
df1[18:23,4]<-NA
df1[35:42,4]<-NA
df1[49:51,3]<-NA
df1[66:68,3]<-NA
set.seed(123)
df1$V1[rbinom(68,1,0.1)==1]<-NA
df1$V2[rbinom(68,1,0.1)==1]<-NA
setDT(df1)[, rn := .I]
cols <- paste0("V", 1:2)

timing code for data with a large number of rows and a large number of groups:

set.seed(0L)
if ((BIGDATA <- TRUE)) {
    nr <- 1e7
    nc <- 5
    nid <- 1e5
    dat <- data.table(ID=sample(nid, nr, TRUE),
        as.data.table(matrix(sample(c(1, NA), nr*nc, TRUE), ncol=nc)),
        key="ID")
    cols <- paste0("V", 1:5)
} else {
    df1 <- data.frame(ID=(rep(c("C1001","C1008","C1009","C1012"),each=17)),
        Year=(rep(c(1996:2012),4)), V1=(floor(runif(68,20,75))), V2=
            (floor(runif(68,1,100))))
    df1[1:5,3]<-NA
    df1[18:23,4]<-NA
    df1[35:42,4]<-NA
    df1[49:51,3]<-NA
    df1[66:68,3]<-NA
    set.seed(123)
    df1$V1[rbinom(68,1,0.1)==1]<-NA
    df1$V2[rbinom(68,1,0.1)==1]<-NA
    dat <- setDT(df1)[, rn := .I]
    cols <- paste0("V", 1:2)
}

DT0 <- copy(dat)
DT1 <- copy(dat)
DT2 <- copy(dat)
DT3 <- copy(dat)
DT4 <- copy(dat)

f0 <- function(DT) {
    DT[DT[, Reduce('&',
        lapply(.SD, function(x) {
            r <- rleid(x)
            !(r %in% c(1, max(r)) & is.na(x))
        })),
        ID,
        .SDcols=cols]$V1]
}

f1 <- function(DT) {
    DT[, c("rn", "rid") := .(.I, rowid(ID))][.N:1L, rev_rid := rowid(ID)]

    # for each column: among IDs with an NA at a group boundary, find runs of
    # consecutive NA rows and drop those that start at a group's first row
    # or end at its last row
    for (x in cols) {
        idx <- DT[is.na(get(x)) & ID %in% DT[is.na(get(x)) & (rid==1L | rev_rid==1L), ID],
            if (rid[1L]==1L || rev_rid[.N]==1L) rn,
            cumsum(c(0L, diff(rn) > 1L))]$V1
        DT <- DT[!rn %in% idx]
    }

    DT
}

f2 <- function(DT) {
    DT[, c("rn", "rid") := .(.I, rowid(ID))][.N:1L, rev_rid := rowid(ID)]

    # same idea as f1, but without pre-filtering the IDs
    for (x in cols) {
        DT <- DT[!rn %in% DT[is.na(get(x)),
            if (rid[1L]==1L || rev_rid[.N]==1L) rn,
            cumsum(c(0L, diff(rn) > 1L))]$V1]
    }

    DT
}
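
Both f1 and f2 lean on rowid(): rid numbers the rows within each ID from the top, and assigning rowid(ID) over the reversed row order .N:1L numbers them from the bottom, so rid == 1 marks a group's first row and rev_rid == 1 its last. A small made-up illustration:

library(data.table)

d <- data.table(ID = c("a", "a", "a", "b", "b"))
d[, rid := rowid(ID)][.N:1L, rev_rid := rowid(ID)]
d
#    ID rid rev_rid
# 1:  a   1       3
# 2:  a   2       2
# 3:  a   3       1
# 4:  b   1       2
# 5:  b   2       1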

f3 <- function(DT) {
    DT[, rn := .I]
    # per ID and column: row numbers of the first and last non-NA value,
    # then intersect across columns (latest start, earliest end)
    rows <- DT[, transpose(lapply(.SD, function(x) c(rn[match(TRUE, !is.na(x))],
            rev(rn)[match(TRUE, !is.na(rev(x)))]))),
        ID, .SDcols=cols][, .(S=max(V1), E=min(V2)) , ID]
    DT[rows, on=.(ID, rn>=S, rn<=E), .SD]
}

f4 <- function(DT) {
    setindex(DT, ID)
    DT[, rn := .I]
    uid <- DT[,.(ID=unique(ID), V=TRUE)]
    rows <- rbindlist(lapply(cols, function(x) {
        merge(
            DT[, V := !is.na(get(x))][uid, on=c("ID", "V"), mult="first", .(ID, S=rn)],
            DT[uid, on=c("ID", "V"), mult="last", .(ID, E=rn)],
            by="ID")
    }))[, .(S=max(S), E=min(E)) , ID]
    DT[rows, on=.(ID, rn>=S, rn<=E), .SD]
}

microbenchmark::microbenchmark(f0(DT0), f1(DT1), f2(DT2), f3(DT3), f4(DT4), times=3L)

timings:

Unit: seconds
    expr       min        lq      mean    median        uq       max neval
 f0(DT0)  8.874985  8.950951  8.993281  9.026917  9.052429  9.077942     3
 f1(DT1) 16.249656 16.337013 16.657910 16.424370 16.862038 17.299706     3
 f2(DT2) 18.225748 18.284212 18.391198 18.342676 18.473922 18.605169     3
 f3(DT3) 10.361079 10.612313 10.698897 10.863548 10.867806 10.872063     3
 f4(DT4)  3.106936  3.137846  3.173174  3.168755  3.206293  3.243830     3
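
A plausible reading of the gap: f0 through f3 all evaluate their j expression once per group, so with 1e5 groups the per-group overhead dominates, whereas f4 replaces most of the grouped work with a few joins over the whole table. The second benchmark below, with the same number of rows but far fewer groups, is consistent with this: the differences largely disappear.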

another test with the same number of rows but a much smaller number of groups:

set.seed(0L)
nr <- 1e7
nc <- 5
nid <- 1e2
dat <- data.table(ID=sample(nid, nr, TRUE),
    as.data.table(matrix(sample(c(1, NA), nr*nc, TRUE), ncol=nc)),
    key="ID")
cols <- paste0("V", 1:5)
DT0 <- copy(dat)
DT3 <- copy(dat)
DT4 <- copy(dat)
microbenchmark::microbenchmark(f0(DT0), f3(DT3), f4(DT4), times=3L)

timings:

Unit: seconds
    expr      min       lq     mean   median       uq      max neval
 f0(DT0) 2.317905 2.331944 2.358256 2.345983 2.378431 2.410880     3
 f3(DT3) 2.108385 2.123889 2.132315 2.139392 2.144280 2.149168     3
 f4(DT4) 2.050805 2.079687 2.101211 2.108569 2.126414 2.144258     3
chinsoon12
  • @Cole I have no idea. I just find the non-equi join more readable than foverlaps – chinsoon12 Nov 04 '19 at 11:20
  • I agree and typically stay away from ```foverlaps()``` for the same reason. I looked into it and do not believe that ```foverlaps()``` can be used. – Cole Nov 04 '19 at 11:31