I have a data frame with on the order of 20 numeric columns, each containing significant amounts of NA values. I would like to select a subset of these columns that will give me the most rows containing zero NA values. An exhaustive search would take a lot of computing time--is there a better way to get an approximation?

Here is an example with a smaller data frame (completely arbitrary):

set.seed(2)
foo = as.data.frame(matrix(rnorm(200), nr = 20))
foo[sapply(foo, function(x) x > abs(x[1]))] = NA
foo = foo[-1, ]

round(foo, 3)

       V1     V2     V3     V4     V5     V6     V7     V8     V9    V10
2   0.185 -1.200 -1.959     NA -1.696  0.261  0.139  0.410 -0.638 -1.262
3      NA  1.590 -0.842 -0.703 -0.533 -0.314     NA -0.807 -0.268  0.392
4  -1.130  1.955     NA  0.158 -1.372 -0.750 -0.431  0.086  0.360 -1.131
5  -0.080  0.005     NA  0.506 -2.208 -0.862 -1.044     NA -1.313  0.544
6   0.132 -2.452     NA -0.820     NA     NA  0.538 -0.654 -0.884     NA
7   0.708  0.477 -0.305 -1.999 -0.653  0.940 -0.670     NA     NA  0.025
8  -0.240 -0.597 -0.091 -0.479 -0.285     NA  0.639  0.550 -2.099  0.515
9      NA  0.792 -0.184  0.084 -0.387 -0.421 -1.724 -0.807 -1.239 -0.654
10 -0.139  0.290 -1.199 -0.895  0.387 -0.351 -1.742 -0.997     NA  0.504
11  0.418  0.739 -0.838 -0.921     NA -1.027  0.690     NA     NA -1.272
12     NA  0.319     NA  0.330     NA -0.251  0.331 -0.169     NA -0.077
13 -0.393  1.076 -0.562 -0.142 -1.184  0.472  0.871     NA  0.057 -1.345
14 -1.040 -0.284     NA  0.435 -1.358     NA -2.016 -0.844  0.324 -0.266
15     NA -0.777 -1.048 -0.054 -1.513  0.564  1.213     NA -0.905     NA
16 -2.311 -0.596 -1.966 -0.907 -1.253  0.456  1.200 -1.343 -0.652  0.701
17  0.879 -1.726 -0.323  1.304     NA     NA  1.032     NA -0.262 -0.443
18  0.036 -0.903     NA  0.772  0.008     NA  0.786  0.464 -0.935 -0.789
19     NA -0.559     NA  1.053 -0.843  0.107     NA  0.268     NA -0.857
20  0.432 -0.247     NA -1.410 -0.601 -0.783 -1.454     NA -1.624 -0.746

dim(na.omit(foo))
[1]  1 10

Here is how I've formulated an exhaustive search:

best.list = list()
for (i in 5:ncol(foo)) {
    # get best subset for each size
    collist = combn(ncol(foo), i)
    numobs = apply(collist, 2, function(x) nrow(na.omit(foo[, x])))
    cat("for subset size", i, "most complete obs is", max(numobs), "\n")
    best = which(numobs == max(numobs))[1]
    best.list = c(best.list, list(collist[, best]))
}

For example, best.list[[1]] tells me that if I keep 5 columns I can have 12 complete observations (rows with zero NAs), and that columns 1, 2, 4, 7, and 10 are the ones I should choose.

While this works for very small data frames, it quickly becomes prohibitive with larger ones. Is there a way in R to efficiently estimate the best subset of a given size? The only thing I've been able to find is the subselect package, though I can't figure out how to implement its methods for the problem at hand.

MarkH
  • Do you actually need *the best* solution to your problem? If you can live with an approximation, a greedy approach might be helpful (it is at least easy). Choose columns one at a time, in each step picking one that yields the most complete cases together with the columns you already picked. – Stephan Kolassa Jun 03 '14 at 13:42
  • ... specifically, you have a discrete optimization problem. Define `X <- (is.na(foo))+0`. Then your problem is equivalent to finding a 0/1 vector `v` with `sum(v)==k` and `sum((X%*%v)[,1]==0)` maximal. It may be that the specific way to solve this in R is the least of your worries, R is not really strong in optimization - you may need to think about the algorithm first. The CRAN optimization task view may be helpful, though I didn't find anything right away: http://cran.r-project.org/web/views/Optimization.html – Stephan Kolassa Jun 03 '14 at 13:55
  • An approximation is fine. I don't know a ton about optimization, but I get the gist of greedy algorithms. I'll have a go at it. – MarkH Jun 03 '14 at 14:13
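
A minimal sketch of the greedy idea from the comments above, using only base R and the foo data frame defined earlier (the function greedy.cols, its argument k, and the variable names are illustrative, not from the thread):

# Greedy forward selection (approximation): at each step add the column that
# keeps the most complete cases together with the columns already chosen.
greedy.cols = function(dat, k) {
    chosen = integer(0)
    for (step in seq_len(k)) {
        candidates = setdiff(seq_len(ncol(dat)), chosen)
        scores = sapply(candidates, function(j)
            sum(complete.cases(dat[, c(chosen, j), drop = FALSE])))
        chosen = c(chosen, candidates[which.max(scores)])
    }
    chosen
}

greedy.cols(foo, 5)                              # column indices; compare with best.list[[1]]
sum(complete.cases(foo[, greedy.cols(foo, 5)]))  # complete cases kept by the greedy choice

On this example it ends up with the same 12 complete cases as the exhaustive search, although greedy selection is not guaranteed to be optimal in general.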

2 Answers

Not sure if this is the complete solution, but if you want fast results, data.table and a shadow matrix are likely the right ingredients.

library(data.table)
df = data.table(foo) # your foo dataframe, converted to data.table

y = sort(unlist(df[, lapply(.SD, function(x) sum(is.na(x)))])) # nr of NA per column, increasing
setcolorder(df, names(y)) # now the columns are ordered - fewest NA first

df[, idx := rowSums(is.na(df))] # count nr of NA in rows
df = df[order(idx),] # sort by nr of NA in rows
df[, idx := NULL] # idx not needed anymore
# now your data.table is sorted: columns with the fewest NA on the left,
# rows with the fewest NA on top

# shadow matrix
x = data.table(abs(!is.na(df)))  # 1 = observed value, 0 = NA
y = as.data.table(t(x))        # transpose: each column now holds one row of df
y = y[, lapply(.SD, cumprod)]  # cumprod per row: 1 while complete over columns 1..k, 0 after the first NA
y = as.data.table(t(y))        # transpose back to rows x columns
y[, lapply(.SD, sum)]          # kth value = nr of rows complete in the first k columns

# nr of complete cases when keeping the first 1, 2, ..., 10 columns
# (in the NA-sorted order; the V1..V10 below are positional names
#  created by the transposes, not the original columns):
# V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
# 1: 19 18 16 14 11 10  7  5  2   1
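
To turn the cumulative counts into an actual column choice, take the first k names of the reordered table. A small follow-up sketch, assuming you settle on, say, k = 5 (the names k and keep are illustrative, not from the answer):

k = 5
keep = names(df)[1:k]                          # first k columns after the NA-based reordering
keep
sum(complete.cases(df[, keep, with = FALSE]))  # complete cases kept by this k-column subset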
Henk
  • This seems to be a pretty good approximation. I compared it to the exact solution for a few 15-column matrices and it was usually within 1 or 2 of the true answer. Is there any theory on how the error would scale with larger data frames? – MarkH Jun 03 '14 at 14:46
  • No idea what causes the small difference [other than possible double counting of NA in the same rows?]. By the way, the NA sort of the rows [via the idx column] is not required. Comment them out and you have the same result. – Henk Jun 03 '14 at 15:29

Old post, but there's a built-in function that does this. I'll bet it's quite efficient:

df_noNAs <- df[complete.cases(df[,1:20]),]
Axeman