I'm working with huge data files (several hundred MB each) and need to be as efficient as possible. I'm using lapply to load all the files into a list, but because of how the files are generated, there are a couple of columns I don't need.

# list.files() takes a regular expression, not a glob
dfs <- list.files(pattern = "\\.txt$")
dfss <- lapply(dfs, read.table)

I normally use a `drop = c("ID", "num")` argument with read.table:

file <- read.table(drop=c("ID","num"))

But it won't work here. Any suggestions?

  • Are you sure you normally run that command? [read.table()](http://www.inside-r.org/r-doc/utils/read.table) does not have a `drop` argument, and your call is missing the file and separator arguments. – Parfait Feb 05 '16 at 22:25
  • I think you are looking for the [`fread`](http://www.rdocumentation.org/packages/data.table/functions/fread) function from the *data.table* package. – Jaap Feb 07 '16 at 09:13
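
As that comment suggests, `fread()` from the *data.table* package has a `drop` argument and is considerably faster on files of this size. A minimal sketch of that approach, assuming the files are regular delimited text (fread detects the separator and header automatically):

library(data.table)

dfs <- list.files(pattern = "\\.txt$")
# fread skips the dropped columns while reading, so they never take up memory
dfss <- lapply(dfs, fread, drop = c("ID", "num"))

Note that fread returns data.tables, which are also data.frames.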

1 Answer


What about passing the extra arguments through `lapply`? Note that `read.table()` has no `drop` argument; to skip columns by name with `read.table`, set their `colClasses` to `"NULL"` (this assumes the files have a header row):

dfss <- lapply(dfs, read.table, header = TRUE,
               colClasses = c(ID = "NULL", num = "NULL"))

Everything after the function name in `lapply` is forwarded to each `read.table()` call, and a `"NULL"` class tells `read.table` not to read that column at all.
HubertL
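
Either way, it's worth sanity-checking the call on a single file before looping over all of them; a quick check, again assuming a header row:

str(read.table(dfs[1], header = TRUE, colClasses = c(ID = "NULL", num = "NULL")))

The `ID` and `num` columns should be absent from the resulting data frame.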