
I have a very large training set (~2 GB) in a CSV file. The file is too large to read directly into memory (read.csv() brings the computer to a halt), and I would like to reduce the size of the data using PCA. The problem is that (as far as I can tell) I need to read the file into memory in order to run a PCA algorithm (e.g., princomp()).

I have tried the bigmemory package to read the file in as a big.matrix, but princomp doesn't function on big.matrix objects and it doesn't seem like big.matrix can be converted into something like a data.frame.

Is there a way of running princomp on a large data file that I'm missing?

I'm a relative novice at R, so some of this may be obvious to more seasoned users (apologies in advance).

Thanks for any info.

Paul Hiemstra
user141146
  • Basically you need to do PCA without estimating the sample covariance matrix. There is a large literature on high-dimensional PCA, particularly with applications to image processing and financial markets. However, it's more than likely not something trivial to do. – John Sep 15 '12 at 02:03
  • 2
    How many observations and how many variables does the file contain? – rolando2 Sep 15 '12 at 04:51
  • @rolando2 It contains about 50K rows and ~10000 columns – user141146 Sep 15 '12 at 12:52
  • It should fit in memory (provided that you have a reasonably capable computer -- by this I mean >= 4 GB RAM on board) -- check that you are not reading it in as a string array (i.e. cut the first 100 lines into a separate file and check whether you can import them directly as numbers). – mbq Oct 01 '12 at 09:50
  • Please make it clearer whether your problem is loading the data into R or finding an efficient PCA algorithm for high-dimensional data. – Areza Oct 01 '12 at 10:01
  • Doing PCA on a very large data set in R: http://stackoverflow.com/a/29195752/2333498 – Andres Kull Mar 22 '15 at 14:51
  • You can read big datasets quickly using the `data.table` package, with the function `fread`. – igorkf Jan 30 '20 at 01:08

2 Answers


The way I solved it was by calculating the sample covariance matrix iteratively, so that only a subset of the data needs to be in memory at any point in time. Reading in just a subset of the data can be done with readLines, where you open a connection to the file and read it in chunks. It is a two-step algorithm, which looks roughly like this:

Calculate the mean values per column (assuming the columns are the variables):

  1. Open file connection (con = open(...))
  2. Read 1000 lines (readLines(con, n = 1000))
  3. Calculate the sums per column
  4. Add those sums to a running total (col_sums = col_sums + new_sums)
  5. Repeat 2-4 until end of file.
  6. Divide by the number of rows to get the column means.
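The first pass above can be sketched in R as follows. This is a minimal sketch, assuming a numeric CSV with a header row; the helper name chunked_col_means is my own, not from the answer.

```r
# Hypothetical helper: chunked column means from a numeric CSV with a header row.
chunked_col_means <- function(path, chunk_size = 1000) {
  con <- file(path, open = "r")
  on.exit(close(con))
  readLines(con, n = 1)                         # skip the header row
  col_sums <- 0
  n_rows <- 0
  repeat {
    lines <- readLines(con, n = chunk_size)     # read the next chunk of lines
    if (length(lines) == 0) break               # end of file
    chunk <- do.call(rbind, lapply(strsplit(lines, ","), as.numeric))
    col_sums <- col_sums + colSums(chunk)       # accumulate the column sums
    n_rows <- n_rows + nrow(chunk)
  }
  col_sums / n_rows                             # column means
}
```

A larger chunk_size means fewer iterations but more memory per chunk; 1000 rows is just a starting point.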

Calculate the covariance matrix:

  1. Open file connection (con = open(...))
  2. Read 1000 lines (readLines(con, n = 1000))
  3. Center the chunk by subtracting the column means, then calculate the cross-products using crossprod
  4. Add those cross-products to a running total
  5. Repeat 2-4 until end of file.
  6. Divide by the number of rows minus 1 to get the covariance matrix.
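The second pass can be sketched the same way. Again a minimal sketch under the same assumptions (numeric CSV, header row); chunked_cov is a hypothetical name, and the column means are assumed to come from the first pass.

```r
# Hypothetical helper: chunked covariance matrix, given the column means
# from the first pass over the same file.
chunked_cov <- function(path, col_means, chunk_size = 1000) {
  con <- file(path, open = "r")
  on.exit(close(con))
  readLines(con, n = 1)                         # skip the header row
  xtx <- 0
  n_rows <- 0
  repeat {
    lines <- readLines(con, n = chunk_size)
    if (length(lines) == 0) break
    chunk <- do.call(rbind, lapply(strsplit(lines, ","), as.numeric))
    centered <- sweep(chunk, 2, col_means)      # subtract the column means
    xtx <- xtx + crossprod(centered)            # accumulate t(X) %*% X
    n_rows <- n_rows + nrow(chunk)
  }
  xtx / (n_rows - 1)                            # sample covariance matrix
}
```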

When you have the covariance matrix, just call princomp with covmat = your_covmat and princomp will skip calculating the covariance matrix itself.

In this way the datasets you can process can be much, much larger than your available RAM. During the iterations, memory usage is roughly what one chunk takes (e.g. 1000 rows); after that, it is limited to the covariance matrix (nvar * nvar doubles).
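The final step then looks like this (using cov() on a built-in dataset as a stand-in for the chunked estimate). Note that with only a covariance matrix, princomp can return the loadings and standard deviations but not the scores; to get scores you would make one more pass, multiplying each centered chunk by the loadings.

```r
# Stand-in for the covariance matrix produced by the chunked two-pass algorithm.
cov_mat <- cov(as.matrix(iris[, 1:4]))

# princomp skips its own covariance computation when covmat is supplied.
pc <- princomp(covmat = cov_mat)

summary(pc)       # variance explained per component
pc$loadings       # the rotation; pc$scores is NULL without the raw data
```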

Paul Hiemstra

Things to keep in mind when importing a large dataset:

  1. Memory requirements.

  2. Understand the structure of the dataset being imported; use the following sample code to infer the column classes from the first rows, then pass them to the full read:

    initial <- read.table("datatable.csv", header = TRUE, sep = ",", nrows = 100)

    classes <- sapply(initial, class)

    tabAll <- read.table("datatable.csv", header = TRUE, sep = ",", colClasses = classes)

  3. If the dataset is large, use the fread() function from the data.table package.

  4. Perform dimensionality reduction before applying PCA. For example, remove highly correlated variables or near-zero-variance variables, as they don't contribute to the output.

  5. Then apply PCA.
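Steps 4 and 5 might look like the sketch below. This is a base-R illustration with a hypothetical helper name (filter_then_pca) and an arbitrary 0.9 correlation cutoff; the caret package's nearZeroVar() and findCorrelation() offer more refined versions of the same filtering.

```r
# Hypothetical helper: drop zero-variance and highly correlated columns,
# then run PCA on what remains. cor_cutoff is an arbitrary choice.
filter_then_pca <- function(df, cor_cutoff = 0.9) {
  x <- as.matrix(df)
  x <- x[, apply(x, 2, var) > 0, drop = FALSE]   # drop zero-variance columns
  cx <- abs(cor(x))
  drop <- rep(FALSE, ncol(x))
  for (j in seq_len(ncol(x))[-1]) {
    # drop column j if it is highly correlated with an earlier kept column
    earlier <- seq_len(j - 1)
    if (any(cx[earlier, j] > cor_cutoff & !drop[earlier])) drop[j] <- TRUE
  }
  prcomp(x[, !drop, drop = FALSE], scale. = TRUE)
}
```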

I hope this helps.