
In all other scenarios the algorithm works well; the problem appears only when I set the shrinkage parameter.
To improve the accuracy of the algorithm I am trying various techniques, and the scikit-learn documentation says that using shrinkage can improve accuracy.

I am using the AT&T face dataset provided by scikit-learn (`fetch_olivetti_faces`).

Is this happening because the number of features is large, or is it a memory issue?
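
For reference, a rough back-of-the-envelope estimate of the covariance matrix the shrinkage estimator has to build (assuming the 64x64 Olivetti images, i.e. 4096 features, stored as float64):

    n_features = 64 * 64          # Olivetti faces are 64 x 64 pixels -> 4096 features
    bytes_per_value = 8           # float64
    cov_bytes = n_features ** 2 * bytes_per_value
    print(cov_bytes / 1024 ** 2)  # ~128 MB for a single 4096 x 4096 covariance matrix

So even a single copy of that matrix is around 128 MB, and intermediate copies on a 1 GB machine could plausibly push it over the limit.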

Currently I am running the code on a virtual machine.
The hardware specifications of the machine are:

CPU: Core i5
Memory: 1 GB RAM
Storage: 20 GB SSD
OS: Ubuntu 14.04


This is the code I am running:

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    lda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
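
For context, this is roughly how I call it end to end on the Olivetti data (a minimal sketch; the train/test split, `test_size` and `random_state` here are only illustrative, not my exact setup):

    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    # Load the AT&T / Olivetti faces: 400 images of 64 x 64 pixels (4096 features)
    faces = fetch_olivetti_faces()
    X_train, X_test, y_train, y_test = train_test_split(
        faces.data, faces.target, test_size=0.25, random_state=0)

    lda = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
    lda.fit(X_train, y_train)   # fitting with shrinkage enabled is where the MemoryError appears
    print(lda.score(X_test, y_test))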

This is the error I am getting: a MemoryError raised from `_solve_eigen` during `fit`.

  • Are you sure the error log is the one obtained after the `fit` of the `lda` with the `lsqr` solver? Because in the error log it uses `_solve_eigen`, so you might have another part of your code that is generating the error with a classifier using `solver='eigen'`. However, in both cases the algorithm supports shrinkage, so the problem of the `MemoryError` is not solved yet. I am working on it to come up with a thorough answer ;) – MMF Jan 03 '17 at 14:32

0 Answers