
I was using the cern.colt.matrix.* library for sparse matrix calculations, but I keep running into this error:

Exception in thread "main" java.lang.IllegalArgumentException: matrix too large

I think this is because the constructor throws an exception when nrows*ncols > Integer.MAX_VALUE.

API: http://acs.lbl.gov/software/colt/api/cern/colt/matrix/impl/SparseDoubleMatrix2D.html

Throws: IllegalArgumentException - if rows<0 || columns<0 || (double)columns*rows > Integer.MAX_VALUE.

My matrix has 5787 rows and 418032 columns.
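Those dimensions do trip the documented precondition: the element count exceeds what a single int can index. A quick standalone check (plain Java, no Colt required):

```java
public class SizeCheck {
    public static void main(String[] args) {
        int rows = 5787, cols = 418032;
        // Colt's documented check uses a double product so the test itself can't overflow:
        double elements = (double) cols * rows;           // 2,419,151,184
        System.out.println(elements > Integer.MAX_VALUE); // true -> "matrix too large"
    }
}
```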

This worked fine in MATLAB (the matrix loads just fine and all operations work). How can I resolve this problem? Should I use a different sparse matrix library, slice my matrices, or store the matrix as a row vector of SparseDoubleMatrix1D objects?
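The row-vector idea mentioned above can sidestep the limit, since each row then only needs to address its own columns, never rows*cols cells at once. A minimal sketch using plain JDK maps in place of an array of SparseDoubleMatrix1D (the class name here is hypothetical, not part of Colt):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: one small sparse map per row, so no single structure
// must index more than Integer.MAX_VALUE elements.
public class RowSparseMatrix {
    private final Map<Integer, Double>[] rows;
    private final int cols;

    @SuppressWarnings("unchecked")
    public RowSparseMatrix(int nRows, int nCols) {
        this.rows = new HashMap[nRows];
        this.cols = nCols;
        for (int i = 0; i < nRows; i++) rows[i] = new HashMap<>();
    }

    public void set(int r, int c, double v) {
        if (c < 0 || c >= cols) throw new IndexOutOfBoundsException("col " + c);
        if (v == 0.0) rows[r].remove(c);   // don't store explicit zeros
        else rows[r].put(c, v);
    }

    public double get(int r, int c) {
        return rows[r].getOrDefault(c, 0.0);
    }
}
```

Memory then scales with the number of non-zero entries rather than with rows*cols.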

Thanks.

deepak

1 Answer


You are running into an implementation limit with matrices here: a single Colt matrix cannot address more than Integer.MAX_VALUE elements. I suspect you have to break up the matrix, though you may find you then need more memory than you have.

Depending on how sparse the matrix is, you could need up to about 19 GB just for this matrix: stored densely, it is 2,419,151,184 elements at 8 bytes per double.
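The 19 GB figure corresponds to a fully dense layout, which is the worst case; the arithmetic:

```java
public class DenseSizeEstimate {
    public static void main(String[] args) {
        long elements = 5787L * 418032L;             // 2,419,151,184 cells
        long bytes = elements * 8L;                  // 8 bytes per double
        System.out.printf("%.1f GB%n", bytes / 1e9); // ~19.4 GB if stored densely
    }
}
```

A genuinely sparse matrix stored in a hash-based or compressed format needs memory proportional to its non-zero count instead.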

Peter Lawrey
  • Thanks. But memory is not a constraint for me (I have 30+ GB on a powerful server). The same logic works fine in MATLAB: X = sparseread("smatrix.txt"); [D, W] = size(X). What I'm concerned about is why this should throw an error on columns*rows > Integer.MAX_VALUE (that means the number of elements must be < 2^31-1!). Shouldn't it be something like rows < 2^31 and cols < 2^31 on a 32-bit machine? – deepak Jan 19 '12 at 21:27
  • The maximum size of a single array is Integer.MAX_VALUE (the largest signed 32-bit int value), which is 2^31-1. – Peter Lawrey Jan 19 '12 at 21:59