
I'm looking for a way to retrieve the inverse of a SparseMatrix using RowMajor storage with Eigen 3.3.4. I found an answer to this question, but I'm having trouble adapting it to my case.

The suggested approach is to use a solver and solve Ax = I (where I is the identity) to retrieve the inverse of A.

What I do not understand is the following:

auto A_inv = solver.solve(I);

It looks like, in the author's case, the result of solver.solve() is a SparseMatrix object. In mine, it is a Solve object, and this is why I'm stuck right now.

I'm using SimplicialLDLT as follows:

SimplicialLDLT<SparseMatrix<short, RowMajor>> solver;
solver.compute(A);
SparseMatrix<short, RowMajor> identity(A.rows(), A.cols());
identity.setIdentity();
auto result = solver.solve(identity);
std::cout << "INVERSE MATRIX : " << result << std::endl;

This does not work, since result is a Solve expression (I guess there's no << overload for it). What I'm looking for is a way to do:

SparseMatrix<short,RowMajor> matrix_result = result;

For now, this conversion fails with the error: THIS_METHOD_IS_ONLY_FOR_COLUMN_MAJOR_MATRICES.

The question is: how do you set up a RowMajor solver, and how do you retrieve its result in a usable RowMajor matrix?

Thanks in advance.

With ColMajor

Using ColMajor for both the solver and the right-hand side (the left-hand side is still RowMajor; I can't change it), the result is wrong. The matrix A is:

A = 1 -1 0
    0  1 0
    0  0 1

The code is :

SparseMatrix<short, RowMajor> A;
// set A values
SparseMatrix<short, ColMajor> I(A.rows(), A.cols());
I.setIdentity();
SimplicialLDLT<SparseMatrix<short, ColMajor>> solver;
solver.compute(A);
SparseMatrix<short, ColMajor> result = solver.solve(I);
std::cout << result << std::endl;

This displays the following wrong result:

1 0 0 
0 1 0
0 0 1

Where the expected result was :

1 1 0 
0 1 0
0 0 1

I'm probably doing something wrong here, but I can't figure it out.

Solution: change the solver to SparseLU; SimplicialLDLT only works for symmetric matrices.

Wassim
    Result is likely dense. Are you sure you want that? – Avi Ginsburg Jan 22 '18 at 17:38
  • Thanks for answering. Yeah I can afford it for 1 matrix. I need exactly the result because of encapsulation ; I'm using multiple sparse matrix engines and for all of them, I need to retrieve the inverse matrix with the same call (this is why I can't use decomposition). – Wassim Jan 22 '18 at 20:26
  • If you really insist on inverting a sparse matrix (I don't know what you are really doing, but I'm pretty sure there is an alternative without an explicit inverse), then use a column-major matrix and then copy it to a row-major one. Note that internally the sparse rhs will be converted to dense column vectors for the actual solving. The dense results will then be copied to the sparse result by pruning exact zeros. So this is going to be very slow. – ggael Jan 22 '18 at 22:35
  • I tried your way, changing the rhs AND the solver to ColMajor. It works in that it compiles and returns a matrix, but not the expected one; the result is wrong. I added details to the main post. Thanks for your help, people. – Wassim Jan 23 '18 at 08:33
    `SimplicialLDLT` only works for symmetric matrices (it will in fact just consider values of either the upper or lower triangular half) – chtz Jan 23 '18 at 10:44
  • Thanks for the answer, changing to SparseLU solved the problem. Adding it to main post. – Wassim Jan 23 '18 at 12:48
  • @ggael : the inverse matrix is only used in a product. I saw you suggested [someone](https://listengine.tuxfamily.org/lists.tuxfamily.org/eigen/2010/10/msg00016.html) to use LU or LLT decomposition but I'm not sure how to do it. – Wassim Jan 23 '18 at 13:13
  • If your equation read: `inv(A)*B` then do `SparseLU<...> lu(A); x = lu.solve(B);`. Is it helping? – ggael Jan 23 '18 at 20:45
  • Yeah it is. For instance if I want to do `A*B*C` with `B` inverse of `A`, I'll just do as you said and then `x*C`. If so, there's still a last thing, but I guess it's a gap from me : if we can't afford using the "real" inverse matrix because it's not really sparse, why don't we have the same problem for x? It's not really the topic anymore so don't bother with that, I'll try to understand it by myself. I guess I'll need to take a look to the inversion matrix algorithms in general case. Thanks for the help ! – Wassim Jan 24 '18 at 13:15

0 Answers