
When I try to get the eigenvalues of a specific matrix, I get different answers from `EigenSolver` in the Eigen library and from MATLAB. Can someone tell me the reason?

C++ code:

#include <Eigen/Eigenvalues>
#include <iostream>

// H_ is the matrix whose eigenvalues differ between the two tools
Eigen::EigenSolver<Eigen::MatrixXd> solver_1(H_);
std::cout << "\nhes\n" << solver_1.eigenvalues() << std::endl;
std::cout << "\nhes1\n" << solver_1.pseudoEigenvalueMatrix().diagonal() << std::endl;

Matlab:

[Q1,L1] = eig(D)   % D is the same matrix as H_ above
hes = diag(L1)

C++ answer:

[screenshot of the Eigen eigenvalues]

MATLAB answer:

[screenshot of the MATLAB eigenvalues]

    @rayryeng Eigenvalues are "unique"... i.e. they should be the same regardless of the method used to obtain them. – jodag Aug 09 '17 at 15:15
    @jodag Yes, that's right. I'll delete the comment. However, the duplicate still applies. It should help the OP. – rayryeng Aug 09 '17 at 15:20
  • @rayryeng I'm not convinced that this is the same question or that the answer is satisfactory. In the linked question the OP got the exact same eigenvalues with different eigenvectors, and it doesn't appear that any answers satisfactorily explain why. – jodag Aug 10 '17 at 02:06
  • @DavidWillo According to [LAPACK docs](http://www.netlib.org/lapack/lug/node89.html) the error bound for eigenvalues is proportional to the norm of the matrix (looking at `EERRBD = EPSMCH * ANORM`). One possibility is that the norm of your matrix is very large which creates rounding errors that are especially noticeable for the smaller eigenvalues. – jodag Aug 10 '17 at 02:11
  • @jodag When I try `std::cout << "\n test H_*Q = Q*L\n" << H_ * solver_1.pseudoEigenvectors() << "\n\n" << solver_1.pseudoEigenvectors()*solver_1.pseudoEigenvalueMatrix() << "\n\n";` I get two matrices that look exactly the same. It's quite confusing, because the eigenvalues I get from the Eigen library satisfy the constraint. – David Willo Aug 10 '17 at 02:14
  • @jodag I think so. It is probably a rounding error. – David Willo Aug 10 '17 at 02:17
  • @DavidWillo The fact that some of the eigenvalues are many orders of magnitude larger than others imply that the matrix you are operating on is [poorly conditioned](https://en.wikipedia.org/wiki/Condition_number). This can result in the issues that you are seeing. – jodag Aug 10 '17 at 02:17
  • @DavidWillo I'm not sure how you are creating the matrix, but if it's created from real data then you probably want to make sure **at least** 21 samples go into constructing the matrix. If its still poorly conditioned then that implies that all of your data falls on, or very near a lower dimensional hyperplane. – jodag Aug 10 '17 at 02:19
  • @DavidWillo In response to the fact that both methods result in the same reconstruction, that is because the eigenvectors with large eigenvalues contribute significantly more than the others in the reconstruction. Setting all the eigenvalues below some threshold (maybe even 1000) to zero should result in nearly, if not the same, reconstruction. – jodag Aug 10 '17 at 02:25
  • @jodag Thanks for your answer! Yes, the matrix is created in real time; actually it's a Hessian matrix I get in a loop of my program, and a bad result (like the minimal eigenvalue here) will make the algorithm fail. I think I'd better modify my algorithm, because this problem seems inevitable... – David Willo Aug 10 '17 at 02:43

0 Answers