
I have a matrix in Python:

import numpy as np

x = np.array([[-1, 2, 3],
              [4, 5, 6],
              [7, 8, 9],
              [50, 23, -30],
              [-23, 23, -40],
              [-233, 0, 234]], dtype=np.float64)

but when I look at the eigenvalues of this matrix multiplied by its transpose, some of them come out negative:

np.linalg.eigvals(x @ x.T) >= 0

in fact gives:

array([ True, False,  True,  True,  True,  True])

Do you have an explanation for this problem?
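Printing the raw eigenvalues, rather than the boolean mask, makes the issue visible. A minimal sketch, assuming the same `x` as above (the Gram matrix `x @ x.T` is 6×6 but has rank at most 3, so three eigenvalues are exactly zero in exact arithmetic):

```python
import numpy as np

x = np.array([[-1, 2, 3],
              [4, 5, 6],
              [7, 8, 9],
              [50, 23, -30],
              [-23, 23, -40],
              [-233, 0, 234]], dtype=np.float64)

G = x @ x.T                    # symmetric PSD in exact arithmetic, rank <= 3
w = np.linalg.eigvals(G)
print(np.sort(w.real))
# The three mathematically-zero eigenvalues show up as tiny values
# (on the order of 1e-13 relative to the ~1e5 scale of G), and some
# of them land slightly on the negative side.
```

Using `np.linalg.eigvalsh` instead of `eigvals` exploits the symmetry of `G` and tends to give cleaner (and guaranteed-real) results for symmetric matrices.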

Blackoli

Comments:
  • This is probably one of the zero eigenvalues, shifted slightly away from zero in the negative direction by numerical rounding error. If you look at the actual eigenvalues, it will be something very small, like -3e-16. – Nick Alger Feb 08 '22 at 19:26
  • You can print the eigenvalues to see that the culprit is `-1.36e-13`, which is far, far too small to be considered significant. See this question: [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – Kraigolas Feb 08 '22 at 19:27
  • Thank you for your answers! The problem does indeed come from rounding error, but do you have a workaround? I would like to solve a QP problem with cvxpy, which complains that the matrix is not positive semi-definite. – Blackoli Feb 08 '22 at 19:36
  • I think we would need more details about what you are doing in order to help with that. What quadratic program are you trying to solve, and why are you forming XX^T instead of just working with X? Even zero eigenvalues in constraint matrices may cause problems, if I recall correctly. – Nick Alger Feb 08 '22 at 23:26
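For the cvxpy issue raised in the comments, one common workaround is to project the matrix onto the PSD cone before handing it to the solver: symmetrize it, then clip any negative eigenvalues to zero. A sketch, assuming the same `x` as in the question (`nearest_psd` is a hypothetical helper name, not a cvxpy or NumPy function):

```python
import numpy as np

def nearest_psd(a):
    """Project a (nearly) symmetric matrix onto the PSD cone:
    symmetrize, then clip negative eigenvalues to zero."""
    sym = (a + a.T) / 2.0
    w, v = np.linalg.eigh(sym)
    return v @ np.diag(np.clip(w, 0.0, None)) @ v.T

x = np.array([[-1, 2, 3],
              [4, 5, 6],
              [7, 8, 9],
              [50, 23, -30],
              [-23, 23, -40],
              [-233, 0, 234]], dtype=np.float64)

G = nearest_psd(x @ x.T)
# Re-composition can itself introduce rounding at the ~1e-12 level,
# but any remaining negative part is negligible relative to ||G||.
print(np.linalg.eigvalsh(G).min())
```

Alternatively, recent cvxpy versions provide `cp.psd_wrap` (if available in your version) to bypass the PSD check, and for a quadratic objective built from `X` it is often better conditioned to pass `X` directly (e.g. via `cp.sum_squares(X.T @ z)`) than to form `X @ X.T` explicitly.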

0 Answers