
I've been facing an interesting Python problem. I tried to invert the 3x3 matrix A

[[1 2 3]
 [4 5 6]
 [7 8 9]]

and then multiply it by the original one: A⁻¹A. Instead of the identity matrix (with all diagonal elements equal to one), I got this:

[[ 12.   8.   8.]
 [-16.  -8.   0.]
 [  4.   0.   0.]]

The problem occurs only in this specific case. Matrices with other values give correct results. Here is the code:

import numpy as np
np.set_printoptions(precision=2,suppress=True)

A = np.array([1,2,3,4,5,6,7,8,9])
A = A.reshape(3,3)

print(A)
print(np.linalg.det(A))
print(np.matmul(np.linalg.inv(A),A))

Output:

[[1 2 3]
 [4 5 6]
 [7 8 9]]

6.66133814775094e-16

[[ 12.   8.   8.]
 [-16.  -8.   0.]
 [  4.   0.   0.]] 

• The matrix has determinant 0 and is therefore not invertible. Python 3.6 is telling me that much when trying to invert: `numpy.linalg.LinAlgError: Singular matrix` – snwflk Aug 31 '20 at 22:21
• This is a singular matrix. It doesn't have an inverse. – Diego Aug 31 '20 at 22:24

3 Answers


As others have pointed out, a singular matrix is non-invertible, so you get a nonsense answer from A^-1 A.
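
One quick way to confirm that before trying to invert is to check the rank (a small illustrative sketch; an n×n matrix is invertible only when its rank is n):

import numpy as np

A = np.arange(1, 10).reshape(3, 3)   # [[1 2 3] [4 5 6] [7 8 9]]

# A 3x3 matrix is invertible only if its rank is 3.
print(np.linalg.matrix_rank(A))      # 2 -> rank-deficient, i.e. singular

# Depending on the NumPy/LAPACK build, inv() may raise instead of
# quietly returning garbage:
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print(err)                       # "Singular matrix"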

NumPy includes a handy function to check the condition number:

np.linalg.cond(A)
# 5.0522794445385096e+16

As Wikipedia states, the condition number measures how sensitive the solution x of Ax = b is to small changes in the entries of A (somewhat like a generalized derivative). The large value indicates that A is "ill-conditioned" and can produce unstable results. This is intrinsic to the real-valued matrix, but floating-point arithmetic can make it worse.

cond is more useful than np.linalg.det(A) for judging whether your matrix will be well-behaved, because it is not sensitive to the scale of the values in A (whereas the norm and determinant are). As an example, here is a matrix with small values that actually has no issue with invertibility:

A = 1e-10*np.random.random(size=(3,3))

np.linalg.det(A)
# 2.128774239739163e-31
# ^^ this looks really bad...

np.linalg.cond(A)
# 8.798791503909136
# nevermind, it's probably ok

A_ident = np.matmul(np.linalg.inv(A), A)
np.linalg.norm(A_ident - np.identity(3))
# 5.392490230798587e-16
# A^(-1)*A is very close to the identity matrix; A is not ill-conditioned.
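
If you want an explicit yes/no test, a rough heuristic is to compare the reciprocal condition number against machine epsilon (illustrative sketch only; the helper name and the factor of 10 are arbitrary choices):

import numpy as np

def is_numerically_singular(M, factor=10.0):
    # Heuristic: treat M as singular when 1/cond(M) is within a small
    # factor of double-precision machine epsilon.
    rcond = 1.0 / np.linalg.cond(M)
    return rcond < factor * np.finfo(float).eps

A = np.arange(1, 10).reshape(3, 3)
print(is_numerically_singular(A))                         # True
print(is_numerically_singular(np.random.random((3, 3))))  # False (almost surely)
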
– anon01
  • I knew about singular matrices and the tricky behavior of floating-point numbers, but wasn't sure that was the case here. As far as I understand, to determine whether a matrix is invertible or not using the `np.linalg.cond` function, I should check that the returned value is significantly different from zero. Correct me if I misunderstood. – iamgm Sep 01 '20 at 06:34
  • that is correct: it is effectively a measure of sensitivity to change. – anon01 Sep 01 '20 at 06:36

Your matrix is not invertible; see e.g. Wolfram Alpha, which says that the matrix is singular.

You may have been misled by the fact that Python printed a nonzero value for the determinant (6.66133814775094e-16); however, this value is so close to 0 that you should treat it as zero. The operations that computers perform on floating-point numbers are usually not perfectly accurate (see e.g. the question Why are floating point numbers inaccurate?), which is why the computed determinant came out close to zero rather than exactly zero.
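
To convince yourself that the exact determinant really is 0, and that the 6.66e-16 is pure floating-point noise, you can redo the computation in exact integer arithmetic, e.g. with a hand-written cofactor expansion (illustrative sketch; the helper name is made up):

# Exact 3x3 determinant via cofactor expansion with Python integers,
# so no rounding error is possible.
def det3_exact(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

print(det3_exact([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]]))   # 0, exactly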

– lukeg

The determinant of this matrix is 0, since

import numpy as np
np.set_printoptions(precision=2,suppress=True)

A = np.array([1,2,3,4,5,6,7,8,9])
A = A.reshape(3,3)
# print determinant
print(np.linalg.det(A))

returns

0.0

(in the question the same computation printed the tiny nonzero value 6.66e-16, which should likewise be treated as zero),

you have a matrix that has no computable inverse.
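
The underlying reason is that the rows are linearly dependent: the middle row is the average of the other two, which is easy to verify (a quick illustrative check):

import numpy as np

A = np.arange(1, 10).reshape(3, 3)

# Row 2 is the average of rows 1 and 3, so the rows are linearly
# dependent and the determinant must be exactly 0.
print(np.array_equal(2 * A[1], A[0] + A[2]))   # True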

– neuops