
Please pardon me if this is a dumb question. I was just exploring NumPy in the Python shell and I ran these lines:

>>> import numpy as np
>>> from numpy.linalg import inv
>>> a = np.matrix('1,2;3,4')
>>> print a
[[1 2]
 [3 4]]
>>> print inv(a)
[[-2.   1. ]
 [ 1.5 -0.5]]
>>> print a.dot(inv(a))
[[1.0000000e+00 0.0000000e+00]
 [8.8817842e-16 1.0000000e+00]]

In the last line I was expecting the identity matrix

[[1, 0]
 [0, 1]]

but instead I got a small non-zero value at the (1, 0) position.

cs95
g_p
  • You can't expect exact results when working with floating point numbers. Maybe some kind of norm `||a - I||²` will work. The first answer to the question posted by @GarbageCollector explains it pretty well. – Jose A. García Jan 26 '18 at 09:10
  • Also, you don't need to use `print` in python console. – Eric Duminil Jan 26 '18 at 09:11
  • All that being said, `numpy` *could* have been simplifying the expression and just returning an identity matrix of same rank. It would also speed things up for large matrices. – Ma0 Jan 26 '18 at 09:13
  • @Ev.Kounis: For this, you'd need to work with `sympy`, not `numpy`. – Eric Duminil Jan 26 '18 at 10:47

1 Answer


This is a result of floating-point arithmetic and all the shortcomings associated with it. See *Is floating point math broken?* for a detailed explanation of why this happens.
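The same effect shows up with plain scalars, entirely outside NumPy; here's a minimal standard-library sketch of it:

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their sum
# picks up a tiny rounding error -- the same effect seen in the
# matrix product, just on a single number.
print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # False

# math.isclose is the scalar analogue of np.allclose: it compares
# within a tolerance instead of demanding exact equality.
print(math.isclose(0.1 + 0.2, 0.3))  # True
```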

Your result is not exact, but it is close enough, and that is the key. `inv`, of course, does not compute the exact inverse (it uses numerical methods, after all). Correspondingly, `a * inv(a)` isn't going to be exactly the identity matrix, but it will be close. You can test closeness with `np.allclose`:

>>> np.allclose(a * inv(a), np.eye(2))
True

(Note that `a * b` computes the matrix product for `np.matrix` objects.)
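As a side note, `np.matrix` is discouraged in modern NumPy; the same check works with a plain `ndarray` and the `@` matrix-multiplication operator. A minimal sketch (assuming NumPy ≥ 1.10 for `@` support):

```python
import numpy as np
from numpy.linalg import inv

a = np.array([[1., 2.],
              [3., 4.]])

prod = a @ inv(a)  # matrix product; tiny floating-point roundoff remains
print(prod)

# Equal to the identity within default tolerances, even though
# prod == np.eye(2) element-wise would not be all True.
print(np.allclose(prod, np.eye(2)))  # True
```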

cs95