
I am getting a very strange value for the (1,1) entry of my BinvA matrix.
I am just trying to invert the B matrix and do the (B^-1)A multiplication.

I understand that when I do the calculation by hand the (1,1) entry is supposed to be 0, but instead I get 1.11022302e-16. How can I fix it? I know floating point numbers can't be represented to full accuracy, but why is this giving me such an inaccurate result instead of rounding to 0, and is there any way I can make it more accurate?

Here is my code:

import numpy as np

A = np.array([[2,2],[4,-1]],np.int)
A = A.transpose()


B = np.array([[1,3],[-1,-1]],np.int)
B = B.transpose()

Binv = np.linalg.inv(B) #calculate the inverse

BinvA = np.dot(Binv,A) 
print(BinvA)

My print statement outputs:

[[  1.11022302e-16  -2.50000000e+00]
 [ -2.00000000e+00  -6.50000000e+00]]
  • What is wrong with this precision? Are you trying to solve a linear equation? Do you want to round all entries? – Moj Jul 02 '14 at 17:57
  • @Moj Well, the precision is the problem. I am concerned that if a simple matrix multiplication can't be rounded to reasonable accuracy, then more complicated calculations will be very inaccurate. I am working on code to calculate transition and projection matrices; this was just to test how well I was multiplying them. – FireFistAce Jul 02 '14 at 18:04

2 Answers


When you compute the inverse, your arrays are converted to float64, whose machine epsilon is on the order of 1e-16 (numpy reports a decimal resolution of 1e-15). The epsilon is the relative quantization step of a floating-point number.

When in doubt, we can ask numpy for information about a floating-point data type using the finfo function. In this case:

np.finfo('float64')
finfo(resolution=1e-15, 
      min=-1.7976931348623157e+308, max=1.7976931348623157e+308, 
      dtype=float64)

So, technically, your value being smaller than eps is a very accurate representation of 0 for a float64 type.
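For reference, here is a quick check of the numbers involved (nothing specific to your matrices, just the standard `np.finfo` attributes); the stray (1,1) value is essentially half of one eps:

import numpy as np

eps = np.finfo(np.float64).eps          # exact binary step: 2.220446049250313e-16
res = np.finfo(np.float64).resolution   # decimal resolution reported above: 1e-15

print(eps, res)
print(1.11022302e-16 < eps)             # True: the stray (1,1) entry is below one eps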

If it is only the representation that bothers you, you can tell numpy not to print small floating point numbers (within 1 eps of 0) with:

np.set_printoptions(suppress=True)

After that your print statement returns:

[[ 0.  -2.5]
 [-2.  -6.5]]
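Note that `suppress=True` only changes how the array is displayed; the stored values are untouched. If you want to check the result against your hand calculation programmatically, a tolerance-based comparison with `np.allclose` (sketched here on the same matrices) is the usual approach:

import numpy as np

A = np.array([[2, 2], [4, -1]]).T
B = np.array([[1, 3], [-1, -1]]).T
BinvA = np.dot(np.linalg.inv(B), A)

# True: every entry is within the default tolerances of the hand-computed result
print(np.allclose(BinvA, [[0.0, -2.5], [-2.0, -6.5]]))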

Note that this is a general numerical problem, common to all floating-point implementations. You can find more information about floating-point rounding errors in many other questions on SO and elsewhere on the net.
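To see that eps really is a relative step (it scales with the magnitude of the number), here is a small sketch using `np.spacing`, which reports the distance from a value to the next representable float64:

import numpy as np

print(np.spacing(1.0))       # ~2.2e-16, i.e. one eps
print(np.spacing(1e-200))    # on the order of 1e-216: tiny numbers have tiny spacing
print(np.spacing(1e+200))    # on the order of 1e+184: huge numbers have huge spacing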

  • But what about when I want to do more computational-physics-type calculations, for example calculating the energy of a particle like an electron, which can be around 1.6e-19 J, or in other cases even lower, like 6.626e-34? How do I represent these? – FireFistAce Jul 02 '14 at 18:12
  • As you can see from my answer, epsilon is not the minimum floating point number. The minimum representable value in float64 is `min=-1.7976931348623157e+308` (see the `finfo` output), but the relative accuracy will be 1e-15. – user2304916 Jul 02 '14 at 18:15
  • This part is a little confusing for me, so let me see if I have it right: 1e-15 means the number of decimals, so it is accurate to about 15 decimals. But what if my calculation requires, say, 5 significant figures but involves multiplying really small numbers, like dot(1.6e-19, 6.626e-34)? Are you saying this would still give me an accurate result up to 15 significant figures? – FireFistAce Jul 02 '14 at 18:23
  • 1e-15 is the minimum representable relative difference between two float64 numbers. You can't represent a real number exactly; you only get an approximation, and the error is called rounding error. If you perform a sequence of operations the rounding error can easily amplify, so your accuracy will be worse than 1e-15. There is a large field called [numerical analysis](http://en.wikipedia.org/wiki/Numerical_analysis) that studies the propagation of rounding errors and how to limit it. – user2304916 Jul 02 '14 at 18:30
  • Note **relative** difference. If you subtract two numbers around 1e-200, the minimum representable difference will be about 1e-215. Think of a binary form of scientific notation with 53 significant bits. – Patricia Shanahan Jul 02 '14 at 20:38
  • @Patricia Shanahan Thanks, that clarified it a bit, because I was assuming that in Python and numpy scientific notation was equivalent to significant figures. So just to clarify: using scientific notation I can do calculations down to about 1e-308, after which I get underflow, and they will only be accurate up to 15 significant figures? – FireFistAce Jul 03 '14 at 10:04
  • @FireFistAce Yes, that is about it. As a detail, IEEE-754 binary floating point supports gradual underflow, which allows, at reduced precision, even smaller numbers. – Patricia Shanahan Jul 03 '14 at 12:34

This isn't a complete answer, but it may point you in the right direction. What you really want are numpy arrays that use Decimals for math. You might reasonably think to try:

import numpy as np
from decimal import Decimal

A = np.array([[2, 2], [4, -1]], np.int)
for i, a in np.ndenumerate(A):
    A[i] = Decimal(a)
    print(type(A[i]))

But alas, Decimal is not among the datatypes supported out of the box in numpy, so each time you try to jam a Decimal into the array, it gets cast back to the array's native numeric type.

One possibility would be to set the datatype, thus:

def decimal_array(arr):
    X = np.array(arr, dtype=Decimal)
    for i, x in np.ndenumerate(X):
        X[i] = Decimal(x)
    return X

A = decimal_array([[2, 2], [4, -1]])
B = decimal_array([[1, 3], [-1, -1]])

A = A.transpose()
B = B.transpose()
Binv = np.linalg.inv(B) #calculate the inverse

But now, if you

print(Binv.dtype)

you'll see that the inversion has recast it back to float. The reason is that linalg.inv (like many other functions) looks for B's "common_type," which is the scalar type to which it believes it can force your array elements.

It may not be hopeless, though. I looked to see if you could solve this by creating a custom dtype, but it turns out that scalars (ints, floats, etc) are not dtypes at all. Instead, what you probably want to do is register a new scalar--that's the Decimal--as it says in the article on scalars. You'll see a link out to the Numpy C-API (don't be afraid). Search the page for "register" and "scalar" to get started.
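In the meantime, if the goal is just to see the exact arithmetic, one workaround (a minimal sketch that bypasses numpy's linalg entirely) is to invert the 2x2 B with the adjugate formula in plain Python Decimal arithmetic; this reproduces the hand calculation, including an exact zero in the (1,1) entry:

from decimal import Decimal

# A and B as in the question, after the transposes
A = [[Decimal(2), Decimal(4)],
     [Decimal(2), Decimal(-1)]]
B = [[Decimal(1), Decimal(-1)],
     [Decimal(3), Decimal(-1)]]

# 2x2 inverse via the adjugate: inv([[a, b], [c, d]]) = (1/det) * [[d, -b], [-c, a]]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[ B[1][1] / det, -B[0][1] / det],
        [-B[1][0] / det,  B[0][0] / det]]

# Plain matrix product Binv * A
BinvA = [[sum(Binv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(BinvA)  # [[Decimal('0.0'), Decimal('-2.5')], [Decimal('-2.0'), Decimal('-6.5')]]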

  • I am curious about why you are recommending decimal for this. It looks from the question and comments as though the context is scientific calculations, for which there is nothing special about base 10. Do the calculations in question really require more than 53 significant bits? – Patricia Shanahan Jul 03 '14 at 12:36
  • Python's [decimal](https://docs.python.org/2/library/decimal.html) package has the nice property that it "works in the same way as the arithmetic that people learn at school," including being robust to eps weirdness with float arithmetic. But admittedly, the accepted answer may well be a better solution to the OP's question, because it's so quick. – rcs Jul 04 '14 at 19:07
  • How does decimal prevent "eps weirdness" when doing calculations such as matrix inversion that require general division? – Patricia Shanahan Jul 04 '14 at 19:16