A manual that I am currently studying (I am a newbie) says:

"Numbers which differ by less than machine epsilon are numerically the same"

With Python, machine epsilon for float values can be obtained by typing

eps = numpy.finfo(float).eps

Now, if I check

1 + eps/10 != 1

I obtain False.

But if I check

0.1 + eps/10 != 0.1

I obtain True.

My latter logical expression turns out to be False if I divide eps by 100 instead. So, how does machine epsilon work? The Python documentation just says

"The smallest representable positive number such that 1.0 + eps != 1.0. Type of eps is an appropriate floating point type."

Thank you in advance.

ali_m
Charlie
  • The documentation is clear: the epsilon is relative to 1.0: "1.0 + eps != 1.0". Otherwise, you could not calculate _at all_ with numbers smaller than eps (2.2e-16 in my case). If the "reference" number is smaller, e.g. 0.1, then eps is smaller, too. – tobias_k Jan 05 '16 at 12:40
  • There is a fixed maximal number of digits in a floating-point value. To represent e.g. 0.100001, one fewer significant digit is needed than to represent e.g. 1.000001. – kfx Jan 05 '16 at 12:43
  • Do you have a reference for that Python doc quote? Those docs need fixing: that's not the correct definition. (For example, `1.2e-16` satisfies the condition of the definition, but epsilon is larger than that for the usual IEEE 754 binary64 floating-point format.) – Mark Dickinson Jan 05 '16 at 13:17
  • The reference: http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.finfo.html – Charlie Jan 05 '16 at 13:20
  • Ah, thanks. So that's the NumPy docs, not the Python docs. :-) (It should still be fixed, of course.) – Mark Dickinson Jan 05 '16 at 13:25
  • Opened https://github.com/numpy/numpy/issues/6940 to track the NumPy doc bug. – Mark Dickinson Jan 05 '16 at 14:09
  • Not to plug one of my own answers too much, but you might also have a look at: http://stackoverflow.com/questions/32465481/what-exactly-is-the-resolution-parameter-of-numpy-float/ Your question isn't quite a duplicate, but it's closely related. – Joe Kington Jan 05 '16 at 21:12

2 Answers

In this case, you actually don't want np.finfo. What you want is np.spacing, which calculates the distance between the input and the next largest number that can be exactly represented.

Essentially, np.spacing calculates "eps" for any given number. It uses the number's datatype (native Python floats are 64-bit floats), so np.float32 or np.float16 inputs will give a different answer than a 64-bit float.

For example:

import numpy as np

print('Float64, 1.0 -->', np.spacing(1.0))
print('Float64, 1e12 -->', np.spacing(1e12))
print('Float64, 1e-12 -->', np.spacing(1e-12))
print('')
print('Float32, 1.0 -->', np.spacing(np.float32(1.0)))
print('Float32, 1e12 -->', np.spacing(np.float32(1e12)))
print('Float32, 1e-12 -->', np.spacing(np.float32(1e-12)))

Which yields:

Float64, 1.0 --> 2.220446049250313e-16
Float64, 1e12 --> 0.0001220703125
Float64, 1e-12 --> 2.0194839173657902e-28

Float32, 1.0 --> 1.1920929e-07
Float32, 1e12 --> 65536.0
Float32, 1e-12 --> 1.0842022e-19
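
np.spacing also explains the comparisons in the question (a minimal sketch: an added term only survives if it is at least roughly half the spacing at that value):

import numpy as np

eps = np.finfo(float).eps
gap = np.spacing(0.1)            # the spacing at 0.1 is much smaller than eps

# An increment is visible only if it is at least about half the local spacing
print(0.1 + eps/10 != 0.1)       # True:  eps/10 is larger than gap/2
print(0.1 + eps/100 != 0.1)      # False: eps/100 is smaller than gap/2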
Joe Kington
Floating-point numbers have a fixed precision: a fixed number of significant digits, as in scientific notation. The larger the number, the larger its least significant digit, and thus the larger the "epsilon" that can still contribute to it.

Thus, the epsilon is relative to the number it is added to, which is in fact stated in the documentation you cited: "... such that 1.0 + eps != 1.0". If the "reference" number is smaller by, say, one order of magnitude, then eps is smaller, too.

If that were not the case, you could not calculate at all with numbers smaller than eps (2.2e-16 in my case).
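
A quick way to see this relativity, using np.spacing (which gives the gap between a float and the next representable one):

import numpy as np

eps = np.finfo(float).eps

# eps is exactly the spacing at 1.0 ...
print(np.spacing(1.0) == eps)    # True
# ... while the spacing at 0.1 is about 16 times smaller
print(np.spacing(0.1) < eps)     # True

print(1.0 + eps/10 != 1.0)       # False: eps/10 vanishes next to 1.0
print(0.1 + eps/10 != 0.1)       # True:  but not next to 0.1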

tobias_k