
I have read that the minimal float value that Python supports is something on the order of 1e-308. The exact number doesn't matter, because:

>>> -1.42108547152e-14 + 360.0 == 360.0
True

How is that possible? I have CPython 2.7.3 on Windows.

It causes errors for me. My problem would be fixed if I compared my value -1.42108547152e-14 (computed somehow) to some "delta" and did this:

if abs(v) < delta:
    v = 0

What delta should I choose? In other words, below what magnitude will this effect occur?

Please note that NumPy is not available.
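
For concreteness, here is one shape such a check could take in pure Python (a sketch: the helper name is_close and both tolerance values are illustrative assumptions, and math.isclose is not available before Python 3.5):

def is_close(a, b, rel_tol=1e-9, abs_tol=1e-12):
    # True when a and b differ by less than a relative tolerance
    # (scaled by the larger magnitude) or an absolute floor.
    # The tolerance values here are placeholders, not recommendations.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

v = -1.42108547152e-14
if is_close(v, 0.0):
    v = 0.0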

skaurus
  • Suggested background reading: [_What Every Computer Scientist Should Know About Floating-Point Arithmetic_](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – martin clayton Jan 29 '13 at 23:56
  • Take a look at [`sys.float_info`](http://docs.python.org/2/library/sys.html#sys.float_info); it won't tell you much, though, unless you understand floating point in more detail; see the article Martin is pointing you to. – Martijn Pieters Jan 29 '13 at 23:57
  • This question is about Java, but my answer there should be applicable: http://stackoverflow.com/a/6837237/5987 – Mark Ransom Jan 29 '13 at 23:58
  • Well, I'm aware of floating-point precision (though not an expert in any way), but that still took me by surprise. I expected that values so far from the minimal supported value wouldn't cause problems. Is sys.float_info.epsilon a safe bet, then? – skaurus Jan 30 '13 at 00:01
  • Specifically with Python, read http://docs.python.org/2/tutorial/floatingpoint.html, and look at [`decimal`](http://docs.python.org/2/library/decimal.html) – forivall Jan 30 '13 at 00:06
  • Thanks for pointing to decimal, it might be useful in another project. Right now I have to stick with deltas (or epsilons), though, and sys.float_info.epsilon is too small - around 1e-16. 1e-9 works in this particular case, and I'll go with 1e-5 to be safe (I hope). – skaurus Jan 30 '13 at 00:18
  • Guys, each of you provided me with useful info, but none of you wrote an answer :-) So I can't accept anything, but thanks nevertheless. – skaurus Jan 30 '13 at 00:20
  • The tiny epsilon is only for numbers very close to 0. It's pretty safe to think of floats as being limited to about 15 significant figures. – John La Rooy Jan 30 '13 at 00:35
  • NumPy's not available in your case, but when it is, [`np.spacing`](http://docs.scipy.org/doc/numpy/reference/c-api.coremath.html#npy_spacing) can also be useful. – Danica Jan 30 '13 at 00:45

2 Answers


An (over)simplified explanation is: A (normal) double-precision floating point number holds (what is equivalent to) approximately 16 decimal digits. Let's try to do your addition by hand:

 360.0000000000000000000000000
-  0.0000000000000142108547152
______________________________
 359.9999999999999857891452848

If you round this to 16 figures (3 before the point and 13 after), you get 360.
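
You can replay this 16-digit picture exactly with the standard decimal module (a sketch; the 16-digit context is chosen to match the approximation above, not a property of float):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 16                 # round results to 16 significant digits
>>> Decimal('360.0') - Decimal('0.0000000000000142108547152')
Decimal('360.0000000000000')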

Now, in reality this is done in binary, so the "16 decimal digits" is not a precise rule. The actual precision here (between 256.0 and 512.0) is 44 binary digits for the fractional part of the number. So the closest representable number below 360 is 360 minus 2^-44, which gives:

 359.9999999999999431565811391 (truncated)

But since our result before was closer to 360.0 than to this number, 360.0 is what you get.
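
You can check this spacing directly in the interpreter (a sketch; 2**-44 is just the spacing derived above, written in Python, not a library constant):

>>> 360.0 - 2**-44                        # nearest representable number below 360
359.99999999999994
>>> 360.0 - 2**-44 == 360.0
False
>>> -1.42108547152e-14 + 360.0 == 360.0   # closer to 360 than to that neighbor
True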

Jeppe Stig Nielsen

Most processors use IEEE 754 binary floating-point arithmetic. In this format, numbers are represented as a sign s, a fraction f, and an exponent e. The fraction is also called a significand.

The sign s is a bit 0 or 1 representing + or –, respectively.

In double precision, the significand f is a 53-bit binary numeral with a radix point after the first bit, such as 1.1010000100100110011100011011001100101010000000100011₂.

In double precision, the exponent e is an integer from –1022 to +1023.

The combined value represented by the sign, significand, and exponent is (-1)^s · 2^e · f.
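
If you want to see those three fields for an actual double, the standard struct module can expose the bit pattern (a sketch; this assumes CPython's float is the usual IEEE 754 binary64 format, which it is on Windows and virtually everywhere else):

>>> import struct
>>> bits = struct.unpack('>Q', struct.pack('>d', 360.0))[0]   # raw 64 bits of the double
>>> bits >> 63                                  # sign s
0
>>> ((bits >> 52) & 0x7ff) - 1023               # exponent e (stored value minus bias)
8
>>> 1 + (bits & ((1 << 52) - 1)) / 2.0**52      # significand f, implicit leading 1 restored
1.40625
>>> (-1)**0 * 2**8 * 1.40625                    # (-1)^s * 2^e * f
360.0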

When you add two numbers, the processor figures out what exponent to use for the result. Then, given that exponent, it figures out what fraction to use for the result. When you add a large number and a small number, the entire result will not fit into the significand. So, the processor must round the mathematical result to something that will fit into the significand.

In the cases you ask about, the second added number is so small that the rounding produces the same value as the first number. In order to change the first number, you must add a value that is at least half the value of the lowest bit in the significand of the first number. (When rounding, if the part that does not fit in the significand is more than half the lowest bit of the significand, the result is rounded up. If it is exactly half, the result is rounded so that the lowest bit of the significand is zero; this is known as round-half-to-even.)

There are additional issues in floating-point, such as subnormal numbers, infinities, how the exponent is stored, and so on, but the above explains the behavior you asked about.

In the specific case you ask about, adding to 360, adding any value greater than 2^-45 will produce a sum greater than 360. Adding any positive value less than or equal to 2^-45 will produce exactly 360. This is because the highest bit in 360 is 2^8, so the lowest bit in its significand is 2^-44.
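
That boundary is easy to verify in the interpreter (a sketch; 2**-45 and 2**-44 are the half-unit and full-unit values from the paragraph above):

>>> 360.0 + 2**-45 == 360.0   # exactly half the lowest significand bit: the tie rounds to even, staying at 360
True
>>> 360.0 + 2**-44 == 360.0   # one full unit in the last place: the sum moves off 360
False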

Eric Postpischil