
In Octave I obtain

1 - 0.05 - 0.95 = 0

and

1 - 0.95 - 0.05 = 4.1633e-17

I understand that it is caused by the order of evaluation combined with the approximate binary representation of 0.05 as 0.00(0011) and 0.95 as 0.11(1100), where the bits in parentheses repeat. Could someone please give me the whole story, or show me a link explaining it?
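
Making the left-to-right order of evaluation explicit with parentheses reproduces the same results (a quick sketch; the exact digits displayed depend on the output format):

format long
(1 - 0.05) - 0.95    % ans = 0
(1 - 0.95) - 0.05    % ans is about 4.1633e-17, not exactly 0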

---EDIT: This question is not a duplicate of Why is 24.0000 not equal to 24.0000 in MATLAB?, which others identified as a possible duplicate. That question deals with the rounded display of a number; this one asks for the details of the mechanism by which the order of execution of a calculation affects the precision of the result.

gciriani
  • Possible duplicate of [Why is 24.0000 not equal to 24.0000 in MATLAB?](https://stackoverflow.com/questions/686439/why-is-24-0000-not-equal-to-24-0000-in-matlab) – Cris Luengo Jan 18 '19 at 21:28
  • @Cris-Luengo, it is not. – gciriani Jan 19 '19 at 02:00
  • In that case, maybe you can expand on your question, it's not clear to me what you're asking if it's not described in the answer I linked. – Cris Luengo Jan 19 '19 at 05:14

2 Answers


Matzeri's link to the definitive resource on floating-point arithmetic is indeed the definitive answer to this question. However, for completeness:

octave:34> fprintf("%.80f\n%.80f\n", 0.95, 1 - 0.05)
0.94999999999999995559107901499373838305473327636718750000000000000000000000000000
0.94999999999999995559107901499373838305473327636718750000000000000000000000000000

octave:35> fprintf("%.80f\n%.80f\n", 0.05, 1 - 0.95)
0.05000000000000000277555756156289135105907917022705078125000000000000000000000000
0.05000000000000004440892098500626161694526672363281250000000000000000000000000000

In other words, 0.95 is harder to represent precisely in floating point, so any calculation in the first step that involves 0.95 (either as an input or as an output) is necessarily less precise than one that only uses 0.05.

Therefore:

1 - 0.05 = 0.95 (imprecise, due to intrinsic floating-point representation)
(1 - 0.05) - 0.95 = exactly 0 (since both are represented identically imprecisely)

vs

1 - 0.95 = imprecise 0.05 (due to involvement of 0.95 in calculation)
(imprecise 0.05) - (precise 0.05) = not exactly 0 (due to difference in precisions)
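
A quick way to see this is to compare the underlying bit patterns with num2hex (a sketch; the hex strings in the comments are what I would expect from IEEE-754 double precision):

num2hex(1 - 0.05)   % 3fee666666666666 -- identical to the bit pattern of 0.95
num2hex(0.95)       % 3fee666666666666
num2hex(1 - 0.95)   % 3fa99999999999a0 -- a few ULPs above the bit pattern of 0.05
num2hex(0.05)       % 3fa999999999999a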

HOWEVER. It should be pointed out that this difference in precision is well below the machine tolerance (as returned by eps: 2.2204e-16 on my machine). Therefore, for all practical purposes, 4.1633e-17 is 0. If the point is to test whether the result of a calculation is effectively 0, then you should always take machine precision into account when dealing with floating-point calculations, or preferably find a way to reformulate your problem so that it avoids the need for equality testing altogether.
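
For example, a tolerance-based check along these lines (a sketch; the threshold of a few times eps is an illustrative choice, not a universal constant) avoids relying on exact equality:

x = 1 - 0.95 - 0.05;
tol = 4 * eps;       % eps is about 2.2204e-16 for doubles near 1
if abs(x) < tol
  disp("x is zero to within machine precision")
end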

Tasos Papastylianou
  • I bumped and accepted your answer; you make a good argument. Matzeri's is hand-waving: "it must be here, it's complicated". He also didn't hint at why the order of execution would cause the disparity. I have a counterexample using your line of reasoning, so there might be something else. Your argument would conclude that in 1-.35-.65 the number .35 is more precisely represented than .65. num2hex(.35)= 3fd6666666666666, and num2hex(.65)= 3fe4cccccccccccd. The same periodic pattern, and in this case the two calculations give the same result, as 1-.35-.65==1-.65-.35 is true. – gciriani Jan 22 '19 at 21:02
  • It's not really a counterexample. 0.05 is not more precise because it's "smaller". It just happens to require fewer least-significant digits when represented as floating point. E.g. 0.25 is 'larger' than 0.05, but can be represented exactly as a power of two. Whereas 0.35 and 0.65 are equally imprecise. Having said that, you should still not rely on equality testing, even when you know you are using numbers that are 'equally imprecise'. It's simply not reliable. – Tasos Papastylianou Jan 23 '19 at 00:33
  • Tasos, I'm not sure what "require fewer least-significant digits" means. Both 0.35 and 0.05 have a periodic representation that in theory requires an infinite number of digits, as shown in their hexadecimal representation. It could be instead that in one case the rounding is done for both numbers in the same direction, and in the other case the rounding is done in opposite directions. I fully agree with you that equality testing should not be done between real numbers. – gciriani Jan 23 '19 at 15:57
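
A sketch of the 0.35/0.65 case raised in the comments above (the hex strings are the ones gciriani quotes):

num2hex(0.35)                            % 3fd6666666666666
num2hex(0.65)                            % 3fe4cccccccccccd
(1 - 0.35) - 0.65 == (1 - 0.65) - 0.35   % true: both orderings agree in this case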

The full explanation

What Every Computer Scientist Should Know About Floating-Point Arithmetic

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

matzeri