Although this topic isn't new, there is one thing I cannot figure out.

When I use Python 2.7.8 (x86) IDLE on x64 Windows 7 (Intel Core i5 760), I find that my machine gives odd results when multiplying particular float/double numbers, but not all of them!

>>> print 0.01 == 0.1 * 0.1
False
>>> print (1/10.0) ** 2 == 0.01
False
>>> print 0.01 - 0.1 ** 2
-1.73472347598e-18

BUT:
>>> print 1/10.0 * 1/10.0 == 0.01
True
>>> print 0.015 == 0.05 * 0.3
True
>>> print 0.0002 == 0.01 * 0.02
True
>>> print 0.0001 == 0.01 * 0.01
True
... (and so on. Always true.)

(!) I am not talking about operations that are difficult (for a PC and... for me) like exponentiation, log, or exp. I'm talking about multiplication, division, addition, and subtraction.

I read this and this to dig deeper. They say that comparisons such as 1.1 == 1.1 can actually evaluate to False. In my case, they don't. Thus, my computer rounds these float representations to the same bit representations.
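For example, the decimal module (whose constructor accepts floats as of Python 2.7) can show the exact value the machine actually stores for 0.1:

>>> from decimal import Decimal
>>> Decimal(0.1)   # exact value of the double nearest to 0.1
Decimal('0.1000000000000000055511151231257827021181583404541015625')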

Using Java 7 with the Eclipse compiler gives me the same results. Thus, I conclude that the result in my case depends only on my machine epsilon (register precision), which is no more than 1.734723475976807e-18, and not on the compiler or programming environment.
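For what it's worth, sys.float_info (Python 2.6+) reports the machine epsilon directly, and the difference observed above equals one unit in the last place (ulp) for doubles near 0.01 rather than the machine epsilon itself:

>>> import sys
>>> print sys.float_info.epsilon   # ulp of 1.0 for 64-bit doubles, i.e. 2**-52
2.22044604925e-16
>>> print 2 ** -59                 # ulp near 0.01; matches the difference above
1.73472347598e-18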

The question is: why does my machine fail to compare 0.01 == 0.1 * 0.1 while causing no discrepancy with the other multiplication examples (I tried hard to confuse my PC!)?

– dizcza
  • Because IEEE 754 does not guarantee ultimate precision for an arbitrary number of significant digits. Depending on how the result was calculated, it might be "incorrect" by a different fraction. – zerkms Sep 21 '14 at 23:00
  • `0.1 * 0.1` doesn't happen to round the way you want. The other computations you tried happened to round the way you want. There's nothing deeper to it than that. – user2357112 Sep 21 '14 at 23:07
  • @user2357112, why then does 0.0002 == 0.01 * 0.02 evaluate to True? Isn't that rounded? – dizcza Sep 21 '14 at 23:09
  • Because that one just happens to round the way you wanted it to. When you tried to produce other examples like `0.01 != 0.1 * 0.1`, you didn't try hard enough: `0.4 * 0.9 != 0.36`, `0.7 * 0.8 != 0.56`, `0.2 * 0.4 != 0.08` (verified below). Since the computer is really doing all the math in binary rather than decimal, whether a calculation happens to round the way you want is something you shouldn't rely on. – user2357112 Sep 21 '14 at 23:13
  • @Ignacio Vazquez-Abrams, as you may have noticed, 0.1 + 0.2 != 0.3 does not always hold on a PC. And I think that if a programmer hasn't run into issues like 0.1 + 0.2 != 0.3 or 0.1 * 0.1 != 0.01 while writing code, he won't care much about his floating-point results until he detects a problem. I think in most cases people are used to checking 0.1 + 0.2 == 0.3 instead of abs(0.1 + 0.2 - 0.3) < epsilon. – dizcza Sep 21 '14 at 23:16
  • @user2357112, I see. Okay, so it turns out that I should always use an epsilon comparison to validate floating-point results. I hadn't thought about that until now!.. I'm closing the topic as answered. – dizcza Sep 21 '14 at 23:27
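(The counterexamples from user2357112's comment are easy to confirm; each comparison below should print False on any machine with IEEE 754 doubles:)

>>> print 0.4 * 0.9 == 0.36
False
>>> print 0.7 * 0.8 == 0.56
False
>>> print 0.2 * 0.4 == 0.08
False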

1 Answer


Look here for the answer to your question. The problem is that all floating-point math behaves this way: it follows the IEEE 754 standard. Python's float is a 64-bit IEEE 754 double, the same representation as Java's double (and JavaScript's numbers).
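To see what happens at the bit level, float.hex() (available since Python 2.6) shows that the product 0.1 * 0.1 rounds to the double one unit in the last place (ulp) above the double nearest to 0.01, which is exactly the 2**-59 difference observed in the question:

>>> (0.01).hex()              # nearest double to 0.01
'0x1.47ae147ae147bp-7'
>>> (0.1 * 0.1).hex()         # the product rounds one ulp higher
'0x1.47ae147ae147cp-7'
>>> print 0.1 * 0.1 - 0.01    # one ulp at this magnitude is 2**-59
1.73472347598e-18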

Instead, if you are using this in an actual program, compare against an epsilon, that is, a very small number:

>>> import math
>>> epsilon = 0.1 ** 5
>>> math.fabs((0.1 * 0.1) - 0.01) < epsilon
True
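For comparisons that should scale with the size of the operands, a relative tolerance is more robust than a fixed epsilon. Here is a minimal sketch using the same formula that Python 3.5 later standardized as math.isclose (the name is_close below is purely illustrative):

>>> def is_close(a, b, rel_tol=1e-9, abs_tol=0.0):
...     # within rel_tol of the larger operand, or within abs_tol
...     # absolutely (the absolute term matters for values near zero)
...     return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
...
>>> is_close(0.1 * 0.1, 0.01)
True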
– A.J. Uppal