
The following expression returns false (e.g. in Java and C#):

0.1 + 0.1 + 0.1 == 0.3

So we learned to always compare doubles and floats like this:

Math.abs(double1 - double2) < epsilon

But why does

0.1 + 0.1 == 0.2 return true and
0.1 + 0.1 + 0.1 == 0.3 return false?

I know that it has something to do with the mantissa, but I don't understand it exactly.
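For reference, a minimal Java snippet (the class name is just for illustration, not from the original post) that reproduces both results on a typical JVM:

```java
public class CompareDemo {
    public static void main(String[] args) {
        // Direct == comparison of double expressions.
        System.out.println(0.1 + 0.1 == 0.2);       // prints true
        System.out.println(0.1 + 0.1 + 0.1 == 0.3); // prints false
    }
}
```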

Rahul Singh
MjeOsX
  • [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – Steve Dec 29 '14 at 10:51
  • It means the imprecision in those particular values happens to match up correctly, more or less by luck. – Louis Wasserman Dec 29 '14 at 10:57
  • The `(double) 0.1` is slightly higher than 0.1, and `(double) 0.2` is also slightly higher, being 2 * 0.1; however, `(double) 0.3` is slightly lower, so when you do 3 * 0.1 you get a number which is higher than `(double) 0.3`. – Peter Lawrey Dec 29 '14 at 12:28

2 Answers


Floats and doubles are stored as binary fractions, not decimal fractions.

There are some numbers that cannot be represented fully with our decimal notation. For example, 1/3 in decimal notation is 0.3333333... The same thing happens in binary notation, except that the numbers that cannot be represented precisely are different. Among them is the number 1/10. In binary notation that is 0.000110011001100...

Since the binary notation cannot store it precisely, it is stored in a rounded-off way. Hence your problem.
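One way to see that rounded value in Java (a small illustration of my own, not from the original answer) is the `BigDecimal(double)` constructor, which prints the exact binary value that the literal `0.1` actually stores:

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // BigDecimal(double) preserves the exact value of the double,
        // revealing the rounding that happened when 0.1 was parsed.
        System.out.println(new BigDecimal(0.1));
        // Prints 0.1000000000000000055511151231257827021181583404541015625
    }
}
```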

You should not compare doubles with == the way you do in 0.1 + 0.1 + 0.1 == 0.3, because you never know exactly how they are stored in memory, and you cannot predict the result of such a comparison.
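A minimal sketch of the tolerance-based comparison meant here (the method name and the choice of epsilon are only illustrative; pick a tolerance that suits your application):

```java
public class NearlyEqual {
    // Illustrative helper: treats two doubles as equal
    // when they differ by less than the given epsilon.
    static boolean nearlyEqual(double a, double b, double epsilon) {
        return Math.abs(a - b) < epsilon;
    }

    public static void main(String[] args) {
        System.out.println(0.1 + 0.1 + 0.1 == 0.3);                  // false
        System.out.println(nearlyEqual(0.1 + 0.1 + 0.1, 0.3, 1e-9)); // true
    }
}
```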

msporek
  • This is a good explanation of floating point accuracy in general but doesn't actually answer the question. – Ant P Dec 29 '14 at 10:51

@msporek's explanation is right. Here it is in detail, at the bit level: why the comparison turns out true in one case and false in the other.

First, let's do 0.1 + 0.1 manually using the IEEE 754 floating point model:

    Dec    IEEE 754           52-bit mantissa
             ----------------------------------------------------
    0.1 =  1.1001100110011001100110011001100110011001100110011010 * 2^-4
    0.1 =  1.1001100110011001100110011001100110011001100110011010 * 2^-4
 +  -------------------------------------------------------------------
    0.2 = 11.0011001100110011001100110011001100110011001100110100 * 2^-4
        =  1.1001100110011001100110011001100110011001100110011010 * 2^-3

This is a perfect match, which means that converting 0.2 to IEEE 754 and summing 0.1 + 0.1 in IEEE 754 give bitwise-equal results. Now let's look at 0.2 + 0.1:

    Dec    IEEE 754            52-bit mantissa
             ----------------------------------------------------
    0.2 =  1.1001100110011001100110011001100110011001100110011010 * 2^-3
    0.1 =  1.1001100110011001100110011001100110011001100110011010 * 2^-4
 +  -------------------------------------------------------------------
    0.2 =  1.1001100110011001100110011001100110011001100110011010 * 2^-3
    0.1 =  0.1100110011001100110011001100110011001100110011001101 * 2^-3
 +  -------------------------------------------------------------------
    0.3 = 10.0110011001100110011001100110011001100110011001100111  * 2^-3
        =  1.00110011001100110011001100110011001100110011001100111 * 2^-2
        =  1.0011001100110011001100110011001100110011001100110100  * 2^-2
                                                              ^^^
                                                          These bits

Now, look at the last bits of the result of the addition: they are 100, while 0.3 should have 011 as its last bits. (We will verify this with a test program below.)

You might point out that a CPU's FPU can work with an 80-bit extended-precision format internally; that is true, and the behavior is very situation- and hardware-dependent. Chances are, though, that the result gets rounded back to a 52-bit mantissa when it is stored as a double.

As an extra check, here is the output of a test program that prints the IEEE 754 representation in memory; it agrees perfectly with what I did by hand:

        Dec    IEEE 754            52-bit mantissa
                 ----------------------------------------------------
        0.3 =  1.0011001100110011001100110011001100110011001100110011 * 2^-2
  0.2 + 0.1 =  1.0011001100110011001100110011001100110011001100110100 * 2^-2

Indeed: the last three bits are different.
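The answer does not show the test program itself; a sketch of what it might look like in Java (class name, helper name, and output format are my own) is:

```java
public class Ieee754Bits {
    // Print sign, biased/unbiased exponent and the 52-bit mantissa of a double.
    static void dump(String label, double d) {
        long bits = Double.doubleToLongBits(d);
        long sign = bits >>> 63;
        long exponent = (bits >>> 52) & 0x7FF;        // biased exponent
        long mantissa = bits & 0x000FFFFFFFFFFFFFL;   // 52 fraction bits

        // Pad the fraction bits to 52 characters so leading zeros are visible.
        String mantissaBits = String.format("%52s",
                Long.toBinaryString(mantissa)).replace(' ', '0');
        System.out.printf("%-10s sign=%d exp=%d (2^%d) mantissa=1.%s%n",
                label, sign, exponent, exponent - 1023, mantissaBits);
    }

    public static void main(String[] args) {
        dump("0.3", 0.3);
        dump("0.2 + 0.1", 0.2 + 0.1);
    }
}
```

Running this should show the two 52-bit mantissas above, differing only in their last three bits.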

Martijn Courteaux
  • Wow, that is exactly the explanation I searched for :) thank you – MjeOsX Dec 29 '14 at 14:00
  • 0.1 is 1.100110011001100110011001100110011001100110011001101 x 2^-4 (ends in 1101, not 11001, due to rounding). But your conclusion is still OK. – Rick Regan Dec 29 '14 at 14:18
  • @RickRegan: Thanks! I fixed that. Now it perfectly agrees with the real results happening on my machine. – Martijn Courteaux Dec 29 '14 at 14:55
  • @MjeOsX: Please review this answer. I corrected an error in it. Basically the reason why it was false is the same, but some bits changed. If you think this answer was the most helpful, you can accept this answer. If the other answer helped you more, accept the other one. – Martijn Courteaux Dec 29 '14 at 14:56