75
assert(0.1 + 0.2 != 0.3); // shall be true

is my favorite check that a language uses native floating point arithmetic.

C++

#include <cstdio>

int main()
{
   printf("%d\n", (0.1 + 0.2 != 0.3));
   return 0;
}

Output:

1

http://ideone.com/ErBMd

Python

print(0.1 + 0.2 != 0.3)

Output:

True

http://ideone.com/TuKsd

Other examples

Why is this not true for D? As I understand it, D uses native floating point numbers. Is this a bug? Do they use some specific number representation? Something else? Pretty confusing.

D

import std.stdio;

void main()
{
   writeln(0.1 + 0.2 != 0.3);
}

Output:

false

http://ideone.com/mX6zF


UPDATE

Thanks to LukeH. This is an effect of the floating point constant folding described in the ["Floating Point Constant Folding" section](http://www.d-programming-language.org/float.html) of the D documentation.

Code:

import std.stdio;

void main()
{
   writeln(0.1 + 0.2 != 0.3); // constant folding is done in real precision

   auto a = 0.1;
   auto b = 0.2;
   writeln(a + b != 0.3);     // standard calculation in double precision
}

Output:

false
true

http://ideone.com/z6ZLk
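
To make the difference visible, here is one more small sketch (not one of the snippets above; it assumes the same double-precision run-time behaviour as the update's output, so treat the printed digits as approximate) that prints both run-time values with extra digits:

import std.stdio;

void main()
{
   auto a = 0.1;               // run-time doubles, as in the update above
   auto b = 0.2;
   auto sum = a + b;           // calculated in double precision
   auto lit = 0.3;

   writefln("%.17f", sum);     // roughly 0.30000000000000004
   writefln("%.17f", lit);     // roughly 0.29999999999999999
}

The double-precision sum is the representable double just above 0.3, while the literal rounds to the one just below it, which is why the run-time comparison reports true.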

Stas
  • Please put relevant code examples directly in the question and not at external links, both to make sure that the full information in the question survives and to make it easier to read. – Anders Abel Jul 29 '11 at 14:10
  • I was going to reflexively click the close button until I noticed you wrote `!=` instead of `==`. – dan04 Jul 29 '11 at 14:10
  • Regarding your update: This is not a "problem" with the compiler optimiser. It's legal floating-point behaviour, and the possibility of this happening is explained in the ["Floating Point Constant Folding" section](http://www.d-programming-language.org/float.html) of the D documentation. – LukeH Jul 29 '11 at 14:34
  • Please look at what happens when you use the `real` type instead of the `double` type: http://ideone.com/NAXkM – Jean Hominal Jul 29 '11 at 14:34
  • @Jean Hominal: The case with the `real` type is interesting. Thinking... – Stas Jul 29 '11 at 14:44
  • @Anders Abel: Added code examples, but for C++ and Python only. Java and C# are too verbose imho :) – Stas Aug 01 '11 at 10:56
  • Computerphile has an amazing video explaining floating point: https://www.youtube.com/watch?v=PZRI1IfStY0 – Felipe Sabino Jun 01 '14 at 18:23
  • It also happens in Ruby. – Furkan Ayhan Jun 01 '14 at 19:43

3 Answers

53

(Flynn's answer is the correct answer. This one addresses the problem more generally.)


You seem to be assuming, OP, that the floating-point inaccuracy in your code is deterministic and predictably wrong (in a way, your approach is the polar opposite of that of people who don't understand floating point yet).

Although floating-point inaccuracy is deterministic (as Ben points out), it will not look that way from your code's point of view unless you are very deliberate about what happens to your values at every step. Any number of factors could lead to `0.1 + 0.2 == 0.3` succeeding: compile-time optimisation is one, tweaked values for those literals is another.

Rely here neither on success nor on failure; do not rely on floating-point equality either way.
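
For illustration, a minimal sketch of the usual alternative, comparing against an explicit tolerance (the `1e-9` tolerance is an arbitrary value chosen for this example, not anything mandated by D):

import std.stdio;
import std.math : fabs;

void main()
{
   auto a = 0.1;
   auto b = 0.2;

   // Compare against a tolerance instead of testing exact equality.
   enum eps = 1e-9;
   writeln(fabs((a + b) - 0.3) < eps);   // true, however the sum happens to round
}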

Lightness Races in Orbit
  • That's a very good point - you can't rely on floating point arithmetic to give you the wrong answer! :-) – Steve Morgan Jul 29 '11 at 14:30
  • Floating-point inaccuracy DOES yield deterministic, predictable answers... as long as you use sequence points and assignments to variables to force rounding at every step. And beware of compiler options that eliminate rounding; with MSVC, for example, `/fp:precise` should be used. – Ben Voigt Jul 29 '11 at 19:55
  • This is a terrible explanation. IEEE 754 unambiguously defines basic operations, including `+`. The problem here is one of programming language, not of floating-point. Also, floating-point equality is perfectly defined. You shouldn't use it when it's not what you want, that's all. – Pascal Cuoq Jul 30 '11 at 18:06
  • @Pascal: IEEE 754 does. D does not. You assert that "the problem here is one of programming language", and... you're right! If you look at the question *really* closely, you'll see that it is tagged `d`, not `IEEE 754`. I really hope that helps you understand the question. – Lightness Races in Orbit Aug 01 '11 at 01:25
  • @Ben: Sure, if you control all of those factors. My answer does presume that the programmer doesn't do that. I edited my answer to word that better. – Lightness Races in Orbit Aug 01 '11 at 15:11
47

It's probably being optimized to `(0.3 != 0.3)`, which is obviously false. Check the optimization settings, make sure they're switched off, and try again.
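
A rough sketch of the idea (assuming the folding happens as described in the D documentation linked in the comments below):

import std.stdio;

void main()
{
   // After the literals are folded at compile time, the comparison is
   // effectively this, which is of course false:
   writeln(0.3 != 0.3);       // false

   // Keeping the operands in run-time variables sidesteps the folding,
   // and the double-precision sum no longer equals 0.3:
   auto a = 0.1;
   auto b = 0.2;
   writeln(a + b != 0.3);     // true
}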

Flynn1179
  • Wait, why would the compiler do decimal floating point calculation and the runtime do binary floating point calculation? – Jean Hominal Jul 29 '11 at 14:14
  • Good point. The funny thing is, I just tried this, and I'm getting false; I can't repro the OP's result myself. I'm compiling to 32 bit though; I'm wondering if 64 bit makes a difference. – Flynn1179 Jul 29 '11 at 14:15
  • This is the correct answer. See the "Floating Point Constant Folding" section of http://www.d-programming-language.org/float.html. – LukeH Jul 29 '11 at 14:17
  • Well, I just tried asserting `0.1f + 0.2f != 0.3f` and that does evaluate to true. – Flynn1179 Jul 29 '11 at 14:18
  • Definitely something with optimization. Tried the same with variables and got true: http://ideone.com/zO4OD – bezmax Jul 29 '11 at 14:19
  • From [the link I posted in my comment](http://www.d-programming-language.org/float.html): "Different compiler settings, optimization settings, and inlining settings can affect opportunities for constant folding, therefore the results of floating point calculations may differ depending on those settings." – LukeH Jul 29 '11 at 14:21
  • Heh, I just re-read the question; I thought by 'D' you meant the fourth example in that list; I was trying to repro it in C#! – Flynn1179 Jul 29 '11 at 14:21
  • Interestingly, IEEE 754 has a new(ish) data type called decimal32 (and decimal64). Most people think of floating point as "binary32" and "binary64" as defined by the spec. The useful type (what many people "mean" by floating point) is actually decimal64. Does D allow specifying whether a float is "binary" or "decimal", and does it include that for constants? Using "0.2" in a language without any type specifier for the constant seems like part of the problem. – Brian Bulkowski Nov 14 '15 at 17:38
5

According to my interpretation of the D language specification, floating point arithmetic on x86 would use 80 bits of precision internally, instead of only 64 bits.

One would have to check, however, whether that is enough to explain the result you observe.
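
A quick sketch of one way to test that (not from the original answer; the expected result is the one reported for the `real` variant in the comments below):

import std.stdio;

void main()
{
   // Hold everything in 80-bit reals instead of 64-bit doubles.
   real a = 0.1;
   real b = 0.2;
   real c = 0.3;
   writeln(a + b != c);   // false again, per the comment below
}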

Jean Hominal
  • Woah, @Tomalak, my head's just exploded ;-) – Steve Morgan Jul 29 '11 at 14:32
  • @Tomalak: as are 0.2 and 0.3 - but rounding with 80 bits of precision instead of 64 could make the values "equal" instead of distinct. And I have just checked with variables of the `real` type, and it evaluates to false again: http://ideone.com/sIFgk – Jean Hominal Jul 29 '11 at 14:32