
The output of the following code:

var a = 0.1;
var count = 1;

while (a > 0)
{
    if (count == 323)
    {
        var isZeroA = (a * 0.1) == 0;
        var b = a * 0.1;
        var isZeroB = b == 0;

        Console.WriteLine("IsZeroA: {0}, IsZeroB: {1}", isZeroA, isZeroB);
    }

    a *= 0.1;
    ++count;
}

is


IsZeroA: False, IsZeroB: True

Strangely, when I put a breakpoint after if (count == 323) while debugging and put the expression (a * 0.1) == 0 in the Visual Studio Watch window, it reports that the expression is true.

Does anyone know why the expression a * 0.1 is not zero, but when it is assigned to the variable b, b is zero?
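
As an editorial aside (not part of the original question): one way to see how tiny a has become by that iteration is to dump its raw IEEE 754 bit pattern with BitConverter.DoubleToInt64Bits. By count == 323 the value is already subnormal, so storing a * 0.1 into a 64-bit double underflows to zero (which is why b compares equal to zero).

var a = 0.1;
var count = 1;

while (a > 0)
{
    if (count == 323)
    {
        // a is subnormal here: the exponent field of the bit pattern is all zeros.
        Console.WriteLine("a    = {0:R}", a);
        Console.WriteLine("bits = 0x{0:X16}", BitConverter.DoubleToInt64Bits(a));
    }

    a *= 0.1;
    ++count;
}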

Stipo
  • Use decimal instead of double if you expect accurate results – Tim Schmelter Dec 04 '15 at 10:41
  • @TimSchmelter I would like to know why this is happening. You have an expression which evaluates to one value, you assign that expression to a variable of the same type, and then that variable evaluates to a different value. Why does this happen if both the expression and the variable are of the same type and use the same amount of storage space? – Stipo Dec 04 '15 at 10:45
  • Computers count with 2 fingers, not 10 fingers like you do. When you have 2 fingers, 0.1 cannot be expressed with a limited number of digits, just like you cannot express 1/3 with 10 fingers: you'd have to write 0.3333... and you'd eventually run out of paper. If you multiply that number by 3 then you don't get 1, you get 0.9999... The amount of paper a computer uses is small; System.Double can store up to 15 accurate digits. – Hans Passant Dec 04 '15 at 10:50 (a short demo follows this comment thread)
  • @HansPassant Same thing happens when you use a number that is perfectly comfortable in two-finger counting - e.g. 0.5, 0.25, or 0.125. It just happens at a later step ([demo](http://ideone.com/6iJIpt)). – Sergey Kalinichenko Dec 04 '15 at 11:00
  • I can't think of a reason for this other than a bug in the optimizer. C# tries to save on computation and decides that in order for `a * 0.1` to be zero, `a` must be zero. This is true in mathematics, but not true for `double`s. It appears that the code sets `isZeroA` based on the value of `a`, not based on the value of `a` times the constant. – Sergey Kalinichenko Dec 04 '15 at 11:10
  • This is **not** a duplicate of "floating point math being broken" questions. – Sergey Kalinichenko Dec 04 '15 at 11:11
  • @HansPassant Actually, computers often count with 32 fingers, and differentiate between a "bent" and a "stretched" finger. You can count up to 1023 with 10 fingers (but be careful when showing someone the number "4" ...) – Marco13 Dec 04 '15 at 11:52
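
As a quick illustration of Hans Passant's comment above (an editorial addition, not from the original thread): printing 0.1 with 17 significant digits shows that the double actually stored is not exactly one tenth, and the representation error surfaces in ordinary arithmetic.

double tenth = 0.1;

// "G17" prints enough digits to show the value actually stored.
Console.WriteLine(tenth.ToString("G17"));   // 0.10000000000000001

// The rounding error shows up in plain arithmetic, too.
Console.WriteLine(0.1 + 0.2 == 0.3);        // False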

1 Answer


This does not happen with my particular hardware and CLR version. Edit: Oh yes, it happens to me too, if I use "x86" (or "Any CPU" with "Prefer 32-bit" enabled) and "Debug" mode.

The reason why things like this may sometimes happen is that the system may hold the value in an 80-bit CPU register, where it has "extra" precision. But when the value is put into a real 64-bit Double, it changes.

If you change into:

var isZeroA = (double)(a * 0.1) == 0;

then formally you change nothing (a cast from double to double!), but in reality it may force the runtime to convert from 80-bit to 64-bit. Does it change the output for you? Edit: This "no-op" cast changes something for me! For more on such cast-to-self tricks with floating-point types in C#, see the thread Casting a result to float in method returning float changes result.

Note that Double arithmetic is not deterministic (i.e. the same calculation can give different results when repeated) because of these 64-bit/80-bit issues. See the thread Is floating-point math consistent in C#? Can it be?


The following simpler program also shows the issue in cases where it is present (at least on my system):

double j = 9.88131291682493E-324;
Console.WriteLine(j * 0.1 == 0);             // "False"
double k = j * 0.1;
Console.WriteLine(k == 0);                   // "True"

Console.WriteLine((double)(j * 0.1) == 0);   // "True", double-to-double cast!

You can even start with j = 1E-323 in that code. It leads to the same Double.
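
To check that claim (an editorial addition): comparing the raw bit patterns shows that the literals 9.88131291682493E-324 and 1E-323 parse to the same subnormal double.

double j1 = 9.88131291682493E-324;
double j2 = 1E-323;

// Both literals round to the same subnormal value, so the bit patterns match.
Console.WriteLine(BitConverter.DoubleToInt64Bits(j1));   // same integer for both lines
Console.WriteLine(BitConverter.DoubleToInt64Bits(j2));
Console.WriteLine(j1 == j2);                             // True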


Reference: The often-cited document What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg appears on the internet with an added section, Differences Among IEEE 754 Implementations, by an anonymous author (not Goldberg). That added section explains the issue you are seeing in a technical manner.

Also see x86 Extended Precision Format (a section of a Wikipedia page) for more about this 80-bit format.

Jeppe Stig Nielsen
  • Could you please try running [this example](http://ideone.com/6iJIpt) on your system, and see if you get a printout at some point? – Sergey Kalinichenko Dec 04 '15 at 11:16
  • @dasblinkenlight It depends on whether I use x86 or x64, and whether I use Debug or Release. – Jeppe Stig Nielsen Dec 04 '15 at 11:24
  • Happens to me also in Release. – IS4 Dec 04 '15 at 11:42
  • It probably depends on multiple factors: Debug vs. Release, whether optimizations are enabled (`/optimize+`), compiler version, target platform, ... The lesson is to never ever rely on floating-point equality comparisons without using an epsilon. – Dirk Vollmar Dec 04 '15 at 13:21
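
A minimal sketch of the comparison style Dirk Vollmar recommends above (an editorial addition; the tolerance value is an arbitrary illustration, not a universal constant):

static bool NearlyEqual(double x, double y, double tolerance = 1e-9)
{
    // Treat two doubles as equal when they differ by less than the tolerance.
    return Math.Abs(x - y) <= tolerance;
}

Console.WriteLine(0.1 * 0.1 == 0.01);              // False: exact equality is fragile
Console.WriteLine(NearlyEqual(0.1 * 0.1, 0.01));   // True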