
While learning about floating point arithmetic, I came across the claim that "a float/double can't store 0.1 precisely".

There is a question on SO making the same point, and the accepted answer is also very convincing. However, I wanted to try it out on my own computer, so I wrote the following program:

double a = 0.1;

if (a == 0.1)
{
    Console.WriteLine("True");
}
else
{
    Console.WriteLine("False");
}

Console.Read();

and the console printed True. This was shocking, as I had just been convinced of the opposite. Can anyone tell me what's going on with floating point arithmetic? Or did I just get a computer that stores numeric values in base 10?

Imad

  • Given that it's stored inaccurately *the same way* both times you wrote it, why *wouldn't* they be equal? – jonrsharpe Jul 01 '18 at 07:55
  • https://en.wikipedia.org/wiki/Floating-point_arithmetic – TheGeneral Jul 01 '18 at 07:56
  • 2
    @jonrsharpe: I completely agree that when you think about it in the right way, it becomes obvious. But I think it's a reasonable question if you're not used to thinking about exactly what's going on. – Jon Skeet Jul 01 '18 at 08:04
  • @jonrsharpe the question I mentioned was doing the same thing; please correct me if I'm wrong. – Imad Jul 01 '18 at 08:10
  • I get the correct results on my computer. Are you using Debug or Release? Does it give the same results for both? There are some microprocessors that have internal bugs that give wrong results. Debug uses a simulator to perform the math, while Release uses the floating point arithmetic unit inside the micro. I've seen both fail. Some PCs have patches to fix the bug in the FPU, and the patches may be installed wrong depending on the micro installed in the PC. The answer should always be True, except for the bugs. – jdweng Jul 01 '18 at 09:34
  • @jdweng: The OP is getting True as well, but didn't understand why. Given that there's no arithmetic involved here, I doubt that there's any relevance to FPU bugs. – Jon Skeet Jul 01 '18 at 10:59
  • @jdweng You have often repeated this false statement about the debug build. It doesn't simulate FPU ops in software; they are performed on hardware. As for these FPU bugs: you're referring to the Pentium FDIV bug, which ceased being relevant around 20 years ago. – David Heffernan Jul 02 '18 at 06:29

1 Answer

Your program is only checking whether the compiler is approximating 0.1 in the same way twice, which it does.

The value of a isn't 0.1, and you're not checking whether it is 0.1. You're checking whether "the closest representable value to 0.1" is equal to "the closest representable value to 0.1".
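
One quick way to convince yourself of that (a small sketch of my own, not part of the original answer) is to compare the raw bit patterns of the two sides with BitConverter:

using System;

class Program
{
    static void Main()
    {
        double a = 0.1;

        // Both occurrences of the literal 0.1 round to the identical
        // 64-bit pattern, which is exactly why the comparison in the
        // question succeeds.
        Console.WriteLine(BitConverter.DoubleToInt64Bits(a).ToString("X16"));   // 3FB999999999999A
        Console.WriteLine(BitConverter.DoubleToInt64Bits(0.1).ToString("X16")); // 3FB999999999999A
    }
}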

Your code is effectively compiled to this:

double a = 0.1000000000000000055511151231257827021181583404541015625;

if (a == 0.1000000000000000055511151231257827021181583404541015625)
{
    Console.WriteLine("True");
}
else
{
    Console.WriteLine("False");
}

... because 0.1000000000000000055511151231257827021181583404541015625 is the double value that's closest to 0.1.
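
If you want to verify that exact decimal expansion yourself, you can reconstruct it from the bits of the double. The following is a rough sketch of my own (not from the answer), using BigInteger long division; it handles finite values only, not NaN or infinity:

using System;
using System.Numerics;
using System.Text;

class ExactDouble
{
    static void Main()
    {
        // Prints 0.1000000000000000055511151231257827021181583404541015625
        Console.WriteLine(ToExactString(0.1));
    }

    // Decomposes a finite double into mantissa * 2^exponent and prints its
    // exact decimal expansion, which always terminates because the
    // denominator is a power of two.
    static string ToExactString(double d)
    {
        long bits = BitConverter.DoubleToInt64Bits(d);
        bool negative = bits < 0;
        int exponent = (int)((bits >> 52) & 0x7FF);
        long mantissa = bits & 0xFFFFFFFFFFFFF;

        if (exponent == 0)
            exponent++;               // subnormal: no implicit leading bit
        else
            mantissa |= 1L << 52;     // normal: restore the implicit 1 bit
        exponent -= 1075;             // unbias, and account for the 52 fraction bits

        // The value is mantissa * 2^exponent; express it as a fraction.
        BigInteger numerator = mantissa;
        BigInteger denominator = BigInteger.One;
        if (exponent > 0)
            numerator <<= exponent;
        else
            denominator <<= -exponent;

        var sb = new StringBuilder();
        if (negative) sb.Append('-');
        sb.Append(BigInteger.DivRem(numerator, denominator, out BigInteger remainder));
        if (remainder != 0)
        {
            sb.Append('.');
            while (remainder != 0)
            {
                remainder *= 10;
                sb.Append(BigInteger.DivRem(remainder, denominator, out remainder));
            }
        }
        return sb.ToString();
    }
}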

There are still times you can see some very odd effects. While double is defined to be a 64-bit IEEE-754 number, the C# specification allows intermediate representations to use higher precision. That means sometimes the simple act of assigning a value to a field can change results - or even casting a value which is already double to double.
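
As a purely illustrative sketch of where the spec leaves room for this (my own addition, not from the answer; on a modern 64-bit JIT using SSE2 registers you will almost certainly see True for both lines):

using System;

class Program
{
    static double field; // writing here forces a true 64-bit double

    static void Main()
    {
        // The C# spec permits the intermediate result of Compute() to be
        // held at higher precision (classically the 80-bit x87 registers
        // on 32-bit x86). Storing it in a field, or casting with (double),
        // forces a round to 64 bits, so on such hardware the two
        // comparisons below could in principle disagree.
        field = Compute();
        Console.WriteLine(Compute() == field);
        Console.WriteLine((double)Compute() == field);
    }

    static double Compute() => 0.1 * 3.0; // any expression involving a rounding step
}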

In the question you refer to, we don't really know how the original value is obtained. The question states:

I've a double variable called x. In the code, x gets assigned a value of 0.1

We don't know exactly how it's assigned a value of 0.1, and that detail is important. We know the value won't be exactly 0.1, so what kind of approximation has been involved? For example, consider this code:

using System;

class Program
{
    static void Main()
    {
        SubtractAndCompare(0.3, 0.2);
    }

    static void SubtractAndCompare(double a, double b)
    {
        double x = a - b;
        Console.WriteLine(x == 0.1);
    }
}

The value of x will be roughly 0.1, but it's not the exact same approximation as "the closest double value to 0.1". In this case it happens to be slightly less than 0.1 - the value is exactly 0.09999999999999997779553950749686919152736663818359375, which isn't equal to 0.1000000000000000055511151231257827021181583404541015625... so the comparison prints False.
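
A common workaround, sketched below as my own addition rather than part of the answer, is to compare within a tolerance instead of using exact equality:

using System;

class Program
{
    // Hypothetical helper: treats two doubles as equal when they differ
    // by no more than the given tolerance.
    static bool ApproximatelyEqual(double x, double y, double tolerance = 1e-9)
        => Math.Abs(x - y) <= tolerance;

    static void Main()
    {
        double x = 0.3 - 0.2;
        Console.WriteLine(x == 0.1);                   // False
        Console.WriteLine(ApproximatelyEqual(x, 0.1)); // True
    }
}

The right tolerance depends on the magnitudes and the arithmetic involved; a fixed absolute tolerance like the one above is only reasonable for values near 1.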

Jon Skeet
  • Thanks, that's right, but wasn't the question I mentioned doing the same thing? – Imad Jul 01 '18 at 08:12
  • @Imad: In the question you linked to, the OP says "In the code, x gets assigned a value of 0.1" but never shows how that happens. If it's the result of arithmetic (e.g. subtracting 0.8 from 0.9) then it may well have a value which is near to 0.1 but not the same approximation. We can't really tell from the question. – Jon Skeet Jul 01 '18 at 08:14
  • @Imad: I've provided an example to help clarify. – Jon Skeet Jul 01 '18 at 08:25