1

I understand that floating point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this, e.g.:

static void Main(string[] args)
{
    double x = 0, y = 0;

    x += 20013.8;
    x += 20012.7;

    y += 10016.4;
    y += 30010.1;

    Console.WriteLine("Result: " + x + " " + y + " " + (x == y));
    Console.Write("Press any key to continue . . . ");
    Console.ReadKey(true);
}

However, in this case, x and y are equal in the end.

Is it possible for me to demonstrate the inconsistency of floating point arithmetic using a program of similar complexity, and without using any really crazy numbers? I would like, if possible, to avoid mathematically correct values that go more than a few places beyond the decimal point.

mcoolbeth
  • `3/3 != (1/3)*3`? I'm not sure this is what you're looking for, or whether it actually yields the expected `1 != 0.999999...` – ANeves Apr 15 '10 at 17:21

9 Answers

5
double x = (0.1 * 3) / 3;
Console.WriteLine("x: {0}", x); // prints "x: 0.1"
Console.WriteLine("x == 0.1: {0}", x == 0.1); // prints "x == 0.1: False"
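(The first line can look exact because double.ToString() historically rounded to 15 significant digits by default; the value actually stored in x is a hair above 0.1, which is why the comparison is False.)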

Remark: don't conclude from this that floating point arithmetic is unreliable in .NET.

Darin Dimitrov
2

Here's an example based on a prior question that demonstrates float arithmetic not working out exactly as you would think.

float f = (13.45f * 20);    // intermediate result is rounded when stored to the float variable
int x = (int)f;             // cast applies to the stored (rounded) value
int y = (int)(13.45f * 20); // cast applies directly to the unrounded calculation
Console.WriteLine(x == y);  // prints False

In this case, false is printed to the screen. Why? Because of where the math is performed versus where the cast to int happens. For x, the math is performed in one statement and stored to f, and f is then cast to an integer. For y, the result of the calculation is never stored before the cast. (For x, some precision is lost when the intermediate result is rounded to a float before the cast; that rounding never happens for y.)

For an explanation of what's specifically happening in the float math, see this question/answer: Why differs floating-point precision in C# when separated by parentheses and when separated by statements?

Anthony Pegram
  • How about `float f1 = 1.0f; float f2 = 10.0f; double d = f1/f2;`? Even though float-to-double conversions are performed implicitly with no warning or typecast, and double-to-float conversions require an explicit typecast, the former will often produce erroneous results, while the latter seldom will. – supercat May 31 '12 at 16:41
2

My favourite demonstration boils down to

double d = 0.1;
d += 0.2;
d -= 0.3;

Console.WriteLine(d); // prints the tiny rounding residue, about 5.55E-17

The output is not 0.

AakashM
1

Try making it so the decimal part is not .5: one half is exactly representable in binary floating point, but most other decimal fractions are not.

Take a look at this article: http://floating-point-gui.de/
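As a sketch of that idea (illustrative code, not from the original answer): halves sum exactly in binary floating point, while tenths do not.

using System;

class HalvesVsTenths
{
    static void Main()
    {
        double halves = 0, tenths = 0;

        for (int i = 0; i < 10; ++i)
            halves += 0.5; // 0.5 is exactly 2^-1 in binary, so each add is exact

        for (int i = 0; i < 10; ++i)
            tenths += 0.1; // 0.1 has no finite binary representation, so each add rounds

        Console.WriteLine(halves == 5.0); // True
        Console.WriteLine(tenths == 1.0); // False
    }
}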

Justen
0

Try summing a VERY big and a VERY small number. The small one will be absorbed, and the result will be the same as the large number alone.
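For instance (an illustrative sketch, not code from the original answer): near 1e16 the spacing between adjacent doubles is already 2, so adding 1 changes nothing.

double big = 1e16;   // beyond 2^53, doubles can no longer represent every integer
double small = 1.0;

Console.WriteLine(big + small == big); // True: the small addend is absorbed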

Andrey
0

Try performing repeated operations on an irrational number (such as a square root) or on a fraction with a long repeating expansion. You'll quickly see errors accumulate. For instance, compare 1000000*Sqrt(2) with Sqrt(2)+Sqrt(2)+...+Sqrt(2).
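A sketch of that comparison (illustrative code, not from the original answer); on IEEE-754 doubles the two results differ in the last digits.

using System;

class SqrtAccumulation
{
    static void Main()
    {
        double root2 = Math.Sqrt(2);

        double summed = 0.0;
        for (int i = 0; i < 1000000; ++i)
            summed += root2; // each addition may round, so error accumulates

        double multiplied = 1000000 * root2; // a single rounding step

        Console.WriteLine(summed == multiplied); // False
        Console.WriteLine(summed - multiplied);  // small, but not zero
    }
}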

Dan Bryant
0

The simplest I can think of right now is this:

using System;

class Test
{
    private static void Main()
    {
        double x = 0.0;

        // 0.1 is not exactly representable in binary, so the error compounds
        for (int i = 0; i < 10; ++i)
            x += 0.1;

        Console.WriteLine("x = {0}, expected x = {1}, x == 1.0 is {2}", x, 1.0, x == 1.0);
        Console.WriteLine("Allowing for a small error: x == 1.0 is {0}", Math.Abs(x - 1.0) < 0.001);
    }
}
IVlad
0

I suggest that, if you're truly interested, you take a look at any one of a number of pages that discuss floating point numbers, some in gory detail. You will soon realize that, in a computer, they're a compromise, trading off accuracy for range. If you are going to be writing programs that use them, you do need to understand their limitations and the problems that can arise if you don't take care. It will be worth your time.

Larry
-1

double is accurate to ~15 significant digits. You need to demand more precision than that to really start hitting problems with only a few floating point operations.
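A quick way to see that digit limit (an illustrative sketch, not from the original answer): an addend below roughly the 16th significant digit is simply lost.

double one = 1.0;
Console.WriteLine(one + 1e-15 == one); // False: 1e-15 is still within double's precision
Console.WriteLine(one + 1e-17 == one); // True: below ~16 significant digits, the add is lost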

Billy ONeal