
The result should be 806603.77, but why do I get 806603.8?

```csharp
float a = 855000.00f;
float b = 48396.23f;

float res = a - b;
Console.WriteLine(res);
Console.ReadKey();
```
mrd
  • If you want an exact result, you should use `decimal`. – Sergio Mar 15 '13 at 10:09
  • [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) – Oded Mar 15 '13 at 10:10
  • Float and Double are prone to rounding issues. Use `Decimal` as @Sergio recommended. You can find more information [here](http://stackoverflow.com/questions/618535/what-is-the-difference-between-decimal-float-and-double-in-c) – npinti Mar 15 '13 at 10:10
  • Not only is the result not what you want, `b` will not even contain an exact representation of the number you initialised it with. – JasonD Mar 15 '13 at 10:21
  • In this case, `double` would suffice. – leppie Mar 15 '13 at 10:30

4 Answers


A `float` (also called `System.Single`) has a precision equivalent to approximately seven significant decimal digits. Your difference `res` needs eight significant decimal digits, so it is to be expected that a `float` does not have enough precision to hold it.

ADDITION:

Some extra information: Near 806,000 (806 thousand), a float only has four bits left for the fractional part. So for res it will have to choose between

806603 + 12/16 == 806603.75000000, and
806603 + 13/16 == 806603.81250000

It chooses the first one, since that is closest to the ideal result. But both of these values are output as "806603.8" by `ToString()` (which `Console.WriteLine(float)` calls), because the general `ToString` call shows at most seven significant decimal digits. To reveal that two floating-point numbers are distinct even though they print the same with the standard formatting, use the round-trip format string `"R"`, for example

```csharp
Console.WriteLine(res.ToString("R"));
```
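The explanation above can be sketched end to end (a minimal illustration; the plain `WriteLine` output shown is the classic .NET Framework behavior, since .NET Core 3.0 the default `ToString` already round-trips and prints 806603.75 directly):

```csharp
using System;

class RoundTripDemo
{
    static void Main()
    {
        float a = 855000.00f; // exactly representable: an integer below 2^24
        float b = 48396.23f;  // actually stored as 48396.23046875
        float res = a - b;    // rounds to 806603.75, the nearer of the two candidates

        Console.WriteLine(res);               // .NET Framework prints: 806603.8
        Console.WriteLine(res.ToString("R")); // round-trip format prints: 806603.75
    }
}
```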
Jeppe Stig Nielsen

You should use `decimal` instead: `float` is a 32-bit type with only about 7 significant digits of precision, which is why the result differs. `decimal`, on the other hand, is a 128-bit type with 28-29 significant digits of precision.

```csharp
decimal a = 855000.00M;
decimal b = 48396.23M;

decimal res = a - b;
Console.WriteLine(res);
Console.ReadKey();
```

Output: 806603.77

Vishal Suthar

Because `float` has limited precision (it is only a 32-bit type). Use `double` or `decimal` if you want more precision.
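A minimal sketch of the `double` alternative, using the question's own values:

```csharp
using System;

class DoubleDemo
{
    static void Main()
    {
        // double carries roughly 15-16 significant decimal digits,
        // comfortably more than the 8 this subtraction needs.
        double a = 855000.00;
        double b = 48396.23;
        Console.WriteLine(a - b); // prints: 806603.77
    }
}
```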

Tim Rogers

Please be aware that just blindly using Decimal isn't good enough.

Read the link posted by Oded: What Every Computer Scientist Should Know About Floating-Point Arithmetic

Only then decide on the appropriate numeric type to use.

Don't fall into the trap of thinking that just using Decimal will give you exact results; it won't always.

Consider the following code:

```csharp
Decimal d1 = 1;
Decimal d2 = 101;
Decimal d3 = d1 / d2; // 1/101 cannot be represented exactly in decimal
Decimal d4 = d3 * d2; // mathematically, d4 = (d1/d2) * d2 = d1

if (d4 == d1)
{
    Console.WriteLine("Yay!");
}
else
{
    Console.WriteLine("Urk!");
}
```

If Decimal calculations were exact, that code should print "Yay!" because d1 should be the same as d4, right?

Well, it doesn't.

Also be aware that `Decimal` calculations are substantially slower than `double` calculations (often by an order of magnitude or more). They are not always suitable for non-currency calculations (e.g. calculating pixel offsets or physical quantities such as velocities, or anything involving transcendental numbers, and so on).

Matthew Watson
  • Another funny fact to note is that since `decimal.MaxValue` is `79228...`, there is higher precision for numbers whose significant digits start with something lower than this than there is for numbers whose significant digits are higher. To see what I mean, consider `800m / 3m * 3m` versus `500m / 3m * 3m`. First note that neither 800 nor 500 is evenly divisible by 3, so the quotient will have an infinite tail of `666...` which a `System.Decimal` will have to terminate with `...667` at some point. Still, because 800 is over the `79228...` limit, and 500 is under it, the precision differs. – Jeppe Stig Nielsen Mar 15 '13 at 10:44
  • I'm talking about the threshold `79228...` where precision changes from 29 to 28 significant digits. – Jeppe Stig Nielsen Mar 15 '13 at 10:46
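Jeppe Stig Nielsen's comment can be sketched as follows (a minimal illustration; the exact digit strings printed depend on `decimal`'s 96-bit mantissa limit, roughly `7.92e28`):

```csharp
using System;

class DecimalThresholdDemo
{
    static void Main()
    {
        // 800/3 and 500/3 both have an infinite 666... tail that decimal
        // must cut off, yet multiplying back by 3 behaves differently on
        // the two sides of the 79228... threshold: the 800 case overflows
        // the 29th digit and rounds back, while the 500 case keeps it.
        Console.WriteLine(800m / 3m * 3m);
        Console.WriteLine(500m / 3m * 3m);
        Console.WriteLine(800m / 3m * 3m == 800m); // True
        Console.WriteLine(500m / 3m * 3m == 500m); // False
    }
}
```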