The result should be 806603.77, so why do I get 806603.8?
float a = 855000.00f;
float b = 48396.23f;
float res = a - b;
Console.WriteLine(res);
Console.ReadKey();
A float (also called System.Single) has a precision equivalent to approximately seven decimal digits. Your res difference needs eight significant decimal digits, so it is to be expected that a float does not have enough precision.
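You can see the limit directly with a minimal check (a sketch I'm adding here; the "R" round-trip format it uses is explained in the addition below). Even the literal 806603.77f cannot be stored exactly; the nearest representable float is 806603.75:

float f = 806603.77f;               // the literal itself cannot be stored exactly
Console.WriteLine(f.ToString("R")); // prints 806603.75, the nearest float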
ADDITION:
Some extra information: near 806,000 (806 thousand), a float only has four bits left for the fractional part, so for res it has to choose between

806603 + 12/16 == 806603.75000000, and
806603 + 13/16 == 806603.81250000

It chooses the first one since it is closest to the ideal result. But both of these values are output as "806603.8" when calling ToString() (which Console.WriteLine(float) does call), because a maximum of 7 significant decimal digits is shown with the general ToString call. To reveal that two floating-point numbers are distinct even though they print the same with the standard formatting, use the round-trip format string "R", for example:
Console.WriteLine(res.ToString("R"));
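For instance, here is a minimal sketch comparing the two candidate values above. (Note: the expected output assumes .NET Framework, where the default float ToString shows at most 7 significant digits; since .NET Core 3.0 the default already prints the shortest round-trippable string, so the first two lines would differ there.)

float lo = 806603.75f;               // 806603 + 12/16
float hi = 806603.8125f;             // 806603 + 13/16
Console.WriteLine(lo);               // 806603.8
Console.WriteLine(hi);               // 806603.8  (same text, different value)
Console.WriteLine(lo.ToString("R")); // 806603.75
Console.WriteLine(hi.ToString("R")); // 806603.8125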
You should use decimal instead. A float is only 32 bits wide with about 7 digits of precision, which is why the result differs; a decimal, on the other hand, is 128 bits wide with 28-29 digits of precision.
decimal a = 855000.00M;
decimal b = 48396.23M;
decimal res = a - b;
Console.WriteLine(res);
Console.ReadKey();
Output: 806603.77
Because float has limited precision (32 bits). Use double or decimal if you want more precision.
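For example, a double (with roughly 15-16 significant decimal digits) prints the expected result here, even though the stored binary value is still only an approximation of 806603.77:

double a = 855000.00;
double b = 48396.23;
double res = a - b;
Console.WriteLine(res); // 806603.77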
Please be aware that just blindly using Decimal isn't good enough.
Read the link posted by Oded: What Every Computer Scientist Should Know About Floating-Point Arithmetic
Only then decide on the appropriate numeric type to use.
Don't fall into the trap of thinking that just using Decimal will give you exact results; it won't always.
Consider the following code:
Decimal d1 = 1;
Decimal d2 = 101;
Decimal d3 = d1 / d2; // 1/101 = 0.00990099... repeats forever, so d3 is rounded
Decimal d4 = d3 * d2; // d4 = (d1/d2) * d2, which should equal d1 if arithmetic were exact
if (d4 == d1)
{
    Console.WriteLine("Yay!");
}
else
{
    Console.WriteLine("Urk!");
}
If Decimal calculations were exact, that code should print "Yay!" because d1 should be the same as d4, right?
Well, it doesn't: 1/101 has no finite decimal expansion, so d3 is rounded to 28 significant digits, and multiplying back by 101 yields 0.9999999999999999999999999999 rather than 1.
Also be aware that Decimal calculations are considerably slower than double calculations (decimal arithmetic is implemented in software, while double maps to hardware floating-point instructions). They are not always suitable for non-currency calculations, e.g. calculating pixel offsets or physical quantities such as velocities, or anything involving transcendental numbers.
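If you want to gauge the cost on your own machine, here is a rough micro-benchmark sketch (my own illustration, not part of the answer; absolute timings vary by machine and runtime):

// requires: using System; using System.Diagnostics;
const int N = 1_000_000;

var sw = Stopwatch.StartNew();
double dSum = 0.0;
for (int i = 1; i <= N; i++) dSum += 1.0 / i;  // hardware floating-point division
sw.Stop();
Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

sw.Restart();
decimal mSum = 0m;
for (int i = 1; i <= N; i++) mSum += 1m / i;   // software decimal division
sw.Stop();
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");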