In C# 4.0, the following cast behaves very unexpectedly:
(decimal)1056964.63f
1056965
Casting to double works just fine:
(double)1056964.63f
1056964.625
(decimal)(double)1056964.63f
1056964.625
Is this by design?
The problem is with your initial value - float is only accurate to 7 significant decimal digits anyway:
float f = 1056964.63f;
Console.WriteLine(f); // Prints 1056965
So really the second example is the unexpected one in some ways.
Now the exact value in f is 1056964.625, but that's the value you get for every input from about 1056964.563 to 1056964.687 - so even the ".6" part isn't always correct. That's why the docs for System.Single state:
By default, a Single value contains only 7 decimal digits of precision, although a maximum of 9 digits is maintained internally.
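To see both points in one place, here is a small console sketch; the commented outputs assume .NET Framework-era formatting, which is what the question's C# 4.0 results reflect (newer runtimes print the shortest round-trippable string instead):

using System;

class SinglePrecisionDemo
{
    static void Main()
    {
        float f = 1056964.63f;

        Console.WriteLine(f);                 // 1056965 - default format shows only 7 significant digits
        Console.WriteLine(f.ToString("G9"));  // up to 9 digits, enough to round-trip the stored value
        Console.WriteLine((double)f);         // 1056964.625 - the exact stored value

        // Nearby literals in roughly the 1056964.563..1056964.687 range all round to the same float.
        Console.WriteLine(1056964.57f == f);  // True
        Console.WriteLine(1056964.68f == f);  // True
    }
}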
The extra information is still preserved when you convert to double, because double can hold it without "interpreting" it at all - whereas converting it to a decimal form (either to print it, or for the decimal type) goes through code which knows it can't "trust" those last two digits.
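A short sketch of the two conversion paths side by side; the commented values are the ones the question reports for its C# 4.0 / .NET Framework environment:

using System;

class ConversionPaths
{
    static void Main()
    {
        float f = 1056964.63f;

        double widened = f;                     // lossless widening: every float value is exactly representable as a double
        Console.WriteLine((float)widened == f); // True - nothing was lost on the way to double

        Console.WriteLine((decimal)f);          // 1056965 - trusts only the 7 "reliable" digits
        Console.WriteLine((decimal)widened);    // 1056964.625 - sees the exact stored value
    }
}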
It is by design. Float can hold your number [edit]quite accurately[/edit], but for conversion purposes it rounds it up to the nearest integer, because there are only a few representable float values between your number and that integer (1056964.75 and 1056964.875). See COMNumber::FormatSingle and COMDecimal::InitSingle from SSCLI.
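As a rough model of that behaviour (an illustration only, not the actual CLR code paths named above), converting via the seven-significant-digit string form gives the same result as the direct cast here:

using System;
using System.Globalization;

class SevenDigitModel
{
    static void Main()
    {
        float f = 1056964.63f;

        decimal direct  = (decimal)f;
        decimal viaText = decimal.Parse(f.ToString("G7", CultureInfo.InvariantCulture),
                                        CultureInfo.InvariantCulture);

        Console.WriteLine(direct);   // 1056965
        Console.WriteLine(viaText);  // 1056965 - same rounding to 7 significant digits
    }
}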