The following C# code:
int n = 3;
double dbl = 1d / n;
decimal dec = 1m / n;
Console.WriteLine(dbl * n == 1d);
Console.WriteLine(dec * n == 1m);
outputs
True
False
Obviously, neither double nor decimal can represent 1/3 exactly. But dbl * n is rounded to 1 and dec * n is not. Why? Where is this behaviour documented?
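To see what is actually stored, here is the same snippet printing the intermediate values at full precision (the comments show the output I would expect; "G17" round-trips the underlying binary value of a double):

int n = 3;
double dbl = 1d / n;
decimal dec = 1m / n;

// The stored double is slightly below 1/3; multiplying by 3 rounds back up to exactly 1.
Console.WriteLine(dbl.ToString("G17"));       // 0.33333333333333331
Console.WriteLine((dbl * n).ToString("G17")); // 1

// The stored decimal is 28 threes; multiplying by 3 gives 28 nines, which is not 1.
Console.WriteLine(dec);                        // 0.3333333333333333333333333333
Console.WriteLine(dec * n);                    // 0.9999999999999999999999999999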
UPDATE
Please note that my main question here is why they behave differently. Presuming that the choice of rounding was a conscious one made when IEEE 754 and .NET were designed, I would like to know the reasons for choosing one type of rounding over the other. In the above example, double seems to perform better, producing the mathematically correct answer despite having fewer significant digits than decimal. Why did the creators of decimal not use the same rounding? Are there scenarios when the existing behaviour of decimal would be more beneficial?
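For contrast, here is the kind of case I can think of myself (just an illustration of what I mean by "more beneficial"), where decimal is exact and double is not:

Console.WriteLine(0.1d * 3 == 0.3d); // False: 0.1 has no exact binary representation
Console.WriteLine(0.1m * 3 == 0.3m); // True: 0.1 is exact in decimal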