It is unfortunate, I think. Near 139,000, a Decimal has far better precision than a Double. But still, because of this issue, different Doubles get projected onto the same Decimal. For example:
double doub1 = 138630.7838038626;
double doub2 = 138630.7838038628;
Console.WriteLine(doub1 < doub2); // true, values differ as doubles
Console.WriteLine((decimal)doub1 < (decimal)doub2); // false, values projected onto same decimal
In fact, there are six different representable Double values between doub1 and doub2 above, so they are not the same.
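One way to check that count (a sketch, relying on the fact that finite positive doubles have monotonically ordered IEEE 754 bit patterns, so the raw 64-bit difference counts the steps between them):

```csharp
using System;

double doub1 = 138630.7838038626;
double doub2 = 138630.7838038628;

// Same-sign finite doubles are ordered the same way as their raw bits,
// so subtracting the bit patterns gives the distance in ULPs.
long bits1 = BitConverter.DoubleToInt64Bits(doub1);
long bits2 = BitConverter.DoubleToInt64Bits(doub2);

Console.WriteLine(bits2 - bits1 - 1); // doubles strictly between them: 6
```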
Here is a somewhat silly workaround:
static decimal PreciseConvert(double doub)
{
    // Infinities and NaN have no decimal counterpart, so fail fast.
    if (double.IsNaN(doub) || double.IsInfinity(doub))
        throw new OverflowException("Value has no decimal representation.");

    // Parse the round-trip string; AllowLeadingSign is needed for negative
    // values, and InvariantCulture pins the decimal separator.
    // (Requires using System.Globalization;)
    return decimal.Parse(doub.ToString("R", CultureInfo.InvariantCulture),
        NumberStyles.AllowExponent | NumberStyles.AllowDecimalPoint | NumberStyles.AllowLeadingSign,
        CultureInfo.InvariantCulture);
}
The "R" format string ensures that enough extra digits are included to make the mapping injective (in the domain where Decimal has superior precision).
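To see the workaround restore the distinction, a self-contained sketch (it repeats the function above; the InvariantCulture and AllowLeadingSign hardening is a defensive addition, not part of the original idea):

```csharp
using System;
using System.Globalization;

double doub1 = 138630.7838038626;
double doub2 = 138630.7838038628;

Console.WriteLine((decimal)doub1 < (decimal)doub2);               // False: collapsed by the cast
Console.WriteLine(PreciseConvert(doub1) < PreciseConvert(doub2)); // True: distinction kept

static decimal PreciseConvert(double doub)
{
    if (double.IsNaN(doub) || double.IsInfinity(doub))
        throw new OverflowException("Value has no decimal representation.");
    return decimal.Parse(doub.ToString("R", CultureInfo.InvariantCulture),
        NumberStyles.AllowExponent | NumberStyles.AllowDecimalPoint | NumberStyles.AllowLeadingSign,
        CultureInfo.InvariantCulture);
}
```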
Note that in some range, a long (Int64) has precision superior to that of a Double. So I checked whether conversions there are done the same way (rounding to 15 significant digits first). They are not! So:
double doub3 = 1.386307838038626e18;
double doub4 = 1.386307838038628e18;
Console.WriteLine(doub3 < doub4); // true, values differ as doubles
Console.WriteLine((long)doub3 < (long)doub4); // true, full precision of double used when converting to long
It seems inconsistent to use a different "rule" when the target is decimal.
Note that, near this value 1.4e18, because of this, (decimal)(long)doub3 produces a more accurate result than just (decimal)doub3.