
Double Precision is: 15-16 digits.

Decimal Precision is: 28-29 significant digits.

so one might expect that decimal can be converted to double implicitly. But consider this code:

double x = 100.3;
decimal y = 10.2m; // decimal literals need the m suffix

x = y;
y = x;

Both assignments, x = y and y = x, produce a compile-time error.

Why can't we convert decimal to double, or double to decimal, implicitly?
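For reference, the compiler does allow both conversions with an explicit cast; a minimal sketch (variable names reused from the snippet above):

    double x = 100.3;
    decimal y = 10.2m;

    x = (double)y;   // compiles: explicit cast, may lose precision
    y = (decimal)x;  // compiles: explicit cast, may throw OverflowException at run time

So the question is specifically about why the language designers made these conversions explicit-only.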

Please read: double to decimal and decimal to double

2 Answers


The ideas behind these two types are completely different. See this blog post by Eric Lippert.

Edit:

Quote from the blog: "There cannot be an implicit conversion from double to decimal because of the range discrepancy; a huge number of doubles are larger than the largest possible decimal, and therefore an implicit conversion would either have to throw or silently lose perhaps an enormous quantity of magnitude, both of which are unacceptable. There could be an implicit conversion from decimal to double because that would only lose precision, not magnitude."
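Both failure modes from the quote can be demonstrated with explicit casts; a small sketch (values chosen here for illustration):

    // decimal -> double: only precision is lost
    decimal exact = 0.1m;
    double approx = (double)exact;        // 0.1 is not exactly representable in binary
    Console.WriteLine(approx.ToString("R"));

    // double -> decimal: magnitude can be lost entirely
    double huge = 1e300;                  // far beyond decimal.MaxValue (about 7.9E+28)
    try
    {
        decimal d = (decimal)huge;
    }
    catch (OverflowException)
    {
        Console.WriteLine("overflow");    // the cast throws at run time
    }

This is exactly why the cast from double to decimal must be explicit: the conversion can fail outright, not merely round.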

Alex Siepman

Because although decimal offers higher precision, double has a far larger range.

Doubles range from -1.79769313486232E+308 to 1.79769313486232E+308, while decimals range from -79228162514264337593543950335 to 79228162514264337593543950335. That is a HUGE difference. However, you don't normally want to convert from decimal to double anyway: decimal is typically chosen precisely because you can't afford to lose precision (it is commonly used in banking, where values need to be exact).
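Those range constants don't have to be memorized; both types expose them directly, so the gap can be checked in a couple of lines:

    Console.WriteLine(double.MaxValue);   // about 1.8E+308
    Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335, about 7.9E+28

    // Every decimal fits in a double's range, but not vice versa:
    double fits = (double)decimal.MaxValue;   // succeeds (with rounding)
    Console.WriteLine(fits < double.MaxValue); // True

Since decimal.MaxValue is roughly 7.9E+28 and double.MaxValue is roughly 1.8E+308, almost the entire double range lies outside what decimal can represent.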

It'sNotALie.