31

I don't understand the casting rules when it comes to decimal and double.

It is legal to do this:

decimal dec = 10;
double doub = (double) dec;

What confuses me, however, is that decimal is a 16-byte datatype and double is only 8 bytes, so isn't casting a double to a decimal a widening conversion that should therefore be allowed implicitly, with the example above disallowed?

double doub = 3.2;
decimal dec = doub; // CS0029: Cannot implicitly convert type 'double' to 'decimal'
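
For what it's worth, a minimal sketch checking the sizes and which direction compiles (sizeof is allowed on the built-in types without an unsafe context; variable names are illustrative):

using System;

Console.WriteLine(sizeof(decimal)); // 16 bytes
Console.WriteLine(sizeof(double));  // 8 bytes

decimal dec2 = 10m;
double doub2 = (double)dec2;    // explicit cast: compiles

double d = 3.2;
// decimal bad = d;             // CS0029: no implicit double -> decimal
decimal ok = (decimal)d;        // explicit cast: compiles
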
Olivier Jacot-Descombes
Maxim Gershkovich
  • I'd say it's rather *Why can't decimal be implicitly cast to double* – Dyppl Oct 19 '11 at 07:32
  • OK, so with a bit more testing it appears I've gotten a bit confused, but my fundamental question stands... – Maxim Gershkovich Oct 19 '11 at 07:34
  • 4
    FYI this question was the subject of my blog in July 2013. http://ericlippert.com/2013/07/18/why-not-allow-doubledecimal-implicit-conversions/ Thanks for the great question! – Eric Lippert Jul 25 '13 at 16:26

3 Answers

38

If you convert from double to decimal, you can lose information - the number may be completely out of range, as the range of a double is much larger than the range of a decimal.
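
For example (a sketch; double's range runs to about ±1.8×10^308, decimal's only to about ±7.9×10^28):

using System;

double huge = 1.0e30;              // comfortably within double's range
try
{
    decimal dec = (decimal)huge;   // the explicit cast compiles...
}
catch (OverflowException)
{
    Console.WriteLine("1e30 is outside decimal's range");  // ...but overflows at runtime
}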

If you convert from decimal to double, you can lose information - for example, 0.1 is exactly representable in decimal but not in double, and decimal actually uses a lot more bits for precision than double does.
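
One way to see the 0.1 difference (a sketch; the double literal 0.1 is a binary approximation, while 0.1m is exact):

using System;

double dSum = 0.0;
for (int i = 0; i < 10; i++) dSum += 0.1;   // accumulates the binary rounding error
Console.WriteLine(dSum == 1.0);             // False: dSum is 0.9999999999999999

decimal mSum = 0m;
for (int i = 0; i < 10; i++) mSum += 0.1m;  // 0.1m is stored exactly
Console.WriteLine(mSum == 1.0m);            // True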

Implicit conversions shouldn't lose information (the conversion from long to double might, but that's a different argument). If you're going to lose information, you should have to tell the compiler that you're aware of that, via an explicit cast.
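
That long-to-double conversion really is implicit, and really can lose precision; a sketch (the constant is chosen to be just beyond double's 53-bit significand):

using System;

long big = 10000000000000001;   // 17 digits: more than a double can hold exactly
double d = big;                 // implicit conversion: no cast needed
Console.WriteLine((long)d);     // 10000000000000000: the final 1 is gone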

That's why there aren't implicit conversions either way.

Jon Skeet
  • 6
    You are of course correct, but I'll take this opportunity to point out that there *are*, unfortunately, a few built-in implicit conversions that lose information -- long to double, for example. None of the built-in implicit conversions lose *magnitude*, but some of them lose *precision*. We could have made the magnitude-preserving-but-precision-losing conversion from decimal to double also implicit, but chose not to. – Eric Lippert Oct 19 '11 at 15:47
  • 9
    The reasoning here is not just that you can lose *information*; it is that the conversion is *fundamentally a goofy thing to do*. A double is intended to represent something like an imprecise physical quantity, like a scientific measurement. A decimal is intended to represent an exact quantity, like a stock price or a mortgage balance. If you are converting one to the other -- say, you are converting stock prices to double in order to use them in with a statistical analysis library written to take doubles -- then you should be clear that you intend the precision-losing conversion. – Eric Lippert Oct 19 '11 at 15:50
3

Decimal is more precise than double, so converting a decimal to a double can lose information. That's why you can only do it explicitly: the cast is there to protect you from losing information unintentionally. See MSDN:

http://msdn.microsoft.com/en-us/library/678hzkk9%28v=VS.100%29.aspx

http://msdn.microsoft.com/en-us/library/364x0z75.aspx
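
A small sketch of what the explicit cast signs you up for (the value is illustrative; decimal keeps 28-29 significant digits, double only about 15-17):

using System;

decimal exact = 1.0000000000000000000000000001m; // 28 decimal places: fine for decimal
double lossy = (double)exact;                    // the compiler insists on this cast
Console.WriteLine(lossy);                        // prints 1: the tail is lost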

Pieter
1

You can explicitly cast in both directions: from double to decimal and from decimal to double.

You can't implicitly convert in either direction for a very good reason: the conversion may not be lossless.

For example, the decimal number 1234567890123456789 cannot be exactly represented as a double. Likewise, the double value 10^32 cannot be represented as a decimal at all, because it exceeds decimal's range.
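
Both examples as a sketch (the printed digits are approximate and depend on the runtime's formatting):

using System;

decimal dec = 1234567890123456789m;
double d = (double)dec;        // the nearest double can be off by as much as 128 here
Console.WriteLine(d);          // roughly 1.2345678901234568E+18

try
{
    decimal m = (decimal)1e32; // 10^32 exceeds decimal's maximum (~7.9e28)
}
catch (OverflowException)
{
    Console.WriteLine("10^32 does not fit in a decimal");
}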

To avoid losing information unintentionally, the implicit conversion is disallowed.

Jeffrey Sax