35

The code

public class MyClass
{
    public const Decimal CONSTANT = 0.50; // ERROR CS0664
}

produces this error:

error CS0664: Literal of type double cannot be implicitly converted to type 'decimal'; use an 'M' suffix to create a literal of this type

as documented. But this works:

public class MyClass
{
    public const Decimal CONSTANT = 50; // OK
}

Why did they forbid the first one? It seems weird to me.

onof
  • The type of a real literal is determined by its suffix as follows: the literal without a suffix or with the d or D suffix is of type double; the literal with the f or F suffix is of type float; the literal with the m or M suffix is of type decimal. https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types – Yves Rochon Mar 18 '20 at 21:42

6 Answers

48

The type of a real literal without the m suffix is double - it's as simple as that. You can't initialize a float that way either:

float x = 10.0; // Fail

The type of the literal should be clear from the literal itself, and the type of the variable it's assigned to should be implicitly convertible from the type of that literal. So your second example works because there's an implicit conversion from int (the type of the literal) to decimal. There's no implicit conversion from double to decimal (as it can lose information).
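
To make that concrete, here is a minimal sketch of the versions that do compile (the class and member names are reused from the question; the FROM_INT constant, the Example method, and the explicit cast are my own illustrative additions):

public class MyClass
{
    public const decimal CONSTANT = 0.50m;   // the m suffix makes the literal a decimal
    public const decimal FROM_INT = 50;      // int literal; implicit int -> decimal conversion

    public void Example()
    {
        decimal fromDouble = (decimal)0.50;  // an explicit cast from a double literal also compiles
        float f = 10.0f;                     // the f suffix fixes the float case the same way
    }
}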

Personally I'd have preferred it if there'd been no default or if the default had been decimal, but that's a different matter...

Jon Skeet
  • +1 for preferring no default. Doubles and decimals are largely useless to game developers, so declaring a default precision for literals, a la GLSL in OpenGL ES 2.0, could eliminate all the annoying F's everywhere. –  May 13 '12 at 20:48
  • @Jessy: If .net hadn't followed Java's lead in requiring silly typecasts from `double`-to-`float` but not vice versa (despite the fact that conversions from more-specific to less-specific types are supposed to be widening) would there be any problems with having numeric literals default to `double`? It shouldn't take much for a compiler to recognize when a literal is used for no purpose other than to assign a `float`, and convert it at compile time. – supercat Jul 20 '12 at 19:58
  • If this cat reaches 1 million points, will he receive a crown and a trident, be knighted, receive a lifetime supply of biscuits, or...??? – B. Clay Shannon-B. Crow Raven Jul 01 '14 at 20:22
  • While it is true that the conversion from `double` to `decimal` can lose information, in many cases it loses only precision because the value is rounded to 15 significant figures. Although in some cases, like `3.14159e-26`, more precision is lost. However, the conversion can also easily overflow (if the operand is `NaN` or numerically exceeds approx. `8e28`) which is another good reason for not making the conversion implicit (analogous to why narrowing integer conversions are not implicit). – Jeppe Stig Nielsen Jun 16 '16 at 11:58
11

The first example is a double literal. The second example is an integer literal.

It's not possible to convert a double to a decimal without potentially losing information, but it is fine with an integer, so the implicit conversion is only allowed for integral types.
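
A quick way to see the two literal types at run time (an illustrative snippet; the class name LiteralTypes is my own, not from the answer):

using System;

class LiteralTypes
{
    static void Main()
    {
        Console.WriteLine((0.50).GetType());   // System.Double  - real literal without a suffix
        Console.WriteLine((50).GetType());     // System.Int32   - integer literal without a suffix
        Console.WriteLine((0.50m).GetType());  // System.Decimal - the m suffix changes the type
    }
}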

James Johnston
5

Every literal has a type. If you do not use the 'M' suffix, a real literal is treated as a double. That you cannot implicitly convert a double to a decimal is quite understandable, as the conversion can lose information.
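
As an illustration of that information loss (my own sketch; it relies on the documented behaviour that the explicit double-to-decimal conversion keeps at most 15 significant digits):

using System;

class PrecisionLoss
{
    static void Main()
    {
        double d = 1.0 / 3.0;
        // decimal m = d;                    // CS0029: no implicit double -> decimal conversion
        decimal m = (decimal)d;              // allowed, but rounded to at most 15 significant digits

        Console.WriteLine(d.ToString("G17")); // 0.33333333333333331
        Console.WriteLine(m);                 // 0.333333333333333
    }
}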

Oskar Kjellin
5

Your answer is a bit lower down in the same link you provided, and it is also covered in the links below. Under Conversions:

The integral types are implicitly converted to decimal and the result evaluates to decimal. Therefore you can initialize a decimal variable using an integer literal, without the suffix.

So the reason is the implicit conversion from int to decimal. Since 0.50 is treated as a double, and there is no implicit conversion from double to decimal, you get the error.
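
In other words (an illustrative sketch of the conversion rules; the class name and variable names are mine):

class ImplicitConversions
{
    static void Demo()
    {
        int i = 50;
        long l = 50L;

        decimal fromInt = i;        // OK: implicit int -> decimal
        decimal fromLong = l;       // OK: implicit long -> decimal
        decimal fromLiteral = 50;   // OK: the integer literal gets the same implicit conversion
        // decimal broken = 0.50;   // CS0664: 0.50 is a double literal; double -> decimal is not implicit
    }
}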

For more details:

http://msdn.microsoft.com/en-us/library/y5b434w4(v=vs.80).aspx

http://msdn.microsoft.com/en-us/library/yht2cx7b.aspx

AJC
  • The links are (effectively) broken: *"Visual Studio 2005 Retired documentation"* and a redirect to a generic page, respectively. – Peter Mortensen Aug 02 '23 at 10:48
4

It’s a design choice that the creators of C# made.

It likely stems from the fact that a double can lose precision, and they didn't want you to silently store that loss. int doesn't have that problem.
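
A small sketch of what that precision concern looks like in practice (my own example, not the answerer's):

using System;

class WhyNotImplicit
{
    static void Main()
    {
        // The double literal 0.1 cannot be stored exactly in binary floating point...
        Console.WriteLine((0.1).ToString("G17"));   // 0.10000000000000001
        // ...whereas every int value converts to decimal exactly.
        Console.WriteLine((decimal)123456789);      // 123456789
    }
}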

Daniel A. White
3

From Floating-point numeric types:

There isn't any implicit conversion between floating-point types and the decimal type; therefore, a cast must be used to convert between these two types.

They do this because double has such a huge range, ±5.0 × 10^-324 to ±1.7 × 10^308, whereas int is only -2,147,483,648 to 2,147,483,647. A decimal's range is (-7.9 × 10^28 to 7.9 × 10^28) / (10^0 to 10^28), so it can hold any int but not every double.
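
To see the range issue, a hedged sketch (my own example; it relies on the documented behaviour that the explicit conversion throws OverflowException when the double does not fit in a decimal):

using System;

class RangeDemo
{
    static void Main()
    {
        double big = 1e300;                  // fine for a double, far outside decimal's range

        try
        {
            decimal d = (decimal)big;        // the explicit cast exists...
            Console.WriteLine(d);
        }
        catch (OverflowException)
        {
            Console.WriteLine("1e300 does not fit in a decimal");   // ...but it throws here
        }

        decimal fromInt = int.MaxValue;      // every int value fits in a decimal
        Console.WriteLine(fromInt);          // 2147483647
    }
}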

Paul