decimal l = 50.0M;

I have seen other answers suggesting that the M suffix explicitly marks the literal as decimal: What does the M stand for in C# Decimal literal notation?

However, when the type of the variable is explicitly stated, why should there be a suffix? I can see the relevance of the suffix when the type of the variable isn't specified, like:

var l = 50.0M;
Social Developer

3 Answers


when the type of the variable is explicitly stated, why should there be a suffix?

The suffix (or a cast) is needed only when the value you are assigning has a decimal point: 50.0 on its own is a literal of type double. You can avoid the suffix by adding a cast, like this:

decimal l = (decimal)50.0; // Do not do this!

but this may silently lose precision, because the literal is first rounded to a double:

decimal d = (decimal)1.23456789123456789;
Console.WriteLine(d); // Prints 1.23456789123457
decimal e = 1.23456789123456789M;
Console.WriteLine(e); // Prints 1.23456789123456789

Note that the following will compile without a suffix or a cast, because int to decimal conversion never loses precision:

decimal l = 50;

Another place where you may want the M suffix is in expressions that mix decimals with other numeric literals:

decimal tenPercentBroken  = myDecimal * 0.1;  // Does not compile
decimal tenPercentCorrect = myDecimal * 0.1M; // Compiles fine
Sergey Kalinichenko

50.0 in C# is a literal double, so without the M suffix, you are trying to implicitly convert a double to a decimal (an implicit conversion that does not exist).

Using decimal l = 50.0M; says: assign this decimal to that decimal variable.
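The missing implicit conversion can be checked directly. A minimal sketch (assuming a plain console program) showing the three cases discussed in this thread:

```csharp
class Program
{
    static void Main()
    {
        // decimal a = 50.0;  // CS0664: a double literal cannot be implicitly
        //                    // converted to decimal; use the 'M' suffix

        decimal b = 50.0M;    // decimal literal: no conversion needed
        decimal c = 50;       // int literal: implicit int-to-decimal conversion exists

        System.Console.WriteLine(b); // decimal keeps its scale, so this prints 50.0
        System.Console.WriteLine(c); // prints 50
    }
}
```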

Patrick Hofman

I believe the reason is that the right-hand side is evaluated first in assignment statements.

The value on the right-hand side is assigned to the variable on the left, so the compiler types the right-hand side first; 50.0 is a double before the assignment is even considered. Only then does it look at the left-hand side, and if the types differ it needs a conversion, which for double to decimal does not exist implicitly.
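One way to see that the literal gets its type independently of the assignment target is to let the compiler infer it. A small sketch (assuming a console program):

```csharp
class Program
{
    static void Main()
    {
        var d = 50.0;   // the literal is typed on its own: double
        var m = 50.0M;  // with the suffix, the literal itself is decimal

        System.Console.WriteLine(d.GetType()); // System.Double
        System.Console.WriteLine(m.GetType()); // System.Decimal
    }
}
```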