Is it necessary to add the M suffix for zero values in assignments and comparisons of a decimal variable?
decimal val;
...
if (val == 0M)
{
}
or
if (val == 0)
{
}
I guess the constant will be converted at compile time and the result will be identical.
It is not necessary. Integer types are implicitly converted to decimal. You have to add the M suffix only when the literal represents a floating-point number: a floating-point literal without a type suffix is a double, and double requires an explicit cast to decimal.
decimal d = 1;     // works: int is implicitly converted to decimal
decimal d2 = 1.0;  // does not compile: 1.0 is a double
decimal d3 = 1.0M; // works: the M suffix makes the literal a decimal
The literal 0 in your example is simply an integer literal, so it falls under the first case.
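To confirm this for the comparison case from the question, here is a minimal, self-contained sketch (the variable name is illustrative):

```csharp
using System;

class Program
{
    static void Main()
    {
        decimal val = 0m;

        // Both comparisons compile and behave identically:
        // the int literal 0 is implicitly converted to decimal,
        // and the conversion happens at compile time.
        Console.WriteLine(val == 0);  // True
        Console.WriteLine(val == 0M); // True
    }
}
```

Either form is fine; using `0M` is purely a matter of style for comparisons against zero.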