
I was recently attempting to answer a question that a user posted about why the decimal struct does not declare its Min/Max values as const like every other numeric primitive; instead, the Microsoft documentation states that they are static readonly.

In researching that, I dug through the Microsoft source code and made an interesting discovery: the source (.NET 4.5) makes it look like a const, which contradicts what the documentation clearly states (source and relevant struct constructor pasted below).

public const Decimal MinValue = new Decimal(-1, -1, -1, true, (byte) 0);
public const Decimal MaxValue = new Decimal(-1, -1, -1, false, (byte) 0);

public Decimal(int lo, int mid, int hi, bool isNegative, byte scale)
{
  if ((int) scale > 28)
    throw new ArgumentOutOfRangeException("scale", Environment.GetResourceString("ArgumentOutOfRange_DecimalScale"));
  this.lo = lo;
  this.mid = mid;
  this.hi = hi;
  this.flags = (int) scale << 16;
  if (!isNegative)
    return;
  this.flags |= int.MinValue;
}

The thread continues to unravel, because I can't see how this would compile legally under the rules of C#: while the value is technically a constant, the compiler doesn't treat it as one and gives you the error "The expression being assigned to ... must be constant". I believe this is the reason the docs call it a static readonly.
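
A minimal sketch of what I mean - this is not the framework source, just what happens if you try the same kind of declaration in an ordinary project:

public struct MyDecimal
{
  // error CS0133: The expression being assigned to 'MyDecimal.MinValue' must be constant
  public const decimal MinValue = new decimal(-1, -1, -1, true, 0);

  // whereas a literal initializer compiles fine
  public const decimal Zero = 0m;
}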

Now, this raises a question: is this file from the Microsoft source server actually the source for decimal, or has it been doctored? Am I missing something?

theMayer
  • I'm not certain of the details, but I know that the core .NET libraries are compiled in a non-standard way. One example of this is that circular references are allowed across assemblies (assembly 1 references a type in assembly 2, which in turn references a type in assembly 1). This would lead me to believe the compiler used for the core libraries isn't the stock one released to the public. That said, I can't find the article discussing the topic. – Basic Feb 01 '14 at 19:39

1 Answer


There are a few aspects of mscorlib and the like which wouldn't compile as-written, without some interesting hacks. In particular, there are some cyclic dependencies. This is another case, but I think it's reasonable to consider MaxValue and MinValue as being const as far as the C# compiler is concerned.

In particular, it's valid to use them within other const calculations:

const decimal Sum = decimal.MaxValue + decimal.MinValue;

The fields have the DecimalConstantAttribute applied to them, which is effectively a hack to get around an impedance mismatch between C# and the CLR: you can't have a constant field of type decimal in the CLR in the same way that you can have a constant field of type int or string, with an IL declaration using static literal ....

(This is also why you can't use decimal values in attribute constructors - there, the "const-ness" requirement is true IL-level constness.)
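
For example, with a hypothetical attribute (the PriceAttribute/Product names are mine, purely for illustration):

using System;

public class PriceAttribute : Attribute
{
  public PriceAttribute(double value) { }
}

[Price(9.99)]       // fine: a double literal is a genuine IL-level constant
// [Price(9.99m)]   // no overload could make this legal; a decimal value can't be
//                  // encoded as an attribute argument at all
public class Product { }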

Instead, any const decimal declaration in C# code is compiled to a static initonly field with DecimalConstantAttribute applied to it specifying the appropriate data. The C# compiler uses that information to treat such a field as a constant expression elsewhere.
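
You can see the shape of this with a bit of reflection; the following is just a sketch (ConstHolder and its field names are mine), but the commented output is what I'd expect from current compilers:

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

public class ConstHolder
{
  public const decimal D = 1.5m;  // emitted as a static initonly field + [DecimalConstant]
  public const int I = 42;        // emitted as a true IL literal
}

public class Program
{
  public static void Main()
  {
    FieldInfo d = typeof(ConstHolder).GetField("D");
    FieldInfo i = typeof(ConstHolder).GetField("I");

    Console.WriteLine(i.IsLiteral);   // True  - the int really is an IL constant
    Console.WriteLine(d.IsLiteral);   // False - the decimal is not
    Console.WriteLine(d.IsInitOnly);  // True  - it's a static initonly field instead

    // The constant value itself travels in the attribute metadata.
    var attr = d.GetCustomAttribute<DecimalConstantAttribute>();
    Console.WriteLine(attr.Value);    // 1.5
  }
}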

Basically, decimal in the CLR isn't a "known primitive" type in the way that int, float etc are. There are no decimal-specific IL instructions.
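
For example (a quick sketch; nothing here is specific to mscorlib):

using System;

public class Program
{
  public static void Main()
  {
    Console.WriteLine(typeof(int).IsPrimitive);     // True
    Console.WriteLine(typeof(double).IsPrimitive);  // True
    Console.WriteLine(typeof(decimal).IsPrimitive); // False

    // int addition uses the dedicated 'add' IL instruction;
    // decimal addition compiles to a call to Decimal.op_Addition.
    decimal a = 1.1m, b = 2.2m;
    Console.WriteLine(a + b);                       // 3.3
  }
}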

Now, in terms of the specific C# code you're referring to, I suspect there are two possibilities:

  • No, this isn't the exact source code used.
  • The C# compiler used to compile mscorlib and other core aspects of the framework may have special flags applied to allow such code, converting it directly to DecimalConstantAttribute.

To a large extent you can ignore this - it won't affect you. It's a shame that MSDN documents the fields as being static readonly rather than const though, as that gives the mistaken impression that one can't use them in const expressions :(

Jon Skeet
  • This certainly seems to be along the lines of what I was thinking, but you clearly know a lot more about the "guts" of .NET. I've heard that decimal is less primitive, but what makes it so? Is it due to implementation? – theMayer Feb 01 '14 at 20:14
  • @rmayer06: It's not a primitive in terms of the CLR - for example, `typeof(decimal).IsPrimitive` returns false. The CLR doesn't have any special knowledge of `decimal` as far as I'm aware. – Jon Skeet Feb 01 '14 at 20:20
  • "It's a shame that MSDN documents the fields as being `static readonly` ": Well much of the documentation is based upon the actual declarations as can be seen by Reflector or other decompilers. Just like `String` [not being seen as `sealed` sometimes](http://stackoverflow.com/a/6936430/256431) in the Object Browser, I think this is just another [quirk](http://stackoverflow.com/a/13530321/256431). – Mark Hurd Feb 02 '14 at 11:46