21

Why is decimal not a primitive type?

Console.WriteLine(typeof(decimal).IsPrimitive);

outputs false.

It is a base type, it's part of the specification of the language, but it is not a primitive. What primitive type(s) represent a decimal in the framework? An int, for example, has a field m_value of type int. A double has a field m_value of type double. That's not the case for decimal. It seems to be represented by a bunch of ints, but I'm not sure.
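A quick check with decimal.GetBits is consistent with that guess (whether those four ints map one-to-one onto the internal fields is my assumption, I haven't verified it):

    int[] parts = decimal.GetBits(123.45m);   // always returns four 32-bit ints
    Console.WriteLine(parts.Length);          // 4
    // parts[0..2] hold the 96-bit integer value, parts[3] packs the sign and the scale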

Why does it look like a primitive type and behave like a primitive type (except in a couple of cases), yet not be a primitive type?

Guillaume

3 Answers

23

Although not a direct answer, the documentation for IsPrimitive lists what the primitive types are:

http://msdn.microsoft.com/en-us/library/system.type.isprimitive.aspx

A similar question was asked here:

http://bytes.com/topic/c-sharp/answers/233001-typeof-decimal-isprimitive-false-bug-feature

Answer quoted from Jon Skeet:

The CLR doesn't need to have any intrinsic knowledge about the decimal type - it treats it just as another value type which happens to have overloaded operators. There are no IL instructions to operate directly on decimals, for instance.

To me, it seems as though decimal is a type that must exist for a language/runtime wanting to be CLS/CLI-compliant (and is hence termed "primitive" because it is a base type with keyword support), but the actual implementation does not require it to be truly "primitive" (as in the CLR doesn't think it is a primitive data type).
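One rough way to see this from code (just a sketch; it relies on the operator methods following the standard CLS naming, op_Addition): decimal's arithmetic shows up as ordinary overloaded operator methods on System.Decimal, while int has no such methods because the IL add instruction covers it directly.

    // decimal '+' is an ordinary overloaded operator method on System.Decimal...
    Console.WriteLine(typeof(decimal).GetMethod("op_Addition",
        new[] { typeof(decimal), typeof(decimal) }) != null);    // True
    // ...whereas int has no op_Addition; the IL 'add' instruction handles it.
    Console.WriteLine(typeof(int).GetMethod("op_Addition",
        new[] { typeof(int), typeof(int) }) != null);            // False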

Adam Houldsworth
  • Yes I read this link. The first link I gave - although for VB - lists `decimal` as a primitive. Trying my snippet in VB also returned `false`. Maybe there's an error in the specs for VB (or they're out of date...)! – Guillaume Nov 20 '12 at 11:15
  • 2
    @Guillaume Yes I saw that link, I think it might hinge on where the definition of "primitive" lies. In the CLR, it is not a primitive type, however it appears to be described as one if you consider just the specification (of which the CLR is an implementation). – Adam Houldsworth Nov 20 '12 at 11:17
  • I think the confusion _may_ lie in the terminology; as in, framework primitive types being one thing, and 'built-in' types of a .NET supported language being another but considered 'primitive' in a different sense in the context of that language? – Grant Thomas Nov 20 '12 at 11:22
  • @GrantThomas I agree, this is also my best guess and I hopefully convey this in the answer. – Adam Houldsworth Nov 20 '12 at 11:25
  • I quite like your second link. The answer from Nicholas Paladino also makes sense. – Guillaume Nov 20 '12 at 13:24
  • 1
    A more interesting case is `string`, which should IMHO be regarded as a primitive type since it and `Array` are the only two variable-size types, and its storage is different from that of `Array`. – supercat Nov 20 '12 at 20:13
  • Also, because the CLR does not consider `decimal` a primitive type, there are no IL instructions for manipulating `decimal` values, so the `checked`/`unchecked` operators/statements/compiler switches have no effect on it (see the sketch below). – SᴇM Oct 24 '18 at 08:30
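As a quick illustration of that last comment (just a sketch, assuming a plain console program): `unchecked` changes the behaviour of int overflow, but decimal overflow throws regardless, because Decimal's operator methods do their own checking.

    int i = int.MaxValue;
    unchecked { i = i + 1; }        // wraps silently to int.MinValue
    Console.WriteLine(i);           // -2147483648

    decimal d = decimal.MaxValue;
    try
    {
        unchecked { d = d + 1; }    // still throws: Decimal's operator method checks overflow itself
    }
    catch (OverflowException)
    {
        Console.WriteLine("decimal overflow throws even inside unchecked");
    }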
13

Decimal is a 128-bit data type, which cannot be represented natively in computer hardware. For example, a 64-bit computer architecture generally has integer and addressing registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses.

Wikipedia says that

Depending on the language and its implementation, primitive data types may or may not have a one-to-one correspondence with objects in the computer's memory. However, one usually expects operations on basic primitive data types to be the fastest language constructs there are.

In the case of decimal, it is just a composite data type that uses integers internally, so its performance is slower than that of data types that correspond directly to computer memory (ints, doubles, etc.).
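A quick way to see the size difference (sizeof works on these predefined types without unsafe code):

    Console.WriteLine(sizeof(int));      // 4 bytes  (32 bits)
    Console.WriteLine(sizeof(double));   // 8 bytes  (64 bits)  - fits in a 64-bit register
    Console.WriteLine(sizeof(decimal));  // 16 bytes (128 bits) - wider than any general-purpose register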

Valentin V
  • I didn't think about the impact of this data structure on the architecture... I'll try to find more info about this. – Guillaume Nov 20 '12 at 13:34
  • 4
    Although I understand the idea behind the answer, can this be proven on types such as `double` and `long` that are 63-bits wide on a 32-bit system? My point being that the CLR may not take this definition of primitive into account. – Adam Houldsworth Nov 20 '12 at 13:41
  • 1
    So essentially decimals could also become primitive once we have 128-bit computers a decade from now. – RBT Oct 14 '16 at 02:46
5

Consider the example below:

    int i = 5;
    float f = 1.3f;
    decimal d = 10;

If you attach a debugger and look at the native instructions generated for these assignments, you will see the following:

(screenshot of the disassembly window showing the native instructions for each assignment)

As you can see, int and float, being primitive types, each take a single instruction to perform the assignment, whereas decimal and string, being non-primitive types, take more than one native instruction for the same operation.
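A rough way to see the cumulative effect of those extra instructions is a micro-benchmark (just a sketch; the absolute numbers depend on machine, runtime and JIT, but the decimal loop is typically far slower than the double loop):

    var sw = System.Diagnostics.Stopwatch.StartNew();
    double dbl = 0;
    for (int n = 0; n < 10_000_000; n++) dbl += 1.3;     // maps to hardware floating-point instructions
    Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

    sw.Restart();
    decimal dec = 0;
    for (int n = 0; n < 10_000_000; n++) dec += 1.3m;    // each '+=' is a call to Decimal's operator method
    Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");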

Hameed Syed