
As described in the official docs, the 128 bits of System.Decimal are laid out like this:

The return value is a four-element array of 32-bit signed integers.

The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.

The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:

Bits 0 to 15, the lower word, are unused and must be zero.

Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.

Bits 24 to 30 are unused and must be zero.

Bit 31 contains the sign: 0 means positive, and 1 means negative.
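
To make that layout concrete, here is a small C# sketch (just an illustration; the value and variable names are mine) that pulls those fields back out of the array returned by Decimal.GetBits:

```csharp
using System;

class GetBitsDemo
{
    static void Main()
    {
        decimal value = -123.456m;

        // Decimal.GetBits returns the four 32-bit words described above.
        int[] bits = decimal.GetBits(value);

        uint lo  = (uint)bits[0];  // low 32 bits of the 96-bit integer
        uint mid = (uint)bits[1];  // middle 32 bits
        uint hi  = (uint)bits[2];  // high 32 bits

        int  scale      = (bits[3] >> 16) & 0xFF;        // bits 16..23: power of ten
        bool isNegative = (bits[3] & int.MinValue) != 0; // bit 31: sign

        Console.WriteLine($"lo={lo} mid={mid} hi={hi} scale={scale} negative={isNegative}");
        // Prints: lo=123456 mid=0 hi=0 scale=3 negative=True
    }
}
```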

With that in mind, one can see that some bits are "wasted" or unused.

Why not, for example, 120 bits of integer, 7 bits of exponent, and 1 bit of sign?

There is probably a good reason for decimal being laid out the way it is; this question asks for the reasoning behind that decision.

Tom
  • In addition to the efficiencies Olivier shared, aligning on 16-bit boundaries can help with various instruction optimizations: https://software.intel.com/sites/default/files/managed/9e/bc/64-ia-32-architectures-optimization-manual.pdf 32-bit boundaries have similar benefits: https://stackoverflow.com/questions/1237963/alignment-along-4-byte-boundaries So keeping this on a 32-bit boundary means that these optimizations are not ruled out for Decimal types. – Jamie F Jul 14 '20 at 16:40
  • For what it's worth, the decimal type seems to predate .net. The .net framework CLR delegates the computations to the oleaut32 lib, and I could find traces of the DECIMAL type as far back as Windows 95 – Kevin Gosse Jul 14 '20 at 16:50

2 Answers


Based on Kevin Gosse's comment:

For what it's worth, the decimal type seems to predate .net. The .net framework CLR delegates the computations to the oleaut32 lib, and I could find traces of the DECIMAL type as far back as Windows 95

I searched further and found a likely user of the DECIMAL code in oleaut32 on Windows 95.

The old Visual Basic (non-.NET) and VBA have a sort-of-dynamic type called 'Variant'. In there (and only in there) you could store something nearly identical to our current System.Decimal.

A Variant is always 128 bits, with the first 16 bits reserved for an enum value that indicates which data type is inside the Variant.

The split of the remaining 112 bits could be based on common CPU architectures of the early '90s, or on ease of use for the Windows programmer. It sounds sensible not to pack exponent and sign into one byte just to gain one more byte for the integer.
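
For illustration, here is a rough C# mirror of how that native 128-bit layout fits together; the struct and field names below are my approximation of the OLE Automation DECIMAL, not copied from a header:

```csharp
using System.Runtime.InteropServices;

// Approximate sketch of the 128-bit OLE Automation DECIMAL payload.
// The first 16 bits overlap the Variant's type tag, which lines up with
// the "unused and must be zero" bits 0 to 15 described in the docs.
[StructLayout(LayoutKind.Sequential)]
struct OleDecimalSketch
{
    public ushort Reserved; // overlaps the 16-bit Variant type tag (VT_DECIMAL)
    public byte   Scale;    // power of ten, 0..28
    public byte   Sign;     // 0x80 = negative, 0 = positive
    public uint   Hi32;     // high 32 bits of the 96-bit integer
    public uint   Lo32;     // low 32 bits
    public uint   Mid32;    // middle 32 bits
}                           // total: 16 bytes = 128 bits
```

Strip off the 16-bit tag and you are left with exactly the 8 + 8 + 96 = 112 bits described above.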

When .NET was built, the existing (low-level) code for this type and its operations was reused for System.Decimal.

None of this is 100% verified, and I would have liked the answer to contain more historical evidence, but that's what I could piece together.

Tom

Here is the C# source of Decimal. Note the FCallAddSub-style methods; these call out to (unavailable) fast C++ implementations of those methods.

I suspect the implementation is like this because it means that operations on the 'numbers' in the first 96 bits can be simple and fast, since CPUs operate on 32-bit words. If 120 bits were used, CPU operations would be slower and trickier, requiring a lot of bitmasking to pull out the extra 24 bits, which would then be awkward to work with. It would also 'pollute' the highest 32 bits (the flags word) and rule out certain optimizations.
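
As a toy illustration (this is not the actual oleaut32 or CLR code), adding two 96-bit magnitudes stored as three clean 32-bit words needs nothing more than three 32-bit additions with carry propagation, and never has to mask anything out of the sign/scale word:

```csharp
static class Magnitude96
{
    // Toy sketch: add two 96-bit magnitudes held as (lo, mid, hi) words.
    public static (uint lo, uint mid, uint hi, bool carryOut) Add(
        uint aLo, uint aMid, uint aHi,
        uint bLo, uint bMid, uint bHi)
    {
        ulong sum = (ulong)aLo + bLo;           // low word + low word
        uint lo = (uint)sum;

        sum = (ulong)aMid + bMid + (sum >> 32); // propagate the carry
        uint mid = (uint)sum;

        sum = (ulong)aHi + bHi + (sum >> 32);   // propagate again
        uint hi = (uint)sum;

        return (lo, mid, hi, (sum >> 32) != 0); // final carry = 96-bit overflow
    }
}
```

With a 120-bit layout, the top word would hold 24 value bits plus the exponent and sign, so every such step would need extra masking and shifting before and after the arithmetic.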

If you look at the code, you can see that this simple bit layout is useful everywhere. It is no doubt especially useful in the underlying C++ (and probably assembler).

Jason Crease
  • [Implementation of FCallAddSub](https://stackoverflow.com/questions/27813817/implementation-of-fcalladdsub) –  Jul 14 '20 at 20:07