As described in the official docs, the 128 bits of System.Decimal are laid out like this:
The return value is a four-element array of 32-bit signed integers.
The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.
The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:
Bits 0 to 15, the lower word, are unused and must be zero.
Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.
Bits 24 to 30 are unused and must be zero.
Bit 31 contains the sign: 0 means positive, and 1 means negative.
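For concreteness, here is a small C# sketch that decodes the four elements returned by Decimal.GetBits along the lines of the description above; the sample value 123.45m is just an arbitrary choice that makes the scale and sign easy to see.

```csharp
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        // 123.45 is stored as the 96-bit integer 12345 with a scale of 2 (divide by 10^2).
        decimal value = 123.45m;
        int[] bits = decimal.GetBits(value);

        // Elements 0..2 hold the low, middle, and high 32 bits of the 96-bit integer.
        uint lo  = (uint)bits[0];
        uint mid = (uint)bits[1];
        uint hi  = (uint)bits[2];

        // Element 3 holds the scale factor in bits 16-23 and the sign in bit 31.
        int flags = bits[3];
        int scale = (flags >> 16) & 0xFF;   // power of 10 to divide the integer by
        bool isNegative = flags < 0;        // bit 31 set makes the int negative

        Console.WriteLine($"lo={lo}, mid={mid}, hi={hi}");
        Console.WriteLine($"scale={scale}, negative={isNegative}");
        // For 123.45m this prints: lo=12345, mid=0, hi=0, scale=2, negative=False.
    }
}
```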
With that in mind, one can see that some bits are "wasted", i.e. unused.
Why not, for example, 120 bits of integer, 7 bits of exponent, and 1 bit of sign?
There is probably a good reason for a decimal being laid out the way it is; I would like to know the reasoning behind that decision.