This is from the Numeric Types section of the book C# 9.0 in a Nutshell, which shows the numeric types in C#. I want to know why decimal, in comparison with double, uses more space to store a narrower range of numbers in C#.
A decimal has about 28-29 digits of precision, while a double has about 15-17 digits of precision. Storing that extra precision is why a decimal needs 16 bytes and a double needs 8 bytes.
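You can see both sizes directly with the sizeof operator, which is a compile-time constant for the built-in numeric types (no unsafe context needed):

```csharp
using System;

class Sizes
{
    static void Main()
    {
        // sizeof yields a constant for the built-in numeric types.
        Console.WriteLine(sizeof(double));   // 8
        Console.WriteLine(sizeof(decimal));  // 16
    }
}
```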
The decimal type has higher precision but a smaller range of exponents than double. It's useful when you need results accurate to more than 16 significant digits (the effective precision limit of the double type) while the magnitude of the values stays close to or above ±1.
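A quick illustration of that trade-off: add a tiny fraction to 1 and double discards it past its ~16 significant digits, while decimal keeps it.

```csharp
using System;

class PrecisionNearOne
{
    static void Main()
    {
        // double has ~15-17 significant digits, so the 21st digit is simply lost.
        double d = 1.0 + 1e-20;
        Console.WriteLine(d == 1.0);   // True: the small term vanished

        // decimal keeps up to 28-29 significant digits, so it survives here.
        decimal m = 1m + 1e-20m;
        Console.WriteLine(m);          // 1.00000000000000000001
        Console.WriteLine(m == 1m);    // False
    }
}
```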
The .NET Decimal type consists of a 96-bit unsigned integer value (the significand, or mantissa), a sign bit, and an 8-bit scale field (called the exponent, although it isn't a true binary exponent) of which only the values 0 through 28 are valid. The rest of the bits are unused and must be zero.
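You can inspect that layout yourself with decimal.GetBits, which returns the three 32-bit chunks of the mantissa plus a flags word holding the scale and the sign:

```csharp
using System;

class DecimalLayout
{
    static void Main()
    {
        // decimal.GetBits returns four 32-bit ints:
        // [0..2] = the 96-bit mantissa (lo, mid, hi),
        // [3]    = flags: scale in bits 16-23, sign in bit 31.
        int[] bits = decimal.GetBits(1.5m);   // stored as 15 * 10^-1
        Console.WriteLine($"lo={bits[0]}, mid={bits[1]}, hi={bits[2]}");   // lo=15, mid=0, hi=0
        Console.WriteLine($"flags=0x{bits[3]:X8}");                        // 0x00010000

        int scale = (bits[3] >> 16) & 0xFF;
        bool negative = (bits[3] & unchecked((int)0x80000000)) != 0;
        Console.WriteLine($"scale={scale}, negative={negative}");          // scale=1, negative=False
    }
}
```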
The largest integer value that can be stored in 96 bits is 2^96 - 1, or 79,228,162,514,264,337,593,543,950,335. This is the absolute largest value that can be stored in a decimal, with all bits set in the mantissa and both the sign and the scale set to zero. In terms of integers, any value in the range ±(2^96 - 1) can be stored exactly.
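A quick check of those numbers, showing that decimal.MaxValue is exactly 2^96 - 1:

```csharp
using System;
using System.Numerics;

class MaxValueCheck
{
    static void Main()
    {
        // The full 96-bit mantissa with scale 0 is decimal.MaxValue.
        BigInteger maxMantissa = BigInteger.Pow(2, 96) - 1;
        Console.WriteLine(maxMantissa);        // 79228162514264337593543950335
        Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335
        Console.WriteLine(new BigInteger(decimal.MaxValue) == maxMantissa);  // True
    }
}
```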
The scale value takes those integers and shifts them right by a number of decimal places: at scale = 1 the value is divided by 10, at scale = 2 by 100, and so on, all the way up to scale = 28, where only the topmost digit (the 7 on the far left of that big number above) remains before the decimal point and the other 28 digits sit after it. That's as far as the scale goes. If your value is small and you divide it by 10^28 you get much closer to zero (as close as 1e-28), but you can have no digits past the 28th decimal place.
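The Decimal(int lo, int mid, int hi, bool isNegative, byte scale) constructor exposes the scale directly, so you can watch the mantissa being shifted right and see the 28-place limit being enforced:

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        // Same mantissa (1), different scales: value = mantissa / 10^scale.
        Console.WriteLine(new decimal(1, 0, 0, false, 0));    // 1
        Console.WriteLine(new decimal(1, 0, 0, false, 1));    // 0.1
        Console.WriteLine(new decimal(1, 0, 0, false, 28));   // 0.0000000000000000000000000001

        // The scale is capped at 28; anything larger is rejected.
        try { _ = new decimal(1, 0, 0, false, 29); }
        catch (ArgumentOutOfRangeException) { Console.WriteLine("scale > 28 is not allowed"); }
    }
}
```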
In fact, any value with absolute value less than 1 will lose precision: values in the range 0.1 <= v < 1 have at most 27 digits, in the range 0.01 <= v < 0.1 there are 26 digits, and so on. The more zeroes you have after the decimal point, the fewer digits of precision you have left.
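For example, 1/3 gets at most 28 decimal places, which is why multiplying back by 3 doesn't quite return 1:

```csharp
using System;

class PrecisionBelowOne
{
    static void Main()
    {
        // 1/3 cannot be represented exactly; decimal keeps at most 28 decimal places.
        decimal third = 1m / 3m;
        Console.WriteLine(third);       // 0.3333333333333333333333333333 (28 threes)
        Console.WriteLine(third * 3m);  // 0.9999999999999999999999999999 (not exactly 1)
    }
}
```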
By comparison, double is a 64-bit IEEE 754 'binary64' floating-point value composed of 52 bits of fraction, 11 bits of binary exponent (powers of 2 from 2^-1022 through 2^1023, roughly 10^-308 through 10^308), and a sign bit. Valid positive (and non-zero) values range from 5e-324 (double.Epsilon, the smallest subnormal value) to 1.7976931348623157e+308 (double.MaxValue)... but you won't ever get more than about 16 decimal digits' worth of accuracy in your calculations.
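The classic way to see the difference between the two representations: 0.1 and 0.2 have no exact binary representation, so double accumulates a tiny error that decimal avoids. (The exact text output below assumes the shortest round-trippable formatting used by .NET 5, the runtime that goes with C# 9; older runtimes print slightly different strings.)

```csharp
using System;

class DoubleVsDecimal
{
    static void Main()
    {
        Console.WriteLine(double.Epsilon);    // 5E-324
        Console.WriteLine(double.MaxValue);   // 1.7976931348623157E+308

        // Binary floating point cannot represent 0.1 or 0.2 exactly...
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);          // False
        Console.WriteLine(d.ToString("G17")); // 0.30000000000000004

        // ...while decimal stores decimal digits exactly within its 28-29 digit precision.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);         // True
    }
}
```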
There are a few cases where decimal is preferred over double, mostly because of its precision, but in almost all normal cases double is preferred for its greater absolute range and much greater speed. Depending on your use case you might even prefer float over double if speed matters more than precision.
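If you want a feel for the speed difference on your own machine, a rough sketch like the one below (not a proper benchmark: no warm-up, no statistical rigor) will usually show decimal arithmetic running an order of magnitude or more slower than double, though the exact ratio depends on hardware and runtime:

```csharp
using System;
using System.Diagnostics;

class SpeedSketch
{
    static void Main()
    {
        const int N = 10_000_000;

        // Hardware floating point: one division and one addition per iteration.
        var sw = Stopwatch.StartNew();
        double dSum = 0;
        for (int i = 1; i <= N; i++) dSum += 1.0 / i;
        sw.Stop();
        Console.WriteLine($"double : {sw.ElapsedMilliseconds} ms");

        // decimal arithmetic is implemented in software, so the same loop is much slower.
        sw.Restart();
        decimal mSum = 0;
        for (int i = 1; i <= N; i++) mSum += 1m / i;
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");
    }
}
```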