
This is the table from the Numeric Types section of *C# 9.0 in a Nutshell*, which lists the numeric types in C#:

[Image: table of C# predefined numeric types showing their sizes and approximate ranges]

Why does decimal, compared with double, use more space to store a narrower range of numbers in C#?

Palle Due
Edalat Feizi
  • Because decimal is precise and double [is not](https://stackoverflow.com/q/588004/11683). – GSerg Apr 12 '22 at 07:48
  • @GSerg to be more precise: `Decimal` is _more_ precise (much more, and with decimal precision instead of binary). – Franz Gleichmann Apr 12 '22 at 07:49
  • To add more information: check the [MS Docs on `Decimal`](https://learn.microsoft.com/en-us/dotnet/api/system.decimal?view=net-6.0), especially the [remarks](https://learn.microsoft.com/en-us/dotnet/api/system.decimal?view=net-6.0#remarks) section, doubly especially the sentence "The Decimal value type is appropriate for financial calculations that require large numbers of significant integral and fractional digits and no round-off errors" – MindSwipe Apr 12 '22 at 07:54
  • Intuitively most answers focus on precision (which is obviously related). However, the reason why the _extent_ of the number range differs lies in how the exponent works. `double` uses an 11-bit binary exponent which is applied directly to the value. `decimal` uses fewer bits (8 bits according to the docs) to store an exponent between 0 and 28, which is then used to scale the value by powers of 10 (allowing decimal precision). But as others have pointed out, the remaining bits are used to massively increase the number of values that can be represented _within_ that range. – Excelcius Apr 12 '22 at 08:57

2 Answers


A decimal has about 28-29 significant digits of precision, while a double has only about 15-17. Storing that extra precision is why a decimal needs 16 bytes while a double needs only 8 bytes.

See https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
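
A quick sketch of both points (assuming a plain console program; the class name is just an example):

```csharp
using System;

class SizeAndPrecisionDemo
{
    static void Main()
    {
        // sizeof is a compile-time constant for the built-in numeric types.
        Console.WriteLine(sizeof(decimal)); // 16
        Console.WriteLine(sizeof(double));  // 8

        // Roughly 28-29 significant digits survive in a decimal...
        decimal dec = 1.2345678901234567890123456789m;
        Console.WriteLine(dec);

        // ...but only roughly 15-17 survive in a double.
        double dbl = 1.2345678901234567890123456789;
        Console.WriteLine(dbl.ToString("G17")); // about 1.2345678901234568
    }
}
```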

Wollmich

The decimal type has higher precision but a smaller range of exponents compared to double. It's useful in situations where you need accurate results to more than about 16 significant digits (the effective precision limit of the double type) while the magnitude of the values stays close to or above 1.

The .NET Decimal type consists of a 96-bit unsigned integer value (the significand or mantissa), a sign bit and an 8-bit scale value (called the exponent, although it doesn't behave like a binary exponent), of which only the values 0 through 28 are ever used. The rest of the bits are unused and must be zero/unset.
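
You can peek at that layout with `decimal.GetBits`; here's a small sketch (the variable names are just examples, and the masks follow the documented `System.Decimal` layout):

```csharp
using System;

class DecimalLayoutDemo
{
    static void Main()
    {
        // GetBits returns four ints: lo, mid, hi form the 96-bit integer,
        // and the flags word holds the scale in bits 16-23 and the sign in bit 31.
        int[] parts = decimal.GetBits(-123.4500m);

        int lo    = parts[0];
        int mid   = parts[1];
        int hi    = parts[2];
        int flags = parts[3];

        int scale     = (flags >> 16) & 0xFF;
        bool negative = (flags & int.MinValue) != 0;

        Console.WriteLine($"integer  = {hi}, {mid}, {lo}"); // 0, 0, 1234500
        Console.WriteLine($"scale    = {scale}");           // 4, i.e. 1234500 / 10^4
        Console.WriteLine($"negative = {negative}");        // True
    }
}
```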

The largest integer value that can be stored in 96 bits is (2^96)-1, or 79,228,162,514,264,337,593,543,950,335. This is the absolute largest value a decimal can hold, with every bit of the mantissa set and both the sign bit and the scale set to zero. In terms of integer values, any number in the range ±((2^96)-1) can be stored exactly, with no loss of precision.
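
You can check that figure against `decimal.MaxValue` (a sketch assuming a modern .NET project where `System.Numerics.BigInteger` is available):

```csharp
using System;
using System.Numerics;

class MaxValueDemo
{
    static void Main()
    {
        // (2^96) - 1, computed with BigInteger for comparison.
        BigInteger max96 = (BigInteger.One << 96) - 1;

        Console.WriteLine(max96);            // 79228162514264337593543950335
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335
        Console.WriteLine(decimal.MinValue); // -79228162514264337593543950335
    }
}
```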

The scale value takes those integers and shifts them right by a number of decimal places. At scale = 1 the value is divided by 10, at scale = 2 by 100, and so on, all the way up to scale = 28, where only the top digit (the 7 on the far left of that big number above) remains to the left of the decimal point and the other 28 digits become fractional digits. And that's as far as scale goes. If the stored integer is small and it's being divided by 10^28 you get much closer to zero (as close as 1e-28), but you can never have digits past the 28th decimal place.
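
A sketch of that shifting, using the `decimal(int lo, int mid, int hi, bool isNegative, byte scale)` constructor (the integer 1234567 is just an arbitrary example):

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        // The same stored integer (1234567) displayed with different scale values.
        for (byte scale = 0; scale <= 28; scale += 7)
        {
            decimal d = new decimal(1234567, 0, 0, false, scale);
            Console.WriteLine($"scale {scale,2}: {d}");
        }
        // scale  0: 1234567
        // scale  7: 0.1234567
        // scale 14: 0.00000001234567
        // scale 21: 0.000000000000001234567
        // scale 28: 0.0000000000000000000001234567
    }
}
```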

In fact any value with an absolute value less than 1 has fewer significant digits available. Values in the range 0.1 <= v < 1 have at most 28 significant digits, values in the range 0.01 <= v < 0.1 have at most 27, and so on. The more zeroes you have after the decimal point, the fewer digits of precision you have left.
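
A quick way to see this is with division results that can't terminate (a sketch; the last digit follows decimal's round-to-nearest behaviour):

```csharp
using System;

class SmallValueDemo
{
    static void Main()
    {
        // 1/3 rounds at the 28th decimal place, giving 28 significant digits...
        Console.WriteLine(1m / 3m);    // 0.3333333333333333333333333333
        // ...1/30 also rounds at the 28th decimal place, leaving 27 significant digits...
        Console.WriteLine(1m / 30m);   // 0.0333333333333333333333333333
        // ...and every further leading zero costs another digit of precision.
        Console.WriteLine(1m / 3000m); // 0.0003333333333333333333333333
    }
}
```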

By comparison, double is a 64-bit IEEE 754 'binary64' floating point value composed of 52 bits of fraction, 11 bits of binary exponent (powers of 2 from 2^-1022 through 2^+1023, roughly 10^-308 through 10^308, with subnormals reaching down to about 10^-323) and a sign bit. Valid positive (and non-zero) values range from 5e-324 (double.Epsilon) to 1.7976931348623157e+308 (double.MaxValue)... but you won't ever get more than about 16 decimal digits worth of accuracy in your calculations.
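
A companion sketch for double, pulling the three fields out with `BitConverter.DoubleToInt64Bits` (the masks follow the IEEE 754 binary64 layout; the sample value is arbitrary):

```csharp
using System;

class DoubleLayoutDemo
{
    static void Main()
    {
        double value = -123.45;
        long bits = BitConverter.DoubleToInt64Bits(value);

        long sign     = (bits >> 63) & 0x1;          // 1 sign bit
        long exponent = (bits >> 52) & 0x7FF;        // 11 exponent bits, biased by 1023
        long fraction = bits & 0xFFFFFFFFFFFFFL;     // 52 fraction bits

        Console.WriteLine($"sign = {sign}, exponent = 2^{exponent - 1023}, fraction = 0x{fraction:X}");

        Console.WriteLine(double.Epsilon);  // smallest positive subnormal, about 4.94e-324
        Console.WriteLine(double.MaxValue); // about 1.7976931348623157e+308
    }
}
```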

There are a few cases where decimal is preferred over double, mostly because of the precision, but in almost all other cases double is preferred for its greater absolute range and much greater speed. Depending on your use case you might even prefer float over double if speed matters more than precision.
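
A closing sketch of that trade-off, using the classic accounting-style example (the loop count is arbitrary):

```csharp
using System;

class SumDemo
{
    static void Main()
    {
        double dsum = 0.0;
        decimal msum = 0.0m;

        // 0.1 has no exact binary representation, so the double sum drifts;
        // 0.1m is exact in base 10, so the decimal sum does not.
        for (int i = 0; i < 1000; i++)
        {
            dsum += 0.1;
            msum += 0.1m;
        }

        Console.WriteLine(dsum.ToString("R")); // slightly off 100, e.g. 99.9999999999986
        Console.WriteLine(msum);               // exactly 100.0
    }
}
```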

Corey
  • Aren't there also special use cases where Decimal is used not just because it's more precise in a general sense, but because it's precise for base-10 numbers and the system uses base-10 rounding internally? E.g. banking? – Joooeey May 16 '23 at 09:14
  • @Joooeey Decimal floating point is useful for more than just money, but accounting systems are much more finicky about precision and rounding issues from `float` and `double` formats than most. Engineers might like `decimal` in some cases, but there's probably more accounting code that uses it than anything else. – Corey May 16 '23 at 23:26