`float` is 32 bits and is single precision (a floating-point format). `double` is 64 bits and is double precision (a floating-point format). `decimal` is 128 bits, but is it quadruple precision (a floating-point format)?
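For reference, those sizes can be checked directly (a quick sketch in plain C#; `sizeof` needs no `unsafe` context for these built-in types):

```csharp
using System;

// Each of these is a compile-time constant for the built-in numeric types.
Console.WriteLine(sizeof(float));   // 4 bytes  = 32 bits
Console.WriteLine(sizeof(double));  // 8 bytes  = 64 bits
Console.WriteLine(sizeof(decimal)); // 16 bytes = 128 bits
```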
- https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types – Sep 19 '19 at 06:55
- `Decimal` is a special type; it's not a *floating point* value. – Dmitry Bychenko Sep 19 '19 at 07:09
- @DmitryBychenko – unless I'm misremembering, it's floating point, but *decimal* floating point, not the more usually encountered binary floating point. – Damien_The_Unbeliever Sep 19 '19 at 07:17
- Maybe a better definition of `Decimal` is an *integer* `value` (e.g. `123456`) with a *decimal* `scale` (e.g. `2`) which, together, represent `1234.56`. – Dmitry Bychenko Sep 19 '19 at 07:20
- I find this question unclear. The terms "single precision" and "double precision" are applied specifically to those IEEE types, in a specific way. The word "precision" doesn't really mean the same thing in other contexts; it's just a way of comparing those particular types of floating-point representations. ... – Peter Duniho Sep 19 '19 at 07:24
- ... The "precision" of a `decimal` number is always 96 binary integer digits. Since `double` has a 52-bit mantissa, in some sense the answer is "no, `decimal` is not quadruple precision". In another sense, though, even stating that answer gives too much credit to the idea that you could compare/contrast those types in that way in the first place. – Peter Duniho Sep 19 '19 at 07:24
- Maybe what you're looking for is the discussion found here: https://stackoverflow.com/questions/33490408/mathematically-determine-the-precision-and-scale-of-a-decimal-value? Unfortunately, I can't really tell what you're looking for; hence "unclear". – Peter Duniho Sep 19 '19 at 07:25
- This feels like an XY problem (https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem). **Why** are you asking? – mjwills Sep 19 '19 at 08:14
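Dmitry's "integer `value` plus decimal `scale`" description can be seen directly via `decimal.GetBits`, which returns the four 32-bit words of the internal representation (a minimal sketch):

```csharp
using System;

// 1234.56m is stored as the integer 123456 with a scale of 2 (i.e. 123456 × 10^-2).
int[] bits = decimal.GetBits(1234.56m);
Console.WriteLine(bits[0]);                // 123456 (low 32 bits of the 96-bit integer)
Console.WriteLine((bits[3] >> 16) & 0xFF); // 2 (the scale, stored in bits 16–23)
```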
1 Answer
No.
The first important point is that single, double, quadruple, etc. denote *binary* floating-point numbers. `decimal` is a *decimal* floating-point number, so it doesn't fit those IEEE 754 categories.
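A quick sketch of what the base difference means in practice (plain C#):

```csharp
using System;

// 0.1 has no finite base-2 representation, but an exact base-10 one.
double d = 0.1 + 0.2;
decimal m = 0.1m + 0.2m;
Console.WriteLine(d == 0.3);  // False: binary rounding error (d is 0.30000000000000004)
Console.WriteLine(m == 0.3m); // True: the decimal arithmetic is exact here
```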
So, what does the standard say?
For binary128 ("quadruple precision"), you have a 113-bit significand and a 15-bit exponent. This works out to about 34 decimal digits, with an exponent between −16382 and +16383. Note that the base of the exponent is two (e.g. the maximum is 2^16383), hence "binary".
decimal128, the decimal floating-point equivalent, has a 110-bit significand and 12 bits for the exponent. However, the base of the exponent is ten, so the range of decimal128 is far greater than that of binary128 (~1E4932 for binary128 versus ~1E6145 for decimal128).
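A back-of-the-envelope check of those ranges, converting binary128's base-2 exponent to base 10 (a sketch, not an exact bound):

```csharp
using System;

// binary128's maximum is just under 2^16384; in base 10 that is about 10^4932.
Console.WriteLine(16384 * Math.Log10(2)); // ≈ 4932.08, matching the ~1E4932 figure
// decimal128's exponent is already base 10, so its maximum is simply ~1E6145.
```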
How does `decimal` compare with the standard?
28 significant digits, ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335.
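Those bounds are exposed on the type itself, and the magnitude is exactly 2^96 − 1, matching the 96-bit integer significand mentioned in the comments (a quick check; `BigInteger` is used only to confirm the arithmetic):

```csharp
using System;
using System.Numerics;

Console.WriteLine(decimal.MaxValue);          // 79228162514264337593543950335
Console.WriteLine(decimal.MinValue);          // -79228162514264337593543950335
Console.WriteLine(BigInteger.Pow(2, 96) - 1); // 79228162514264337593543950335
```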
So no, it's not quadruple precision, because it's decimal floating point; but even if you compare it with the equivalent level of decimal floating point in the IEEE 754 standard, the range doesn't come close, though the number of significant digits is close enough. `decimal` was designed for monetary operations, so the range was quite sufficient (and will remain sufficient, even given current rates of monetary inflation, for quite a while).
