
I do not understand the terms binary floating point and decimal floating point. I know that floats (32-bit) and doubles (64-bit) are binary floating point, and decimals (128-bit) are decimal floating point. I understand that decimals are base 10 and the binary types are base 2. However, I do not understand why decimals are base 10, as surely everything is base 2 in the end? Why are decimals base 10, rather than base 2 like floats and doubles?

I have looked at many websites today, e.g. this one: Difference between decimal, float and double in .NET?

– w0051977

2 Answers


Short answer: You can represent "simple" decimal numbers such as 0.1 exactly in a decimal representation, but only approximately in binary.

Typically the reason people use "decimal" types is to represent money, which usually involves both large values and small fractions (hundredths). A decimal type is best for keeping track of these kinds of numbers and does not carry the risk of round-off error producing results like 10.09999999999999.
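For instance, a minimal C# sketch (the class and loop are just illustration) of summing ten payments of 0.1 in each representation:

```csharp
using System;

class MoneyDemo
{
    static void Main()
    {
        double dSum = 0.0;   // binary floating point
        decimal mSum = 0.0m; // decimal floating point

        // Add ten payments of 0.1 in each representation.
        for (int i = 0; i < 10; i++)
        {
            dSum += 0.1;
            mSum += 0.1m;
        }

        Console.WriteLine(dSum == 1.0);          // False: rounding error has crept in
        Console.WriteLine(dSum.ToString("G17")); // 0.99999999999999989
        Console.WriteLine(mSum == 1.0m);         // True: each 0.1m is exact
        Console.WriteLine(mSum);                 // 1.0
    }
}
```

The double total compares unequal to 1.0, which is exactly the kind of surprise you do not want in accounting code.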

– Greg Hewgill

A floating-point number is made of a sign (+ or -), a significand and an exponent. The significand is made of a number of base-B digits. The interpretation of the exponent e is that the number should be multiplied by B^e. Choosing B as 2 or 10 does not make a difference for small integers (0, 1, 2, … can be represented exactly both in binary floating-point and in decimal floating-point), but the fractional numbers and the very large numbers that can be represented exactly are not the same in binary and decimal.

Examples: binary floating-point can represent 2^-100 and 2^100 exactly, but decimal128 cannot. Decimal floating-point can represent 0.1 exactly, but binary floating-point cannot (at any precision).
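For illustration, a short C# sketch of both directions (using double and .NET's decimal rather than IEEE decimal128, so take it as an approximation of the point above):

```csharp
using System;

class ExactnessDemo
{
    static void Main()
    {
        // 0.1 has no finite base-2 expansion, so double holds only the nearest
        // representable value; decimal holds 0.1 exactly.
        Console.WriteLine((0.1).ToString("G17")); // 0.10000000000000001
        Console.WriteLine(0.1m);                  // 0.1

        // Powers of two, on the other hand, are exact in binary floating point:
        // halving 1.0 a hundred times and doubling it back recovers exactly 1.0.
        double p = 1.0;
        for (int i = 0; i < 100; i++) p /= 2.0;   // p is exactly 2^-100
        for (int i = 0; i < 100; i++) p *= 2.0;
        Console.WriteLine(p == 1.0);              // True
    }
}
```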

You are right that everything is binary in the end. A number of schemes are used to represent decimal floating-point as bits. The significand is the tricky part to encode. The IEEE standardisation committee devised two schemes, one that is convenient for software and one that is convenient for hardware decoding. I do not know what .NET uses. It probably uses something similar to the “Binary integer significand field” technique from the IEEE 754 floating-point standard, where the allocated bits are used to encode, in binary, a number between 0 and B^n - 1, since this is what works best in software.
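.NET's `decimal.GetBits` exposes the raw layout of a `decimal`, which is indeed a binary integer significand plus a decimal scale; a small C# sketch (the decoding of the flags word follows the documented layout):

```csharp
using System;

class DecimalLayoutDemo
{
    static void Main()
    {
        // decimal.GetBits returns four ints: a 96-bit binary integer significand
        // (lo, mid, hi) and a flags word carrying the scale and the sign.
        int[] bits = decimal.GetBits(0.1m);
        int lo = bits[0], mid = bits[1], hi = bits[2], flags = bits[3];

        int scale = (flags >> 16) & 0xFF;              // power of ten to divide by (0..28)
        bool isNegative = (flags & int.MinValue) != 0; // top bit is the sign

        Console.WriteLine($"significand: {lo} {mid} {hi}"); // 1 0 0
        Console.WriteLine($"scale:       {scale}");         // 1, so the value is 1 / 10^1
        Console.WriteLine($"negative:    {isNegative}");    // False
    }
}
```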

– Pascal Cuoq
  • FWIW, .NET uses a 96-bit binary significand, a 5-bit decimal scale factor (they don't call it an exponent) with valid values of 0..28 (28 stands for 10^-28), and a sign bit. This allows people to write a literal like `1.000m` and have it stored as a significand of `1000` with scale factor `3` (i.e. `10^-3`), preserving the decimals given. – Rudy Velthuis Jul 13 '14 at 23:09
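To see the scale preservation described in the comment, a quick C# check with `decimal.GetBits` (196608 is 0x00030000, i.e. a scale of 3):

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        // 1.000m is stored as significand 1000 with scale 3, while 1m is stored as
        // significand 1 with scale 0. The two compare equal but print differently,
        // because the scale is preserved.
        Console.WriteLine(string.Join(", ", decimal.GetBits(1.000m))); // 1000, 0, 0, 196608
        Console.WriteLine(string.Join(", ", decimal.GetBits(1m)));     // 1, 0, 0, 0
        Console.WriteLine(1.000m == 1m);                               // True
        Console.WriteLine(1.000m);                                     // 1.000
        Console.WriteLine(1m);                                         // 1
    }
}
```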