2

I know that this may be a common question, but I have never found an answer (maybe it has to do with not knowing how to search Google correctly, so if somebody can point me to a reference, I will remove this question).

Why do doubles work how they do in terms of representing the right side of the decimal with an inverse power of 2, and the left side of the decimal with a power of 2? I know that it allows very large numbers to be represented, but are there any other advantages? The .NET framework has the decimal data structure available, which seems much more logical to use because it is how we represent numbers in human notation.

Really, my question is why doubles were created the way they were, instead of something like decimal (which seems to be far less common) being created first.

Josh Lee
Dan Drews
  • 3
    Duplicate question. There are a lot of questions here on [SO], not to mention the whole internet, about this ([Most importantly: What every computer scientist should know about floating point arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)). Also, the format is not fit for this site... But to provide an answer: our computers are binary machines, engineered to process binary data efficiently; that's why we have binary floating point structures. – ppeterka Sep 03 '13 at 15:43
  • 2
    http://stackoverflow.com/questions/2545567/in-net-how-do-i-choose-between-a-decimal-and-a-double http://stackoverflow.com/questions/803225/when-should-i-use-double-instead-of-decimal http://stackoverflow.com/questions/6535343/c-how-is-double-number-e-g-123-45-stored-in-a-float-variable-or-double-vari – Josh Lee Sep 03 '13 at 15:44
  • 2
    The fact that something works as "humans do it" does not mean it is logical at all. Look how much space we waste by using only 10 different characters to note numbers instead of using full alphabet! Usually more logical is what engineers do, not humans. xD – luk32 Sep 03 '13 at 15:46

2 Answers

3

Your confusion seems to be unfounded. The right side of the radix point is always represented with inverse powers of the base, and the left side with powers of the base. This is true for base 10 and base 2 alike. Binary floating point numbers store an exponent that controls where the radix point sits relative to the mantissa.
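For a concrete picture, here is a minimal Python sketch (assuming IEEE 754 `binary64` doubles, which is what Python floats are on every mainstream platform) that pulls the sign, exponent, and mantissa fields out of a double:

```python
import math
import struct

# Reinterpret the 64 bits of a double as an unsigned integer.
bits = struct.unpack(">Q", struct.pack(">d", 6.25))[0]
sign = bits >> 63
exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent
mantissa = bits & ((1 << 52) - 1)      # 52-bit fraction field

# 6.25 = 1.5625 * 2**2, so the unbiased exponent is 2 and the
# significand (with the implicit leading 1) is 1.5625.
print(sign, exponent - 1023)           # 0 2
print(1 + mantissa / 2**52)            # 1.5625
print(math.frexp(6.25))                # (0.78125, 3) -- normalized to [0.5, 1)
```

Moving the exponent up or down just slides the radix point along the same mantissa bits, which is the mechanism the answer describes.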

As for why they exist: binary floating point notation has two convenient properties:

  1. It is relatively fast, because it uses binary arithmetic
  2. It can represent both very large and very small numbers with the same relative accuracy.

Those properties make them pretty good for e.g. physical calculations, because a small error in the last place doesn't matter much, but make them unusable for monetary applications (where you want decimal, despite it being much slower for computation).
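The monetary point can be seen in a few lines of Python, whose `decimal.Decimal` plays the same role as .NET's `decimal`:

```python
from decimal import Decimal

# Binary floating point: 0.1 has no exact base-2 representation,
# so tiny errors creep in and accumulate.
print(0.1 + 0.2 == 0.3)                         # False
print(sum(0.1 for _ in range(10)))              # 0.9999999999999999

# Decimal arithmetic keeps base-10 quantities exact, as money requires.
print(Decimal("0.10") + Decimal("0.20"))        # 0.30
print(sum(Decimal("0.10") for _ in range(10)))  # 1.00
```

The decimal version is exact precisely because its representation matches the base the amounts were written in.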

Joey
  • If one scales `double` values such that whole quantities of the smallest currency unit are represented as whole-number `double` values, and one rounds them to the nearest unit at suitable times, they'll be just as accurate as `Decimal` values treated similarly. Further, if one performs an inexact `Decimal` division without rounding the result, `Decimal` will behave like a floating-point type and cease to honor the rule that (a+b)+c should equal a+(b+c). The fact that one can represent $1.23 as `1.23m` rather than `123.0` is convenient, but financial maths really should use fixed-point types. – supercat Sep 03 '13 at 16:51
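A minimal Python sketch of the scaled-`double` approach described in the comment above (the 8.25% tax rate and the helper name are made up for illustration):

```python
def add_cents(a_cents: float, b_cents: float) -> float:
    # Round to the nearest whole cent after each operation, as the
    # comment suggests, so errors never accumulate past half a cent.
    return float(round(a_cents + b_cents))

price_cents = 123.0                             # $1.23 stored as whole cents
tax_cents = float(round(price_cents * 0.0825))  # hypothetical 8.25% tax rate
total_cents = add_cents(price_cents, tax_cents)
print(total_cents)                              # 133.0, i.e. $1.33
```

Since whole numbers up to 2^53 are represented exactly in a double, the intermediate values here carry no representation error at all; only the deliberate rounding steps change the amounts.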
0

The FP format packs the maximum amount of precision into one-"word" or two-"word" objects, while also adding an exponent so that scientific calculations¹ involving large or small values can be conducted with equal precision. Because the objects fit in machine words, they fit into registers, and they are supported directly in CPU hardware and on GPUs, so they are really, really fast.²

The decimal formats are slower and larger, and they are almost never supported by hardware, but they also aren't needed for elaborate scientific calculations, so that doesn't matter. We can count beans in software easily enough. The one advantage the decimal formats have is that the numbers we write in real life (0.10, 0.11, 0.12, ...) can be represented exactly, and that really helps for accounting. (Strangely, because of our use of base 10 in real life, almost none of the fractional numbers we write in commerce can be represented exactly in base 2.)
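A quick Python illustration of that last point: converting the double nearest to 0.1 into an exact `Decimal` or `Fraction` exposes the base-2 rounding, while a value whose denominator is a power of two comes out exact:

```python
from decimal import Decimal
from fractions import Fraction

# The double nearest to 0.1 is not 0.1; Decimal(float) shows its exact value.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# As an exact ratio, that same double has a power-of-two denominator (2**55).
print(Fraction(0.1).denominator == 2**55)   # True

# 0.5 = 1/2 is a power of two, so it is represented exactly.
print(Decimal(0.5))                         # 0.5
```

Any decimal fraction whose lowest-terms denominator has a factor other than 2 (so 0.1, 0.11, 0.12, but not 0.25 or 0.5) is a repeating binary fraction and must be rounded to fit in a double.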

Either format can be used for the opposite application with enough kludges and careful programming, but there wouldn't be much point in it.

1. It turns out that even though the precision is limited, no physical constant is known to anywhere near the precision of the double data type. So, they really are exactly what is needed for these types of calculations.

2. Fast beyond belief these days. Every modern GPU would have ranked as the world's fastest supercomputer if you could take it back in time just a few years.

DigitalRoss