The nice thing about the type `decimal`, when displayed in our usual decimal notation, is that it is WYSIWYG: if you print enough decimal digits (and `0.3333333333333333333333333333M` certainly looks like enough), you see the exact number the machine is working with. There is no surprise that three times that number makes `0.9999999999999999999999999999M`: you can do the multiplication with pen and paper and reproduce the result(2).
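The pen-and-paper check is easy to mechanize. Here is a minimal sketch, in C only because that is the language of the example further down, that redoes the schoolbook multiplication by 3 on the 28 significand digits of `0.3333333333333333333333333333M`:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The 28 significand digits of 0.3333333333333333333333333333M */
    char digits[] = "3333333333333333333333333333";
    size_t n = strlen(digits);
    int carry = 0;
    /* Schoolbook multiplication by 3, rightmost digit first */
    for (size_t i = n; i-- > 0; ) {
        int d = (digits[i] - '0') * 3 + carry;
        digits[i] = '0' + d % 10;
        carry = d / 10;
    }
    /* For a string of 3s, no carry propagates out of the leading digit */
    printf("0.%sM\n", digits); /* 0.9999999999999999999999999999M */
    return 0;
}
```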
In binary, it would take many more decimal digits to see the exact number being represented, and they are usually not all printed (but the situation would be just as simple if they were). It is only a coincidence that, in this case, the binary multiplication of `3.0` by `1.0 / 3.0` makes `1.0`. The property holds for some numbers but does not have to hold for all numbers. In fact, the result may not be `1.0` at all, and your language may simply be printing fewer decimal digits than would reveal this. An exponential form `1.DD…DDEXXX` with 16 digits after the dot suffices to distinguish all double-precision numbers, although it does not show the exact value of the number.
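To see a divisor for which the coincidence does not happen, 49 is a commonly cited example (assuming IEEE 754 binary64 and round-to-nearest). Printing with 16 digits after the dot reveals the difference:

```c
#include <stdio.h>

int main(void)
{
    double third = 1 / 3.0;        /* rounded to the nearest double */
    double forty_ninth = 1 / 49.0; /* likewise */
    /* 16 digits after the dot distinguish any two doubles */
    printf("%.16e\n", 3 * third);        /* 1.0000000000000000e+00 : lucky */
    printf("%.16e\n", 49 * forty_ninth); /* not 1.0000000000000000e+00 */
    return 0;
}
```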
So, in summary:

- `decimal` is WYSIWYG: you got 0.99… because you multiplied 0.33… by 3;
- the result in binary may not be 1.0, and may only print as such because of the default, limited number of decimals your language uses for binary floating-point;
- even if it is 1.0, that is a coincidence that might not have happened with another number in place of 3.0, as the sketch above shows.
Miscellaneous notes
- If F# is like OCaml in this respect, you can print enough decimals to distinguish `1.0` from another `float` with `Printf.printf "%.16e"` (C's `printf` understands the same format; a short sketch follows this list).
- F#'s `decimal` type is WYSIWYG, but you have to remember that some numbers have 28 digits of precision and most have 29. See supercat's answer or the comments below for details.
- The hexadecimal notation has the same WYSIWYG property for binary floating-point that the decimal notation has for `decimal`. C99, of all languages and years, has the best support for fine floating-point manipulation, and it supports hexadecimal for both input and output.
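As promised above, a minimal sketch of the `%.16e` point, using `nextafter` from C99's `<math.h>` to obtain the double immediately below `1.0`:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double below = nextafter(1.0, 0.0); /* the largest double below 1.0 */
    printf("%.16e\n", 1.0);   /* 1.0000000000000000e+00 */
    printf("%.16e\n", below); /* 9.9999999999999989e-01 */
    return 0;
}
```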
An example of the hexadecimal output:
```c
#include <stdio.h>

int main(void)
{
    double d = 1 / 3.0;           /* 1/3 rounded to the nearest double */
    printf("%a\n%a\n", d, 3 * d); /* %a prints the exact value in hexadecimal */
    return 0;
}
```
Executing produces:

```
$ gcc -std=c99 t.c && ./a.out
0x1.5555555555555p-2
0x1p+0
```
With pen and paper, we can multiply `0x1.5555555555555p-2` by `3`. We obtain `0x3.FFFFFFFFFFFFFp-2`, or `0x1.FFFFFFFFFFFFF8p-1` after normalization. This number is not representable exactly as a binary64 floating-point number (its significand is one bit too wide), and the “nearest” representable number, returned by the multiplication, is `1.0`. (It is in fact exactly halfway between `0x1.FFFFFFFFFFFFFp-1` and `1.0`, so the rule that ties must be rounded to the nearest even number applies. Of the two equally near alternatives, `1.0` is the “even” one, since its significand ends in a 0 bit.)
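Since C99 also accepts hexadecimal floating-point on input, the pen-and-paper result can be checked directly. The sketch below feeds the exact product to `strtod`, which performs the same round-to-nearest-even conversion:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* The exact product 3 * 0x1.5555555555555p-2, one bit too wide
       for binary64; strtod rounds it to the nearest double. */
    double exact = strtod("0x1.FFFFFFFFFFFFF8p-1", NULL);
    /* The other of the two equally near candidates. */
    double below = strtod("0x1.FFFFFFFFFFFFFp-1", NULL);
    printf("%a\n", exact); /* 0x1p+0 : the tie went to the even side */
    printf("%a\n", below); /* 0x1.fffffffffffffp-1 */
    return 0;
}
```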