OCaml, for the type that it calls `float`, uses the `double` type of the underlying C/Unix platform, which is usually defined by that platform as IEEE 754's binary64 format.
In OCaml, the conversion to decimal is done in the old-fashioned way, with a fixed number of digits (camlspotter has already dug up the format, which is `%.12g`, with the same meaning in OCaml that this format has in C).
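To make this concrete, here is a minimal OCaml sketch. It assumes that `string_of_float` uses the same `%.12g` conversion mentioned above, and compares it with an explicit `Printf` call:

```ocaml
(* Printing 0.2 +. 0.01 with OCaml's fixed-precision decimal conversion.
   Twelve significant digits are not enough to reveal that the sum is not
   exactly 0.21, so both lines print "0.21". *)
let () =
  let x = 0.2 +. 0.01 in
  print_endline (string_of_float x);  (* assumed to use %.12g internally *)
  Printf.printf "%.12g\n" x
```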
Among modern languages (Java, JavaScript, Ruby), the fashion is to convert to decimal by emitting exactly as many digits as are needed for the decimal representation to convert back to the original floating-point number. So in Java, `0.21` is printed for, and only for, the `double` nearest to 0.21, which is not the rational 21/100, as that number is not exactly representable as a binary floating-point number.
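Java's algorithm finds the shortest decimal string that round-trips; OCaml has no built-in equivalent, but 17 significant digits are always enough for a binary64 value to round-trip, so `%.17g` gives a rough sketch of the same idea:

```ocaml
(* 17 significant digits always suffice for a binary64 value to survive a
   decimal round trip.  For 0.2 +. 0.01 this prints 0.21000000000000002,
   the same string Java's shortest-round-trip conversion produces. *)
let () =
  let x = 0.2 +. 0.01 in
  let s = Printf.sprintf "%.17g" x in
  print_endline s;
  assert (float_of_string s = x)  (* the round trip is exact *)
```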
One method is not better than the other. They both have surprising side effects for the unwary developer. In particular, the Java conversion method has led to many “Why does the value of my `float` change when I convert it to `double`?” questions on StackOverflow (answer: it doesn't, but `(double)0.1f` is printed with many additional digits after `0.100000` because the type `double` contains more values than `float`).
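OCaml has no single-precision literal like Java's `0.1f`, but the effect can be reproduced by rounding a value to binary32 with `Int32.bits_of_float` (which uses the IEEE 754 single-format bit layout) and widening it back. This is only a sketch of the phenomenon, not what Java does internally:

```ocaml
(* Round a binary64 value to the nearest binary32 value and back, the moral
   equivalent of Java's (double)0.1f, then print it with full precision. *)
let round_to_single x = Int32.float_of_bits (Int32.bits_of_float x)

let () =
  Printf.printf "%.17g\n" (round_to_single 0.1)
  (* prints 0.10000000149011612: the binary32 value nearest to 0.1,
     viewed at binary64 precision *)
```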
Anyway, both OCaml and Java compute the same floating-point number for `0.2 + 0.01`, because they both closely follow IEEE 754; they just print it differently. OCaml prints a fixed number of digits that does not go far enough to show that the number is neither 21/100 nor the double-precision floating-point number closest to 21/100. Java prints just enough digits to show that the number is not the `double` closest to 21/100.
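One way to convince yourself of this from the OCaml side: the decimal string Java prints, read back as an OCaml literal, is exactly the value OCaml computed, and the underlying binary64 bits can be inspected directly (a sketch using only the standard library):

```ocaml
(* The value OCaml computes for 0.2 +. 0.01 is exactly the one Java prints
   as 0.21000000000000002, and it is not the double nearest to 0.21. *)
let () =
  let x = 0.2 +. 0.01 in
  assert (x = 0.21000000000000002);
  assert (x <> 0.21);
  Printf.printf "%Lx\n" (Int64.bits_of_float x)  (* the raw binary64 bits *)
```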