I suspect Rust is producing just enough digits to uniquely distinguish the floating-point values from the neighboring values representable in the type.
```rust
let a: f64 = 0.1 + 0.2;
```
`0.1` and `0.2` are converted to the `f64` values 0.1000000000000000055511151231257827021181583404541015625 and 0.200000000000000011102230246251565404236316680908203125. Adding these produces the `f64` value 0.3000000000000000444089209850062616169452667236328125 (this includes rounding the real-number-arithmetic result to the nearest value representable in `f64`).
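These exact decimal expansions can be inspected directly: formatting with an explicit precision prints the correctly rounded decimal digits of the stored binary value, rather than the shortest round-trip form (a small sketch using only `format!` from the standard library):

```rust
fn main() {
    // With an explicit precision, Rust prints the exact decimal
    // expansion of the stored f64 (padded with trailing zeros).
    // 0.1 is stored as 3602879701896397 / 2^55, so 55 digits suffice.
    println!("{:.55}", 0.1f64); // 0.1000000000000000055511151231257827021181583404541015625
    println!("{:.55}", 0.2f64);
    println!("{:.55}", 0.1f64 + 0.2f64);
}
```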
The neighboring `f64` values are 0.299999999999999988897769753748434595763683319091796875 and 0.300000000000000099920072216264088638126850128173828125. Observe that converting “0.3” to `f64` would yield 0.299999999999999988897769753748434595763683319091796875, because that value is closer to 0.3 than 0.3000000000000000444089209850062616169452667236328125 is. Thus, when formatting 0.3000000000000000444089209850062616169452667236328125, we must produce “0.30000000000000004” to distinguish it.
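This round-trip behavior is easy to verify (a sketch relying only on `format!` and `str::parse` from the standard library):

```rust
fn main() {
    let sum: f64 = 0.1 + 0.2;

    // The shortest decimal string that still uniquely identifies `sum`:
    let s = format!("{}", sum);
    assert_eq!(s, "0.30000000000000004");

    // "0.3" parses to the neighboring f64 *below* `sum`, not to `sum` itself:
    let parsed: f64 = "0.3".parse().unwrap();
    assert!(parsed < sum);

    // The printed string parses back to exactly the same value:
    assert_eq!(s.parse::<f64>().unwrap(), sum);
}
```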
```rust
let b: f32 = 0.1 + 0.2;
```
Here the sum is rounded to the `f32` value 0.300000011920928955078125. The neighboring `f32` values are 0.2999999821186065673828125 and 0.3000000417232513427734375. Observe that converting “0.3” to `f32` yields 0.300000011920928955078125 (the value we are printing), not 0.2999999821186065673828125, because 0.3 is closer to the former than to the latter. Thus, producing “0.3” suffices to distinguish 0.300000011920928955078125.
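The same check in the `f32` case (a sketch, again using only the standard library):

```rust
fn main() {
    let sum: f32 = 0.1 + 0.2;

    // The shortest round-trip representation really is just "0.3"...
    assert_eq!(format!("{}", sum), "0.3");

    // ...because "0.3" parses back to this very f32 value.
    assert_eq!("0.3".parse::<f32>().unwrap(), sum);
}
```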
```rust
let c: f32 = 0.30000000000000004;
let d: f32 = 0.300000012;
```
These set `c` and `d` to the same `f32` value as `b`, so they get the same output.
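This can be confirmed directly: both literals round to the same nearest `f32`, so all three variables compare equal and print identically (a minimal sketch):

```rust
fn main() {
    let b: f32 = 0.1 + 0.2;
    // Each decimal literal is rounded to the nearest f32,
    // which is 0.300000011920928955078125 in every case.
    let c: f32 = 0.30000000000000004;
    let d: f32 = 0.300000012;

    assert_eq!(b, c);
    assert_eq!(b, d);
    println!("{} {} {}", b, c, d); // 0.3 0.3 0.3
}
```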