Both converting “.35” to `float` and converting “.35” to `double` produce a value that differs from .35, because the floating-point formats use a binary base, so the exact mathematical value must be approximated using powers of two (negative powers of two in this case). Because the `float` format uses fewer bits, its result is less precise and, in this case, less accurate. The `float` value is 0.3499999940395355224609375, and the `double` value is 0.34999999999999997779553950749686919152736663818359375.
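You can see these stored values directly by asking for far more digits than any default conversion would show. A minimal check, assuming Foundation's `String(format:)` and arbitrarily chosen digit counts that are just large enough to expose the full values:

```swift
import Foundation

let f: Float  = 0.35
let d: Double = 0.35

// Requesting many fractional digits exposes the binary approximations.
// (The Float is widened to Double for printing; that widening is exact.)
print(String(format: "%.30f", Double(f)))
// 0.349999994039535522460937500000
print(String(format: "%.60f", d))
// 0.349999999999999977795539507496869191527366638183593750000000
```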
I am not completely conversant with Swift, but I suspect the algorithm it is using to convert a `CGFloat` to decimal (with default options) is something like the following (see the sketch after the list):

- Produce a fixed number of decimal digits, with correct rounding from the actual value of the `CGFloat` to that number of digits, and then suppress any trailing zeroes. For example, if the exact mathematical value is 0.34999999999999997…, and the formatting uses 15 significant digits, the intermediate result is “0.350000000000000”, and then this is shortened to “0.35”.
The way this operates with `float` and `double` is:

- When converted to `double`, .35 becomes 0.34999999999999997779553950749686919152736663818359375. When printed using the above method, the result is “0.35”.
- When converted to `float`, .35 becomes 0.3499999940395355224609375. When printed using the above method, the result is “0.349999994039536”.
Thus, both the `float` and `double` values differ from .35, but the formatting for printing does not use enough digits to show the deviation for the `double` value, while it does use enough digits to show the deviation for the `float` value.
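Requesting more significant digits makes the `double` deviation visible as well; 17 is enough, since that is how many a `double` needs for round-tripping (the use of `String(format:)` and the choice of 17 digits are mine, for illustration):

```swift
import Foundation

// With 17 significant digits, the double nearest .35 no longer prints as 0.35.
print(String(format: "%.17g", 0.35))  // 0.34999999999999998
```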