
I was converting Float => CGFloat and it gave me the following result. Why does it come out as "0.349999994039536" after the conversion, while Double => CGFloat works fine?

import Foundation

let float: Float = 0.35
let cgFloatFromFloat = CGFloat(float)
print(cgFloatFromFloat)
// 0.349999994039536

let double: Double = 0.35
let cgFloatFromDouble = CGFloat(double)
print(cgFloatFromDouble)
// 0.35
Anirudha Mahale
  • Possibly Related: https://stackoverflow.com/questions/588004/is-floating-point-math-broken – Ahmad F May 02 '18 at 08:20
  • @Moritz: The fact that floating-point arithmetic is not exact does not alone explain why printing a `CGFloat` converted from `float` shows an inaccuracy while printing a `CGFloat` converted from `double` does not. The complete answer involves the different precisions involved and the algorithm used for converting floating-point to decimal. Please do not promiscuously close floating-point questions as duplicates of [that question](https://stackoverflow.com/questions/588004/is-floating-point-math-broken). – Eric Postpischil May 02 '18 at 11:12

1 Answer


Both converting “.35” to float and converting “.35” to double produce a value that differs from .35, because the floating-point formats use a binary base, so the exact mathematical value must be approximated using powers of two (negative powers of two in this case).

Because the float format uses fewer bits, its result is less precise and, in this case, less accurate. The float value is 0.3499999940395355224609375, and the double value is 0.34999999999999997779553950749686919152736663818359375.
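
If you want to see those exact values from Swift itself, one way (assuming Foundation is available for String(format:)) is to print with far more fractional digits than the default; the precisions below are only chosen to be large enough to show the complete values:

import Foundation

// Enough fractional digits to show the exact stored values; the
// trailing zeroes are only padding from the requested precision.
print(String(format: "%.30f", Double(Float(0.35))))
// 0.349999994039535522460937500000
print(String(format: "%.55f", 0.35))
// 0.3499999999999999777955395074968691915273666381835937500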

I am not completely conversant with Swift, but I suspect the algorithm it is using to convert a CGFloat to decimal (with default options) is something like:

  • Produce a fixed number of decimal digits, correctly rounding the actual value of the CGFloat to that number of digits, and then suppress any trailing zeroes. For example, if the exact mathematical value is 0.34999999999999997…, and the formatting uses 15 significant digits, the intermediate result is “0.350000000000000”, which is then shortened to “0.35”. (A sketch of this follows below.)
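
I am not certain this is literally what Swift does, but the effect can be imitated with a C-style format string (an illustration only, not Swift's actual implementation): "%.15g" rounds to 15 significant digits and suppresses trailing zeroes.

import Foundation

let fromFloat = Double(Float(0.35))  // exactly 0.3499999940395355224609375
let fromDouble = 0.35                // exactly 0.34999999999999997779553950749686919152736663818359375

// "%.15g": round to 15 significant digits, then drop trailing zeroes.
print(String(format: "%.15g", fromFloat))   // 0.349999994039536
print(String(format: "%.15g", fromDouble))  // 0.35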

The way this operates with float and double is:

  • When converted to double, .35 becomes 0.34999999999999997779553950749686919152736663818359375. When printed using the above method, the result is “0.35”.
  • When converted to float, .35 becomes 0.3499999940395355224609375. When printed using the above method, the result is “0.349999994039536”.

Thus, both the float and double values differ from .35, but the formatting for printing does not use enough digits to show the deviation for the double value, while it does use enough digits to show the deviation for the float value.
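
As a quick check (again using a C-style format purely for illustration), asking for a couple more significant digits than the hypothesized 15 makes the double's deviation visible as well:

import Foundation

// With 17 significant digits, neither value reads back as .35.
print(String(format: "%.17g", 0.35))                 // 0.34999999999999998
print(String(format: "%.17g", Double(Float(0.35))))  // 0.34999999403953552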

Eric Postpischil