
I am reading from a txt file and populating a Core Data entity.

At some point I have read the value from the txt file and the value is @"0.9".

Now I assign it to a CGFloat.

CGFloat value = (CGFloat)[stringValue floatValue];

The debugger shows the value as 0.89999997615814208!?

Why? Is this a bug? Even if it thinks [stringValue floatValue] is a double, casting it to CGFloat should not produce that abnormality.
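
For reference, here is a minimal sketch that reproduces it (assuming a 64-bit target, where CGFloat is typedef'd to double; the digits in the comment are approximate):

#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

int main(void) {
    @autoreleasepool {
        NSString *stringValue = @"0.9";
        CGFloat value = (CGFloat)[stringValue floatValue];

        // Printing with extra digits shows the same thing the debugger does.
        NSLog(@"%.17f", value);   // roughly 0.89999997615814209
    }
    return 0;
}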

Duck

3 Answers


The binary floating point representation used for float can't store the value 0.9 exactly, so it stores the closest representable value instead.

It's similar to decimal numbers: It's impossible to represent one third in decimal (instead we use an approximate representation like 0.3333333).
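
A quick sketch to see the closest representable values for yourself (the digits in the comments are approximate):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Neither type can hold 0.9 exactly; each stores the nearest
        // value its precision allows.
        float  f = 0.9f;
        double d = 0.9;

        NSLog(@"float : %.20f", f);  // roughly 0.89999997615814208984
        NSLog(@"double: %.20f", d);  // roughly 0.90000000000000002220
    }
    return 0;
}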

Nikolai Ruhe

Because to store a float in binary you can only approximate it by summing fractions like 1/2, 1/4, 1/8, etc. For 0.9 (and many other values) there is no exact representation that can be constructed by summing fractions like this. Whereas if the value were, say, 0.25, you could represent that exactly as 1/4.
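
A small sketch of the same point, comparing the parsed float against the corresponding double literal:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // 0.25 is exactly 1/4, one of those binary fractions, so it is
        // stored exactly and survives promotion to double unchanged.
        float quarter = [@"0.25" floatValue];
        NSLog(@"0.25 exact? %d", quarter == 0.25);   // 1

        // 0.9 has no finite binary expansion, so float and double each
        // round it to a different nearby value and the comparison fails.
        float pointNine = [@"0.9" floatValue];
        NSLog(@"0.9 exact?  %d", pointNine == 0.9);  // 0
    }
    return 0;
}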

mclaassen

Floating point imprecision; check out http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Basically it comes down to how floating point works: the type doesn't store your decimal digits directly, it stores two pieces, a significand and an exponent, and the actual value is the significand multiplied by a power of two. The closest combination of those two pieces to 0.9 is not exactly 0.9, so the value you read back is not exactly what you assigned.
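
A rough illustration of that decomposition (assuming IEEE 754 single precision, which is what float uses here; the digits in the comment are approximate):

#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    @autoreleasepool {
        float value = [@"0.9" floatValue];

        // frexpf splits the stored number into a fraction in [0.5, 1)
        // and a power of two; scaling the fraction by 2^24 recovers the
        // 24-bit integer significand that single precision actually keeps.
        int exp = 0;
        float frac = frexpf(value, &exp);
        long long significand = (long long)(frac * (1 << 24));

        NSLog(@"0.9 is stored as %lld * 2^%d = %.17f",
              significand, exp - 24,
              (double)significand * pow(2.0, exp - 24));
        // roughly: 0.9 is stored as 15099494 * 2^-24 = 0.89999997615814209
    }
    return 0;
}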

Lochemage