I am trying to understand why Swift is losing precision on a Float during simple addition. Here is a minimal example that starts with 127.1 and repeatedly adds 0.1:
var value: Float = 127.1
for _ in 0...5 {
    print(value)
    value += 0.1
}
and this is the output I get:
127.1
127.2
127.299995
127.399994
127.49999
127.59999
when what I was expecting is this:
127.1
127.2
127.3
127.4
127.5
127.6
Can someone explain why this happens, and what, if anything, I can do to maintain the single-decimal precision I need?
NOTE: I know I can format it as a string with the precision I need, but that doesn't help when I need to perform further math and can't rely on the precision of the underlying number.
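If exact decimal behavior is what matters, one route I've been looking at is Foundation's Decimal type, which stores base-10 digits and so can represent 0.1 exactly. A minimal sketch, using the string initializer since a float literal would pass through a binary Double first:

import Foundation

var value = Decimal(string: "127.1")!   // exact decimal value, not a binary approximation
let step = Decimal(string: "0.1")!
for _ in 0...5 {
    print(value)                        // 127.1, 127.2, 127.3, ...
    value += step
}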
UPDATE: I did find an interesting article about how computers handle floating-point math, which is somewhat of an answer to "why" it happens, but I'm still curious what others think is the best way to get the result I need.
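The short version, as I understand it: 0.1 has no exact binary representation, so the nearest representable Float is stored instead, and that tiny error compounds with each addition. Printing the stored value with extra digits makes this visible:

import Foundation

let value: Float = 127.1
// The literal 127.1 is rounded to the nearest Float the moment it is stored:
print(String(format: "%.8f", value))    // 127.09999847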
UPDATE: The best solution I've found is to convert to an Int to do the math, then convert back to a Float:
var value: Float = 127.1
for _ in 0...9 {
    print(value)
    // Round before converting: Int(value * 10) alone truncates toward zero,
    // so a product like 1270.9999 would become 1270 and drop a whole tenth.
    var valueAsInt = Int((value * 10).rounded())
    valueAsInt += 1
    value = Float(valueAsInt) / 10
}
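Along the same lines, it may be cleaner to keep the running value in tenths as an Int for the whole loop and convert to Float only for display, so no binary rounding error ever accumulates. A sketch of that variant:

var tenths = 1271                  // 127.1 expressed in tenths
for _ in 0...9 {
    print(Float(tenths) / 10)      // 127.1, 127.2, ..., 128.0
    tenths += 1
}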