
I tried this code in Python and found the result surprising. Can anyone explain it?

x = 0.0
for i in xrange(0, 10):
    x = x + .1

print x          # prints 1.0
print x == 1.0   # prints False

2 Answers


Because of floating-point errors, the true value of x is 0.9999999999999999, not 1.0:

>>> x=0.0
>>> for i in xrange(0,10):
...     x=x+.1
...
>>> x
0.9999999999999999
>>>

Even though it is quite close, 0.9999999999999999 still does not equal 1.0. That is why Python is returning False for x==1.0.
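Because exact `==` comparison fails like this, the usual fix is to compare floats within a tolerance. A minimal sketch in Python 3 syntax (`math.isclose` exists from Python 3.5 on; under Python 2 you would use the `abs()` test instead):

```python
import math

# Accumulate 0.1 ten times; the result is slightly below 1.0
x = 0.0
for _ in range(10):
    x += 0.1

print(x == 1.0)              # False: exact comparison fails
print(abs(x - 1.0) < 1e-9)   # True: within an explicit absolute tolerance
print(math.isclose(x, 1.0))  # True: relative tolerance (Python 3.5+)
```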


The reason that print x shows 1.0 is that print uses str(), which rounds the value for display:

>>> x = 0.9999999999999999
>>> print x
1.0
>>> x
0.9999999999999999
>>>

The true value of x still equals 0.9999999999999999 though.
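You can also ask for more digits explicitly to see what the default display is hiding. A small sketch in Python 3 syntax (note that Python 3's print no longer rounds this value away, since str and repr of floats were unified there):

```python
x = 0.0
for _ in range(10):
    x += 0.1

print(x)                   # Python 3 already shows 0.9999999999999999
print(format(x, '.20f'))   # extra digits reveal the stored binary value
print('{:.2f}'.format(x))  # rounding for display, like Python 2's print did
```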


As a final demonstration, let's remove the for-loop and add the numbers manually:

>>> x = 0.0
>>> x += .1
>>> x
0.1
>>> x += .1
>>> x
0.2
>>> x += .1
>>> x
0.30000000000000004
>>> x += .1
>>> x
0.4
>>> x += .1
>>> x
0.5
>>> x += .1
>>> x
0.6
>>> x += .1
>>> x
0.7
>>> x += .1
>>> x
0.7999999999999999
>>> x += .1
>>> x
0.8999999999999999
>>> x += .1
>>> x
0.9999999999999999
>>>

As you can see, repeatedly adding .1 to 0.0 generates a small but still noticeable floating-point error.
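If the accumulated error matters, the standard library offers ways around it: the decimal module represents 0.1 exactly, and math.fsum compensates for intermediate rounding. A sketch in Python 3 syntax:

```python
import math
from decimal import Decimal

# Decimal('0.1') is exactly one tenth, so ten additions sum to exactly 1
x = Decimal('0')
for _ in range(10):
    x += Decimal('0.1')
print(x == 1)  # True

# math.fsum tracks the rounding error of each intermediate addition
print(math.fsum([0.1] * 10) == 1.0)  # True
```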

>>> x
0.9999999999999999
>>> print x
1.0

It's floating-point precision: x is actually stored as 0.9999999999999999, and print rounds it to 1.0 for display.
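The root cause is that the literal .1 itself cannot be stored exactly in binary. Converting the float to a Decimal exposes the exact value it holds; a sketch in Python 3 syntax using the standard decimal module:

```python
from decimal import Decimal

# Constructing a Decimal directly from a float (not a string)
# shows the exact binary value the literal 0.1 actually stores
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

Adding that slightly-too-large value ten times is what produces the slightly-too-small 0.9999999999999999 seen above.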

gberger