Here is my code:

s = 0
for i in range(10):
    s += 0.1
print(s)

The output is 0.9999999999999999. Why does it not output 0.9?
Consider this fraction:

1/2

What will its value be? 0.5. Simple, right?
Now for 1/3, what will the value be?

0.3

Seems okay, because

3 x 0.3 = 0.9

More accurately,

3 x 0.33 = 0.99

or, even more accurately,

3 x 0.333 = 0.999

Even if you do the same for, say, 10 decimal places, you still get

3 x 0.3333333333 = 0.9999999999

As you can see, we get closer and closer to the value 1, but we are never exactly equal to it. More digits means a more accurate approximation, never an exact value.
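If you want to see this progression in code, here is a minimal sketch (my own addition, not part of the original answer) using Python's decimal module, which does exact decimal arithmetic:

from decimal import Decimal

# Exact decimal arithmetic: each longer approximation of 1/3,
# multiplied by 3, creeps toward 1 but never reaches it.
for digits in (1, 2, 3, 10):
    approx = Decimal("0." + "3" * digits)   # 0.3, 0.33, 0.333, 0.3333333333
    print(f"3 x {approx} = {3 * approx}")

This prints 3 x 0.3 = 0.9, 3 x 0.33 = 0.99, and so on, never an exact 1.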
However, you won't normally need that much precision in your simple programs. Here is what the Python docs say about it:

"That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead."

But:

"It's important to realize that this is, in a real sense, an illusion: the value in the machine is not exactly 1/10, you're simply rounding the display of the true machine value. This fact becomes apparent as soon as you try to do arithmetic with these values."
What this basically means is that just because you see

>>> 0.1
0.1

in the interpreter doesn't mean it is exactly 0.1. The true value in the machine is something like

0.1000000000000000055511151231257827021181583404541015625
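You can check this yourself; here is a small sketch (my addition), assuming the usual IEEE-754 double-precision floats that CPython uses:

from decimal import Decimal

# Decimal(0.1) converts the float's exact binary value to decimal,
# so it shows every digit that Python's default display rounds away.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(format(0.1, ".55f"))  # same idea: force 55 digits after the point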
Try this in your interpreter,
>>> 0.1
0.1
>>> 0.2
0.2
>>> 0.1+0.2
0.30000000000000004
>>>
So this is what is happening to all those intermediate sums, 0.1, 0.2, 0.30000000000000004, and so on, until you finally get your 0.9999999999999999.
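If you actually need exact decimal behaviour here, one option (my own sketch, not from the original answer) is the decimal module:

from decimal import Decimal

# The same loop with Decimal("0.1") accumulates exactly,
# so ten additions give exactly 1.0 instead of 0.9999999999999999.
s = Decimal("0")
for i in range(10):
    s += Decimal("0.1")
print(s)  # 1.0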
For more reference, see this YouTube video.
Your code iterates from 0 to 9, 10 iterations in total. Mathematically that gives 10 x 0.1 = 1, but because of Python's floating-point accuracy, adding 0.1 ten times results in 0.9999999999999999.
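If you need the mathematically expected result, here is a short sketch (my addition, not part of this answer) showing two common ways to handle it:

import math

# sum() accumulates the per-step rounding error; math.fsum() compensates
# for it, and math.isclose() is the usual way to compare floats anyway.
print(sum([0.1] * 10))                     # 0.9999999999999999
print(math.fsum([0.1] * 10))               # 1.0
print(math.isclose(sum([0.1] * 10), 1.0))  # True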