>>> .1+.1+.1+.1 ==.4
True
>>> .1+.1+.1 ==.3
False
>>>
The above is output from the Python interpreter. I understand that floating-point
arithmetic is done in base 2 and the values are stored internally as binary, and
that this is why calculations like the above give surprising results.
Now I found that 0.4 = .011(0011) in binary [the digits inside () repeat
infinitely]. Since this cannot be stored exactly, an approximate value
is stored instead.
Similarly, 0.3 = .01(0011).
So neither 0.4 nor 0.3 can be stored exactly internally.
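These repeating expansions can be reproduced with a short script (this is my own helper for illustration, not a standard library function): it performs base-2 long division on the fraction, emitting one bit per step.

```python
def binary_expansion(numerator, denominator, digits=16):
    """Expand numerator/denominator in base 2, one digit at a time."""
    bits = []
    for _ in range(digits):
        numerator *= 2
        bits.append(str(numerator // denominator))  # next binary digit
        numerator %= denominator                    # remainder carries on
    return "0." + "".join(bits)

print(binary_expansion(4, 10))  # 0.4 -> 0.0110011001100110...
print(binary_expansion(3, 10))  # 0.3 -> 0.0100110011001100...
```

Both expansions settle into the repeating block 0011, confirming that neither terminates in base 2.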
But then why does Python return True for the first comparison and False for the
second, when neither value can be represented exactly?
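For reference, the exact bits of each stored double can be inspected with float.hex() (a standard method on Python floats), which makes the difference between the two comparisons visible:

```python
# float.hex() shows the exact bit pattern of the stored double
print((.1 + .1 + .1 + .1).hex())  # '0x1.999999999999ap-2'
print((.4).hex())                 # '0x1.999999999999ap-2'  -> identical bits
print((.1 + .1 + .1).hex())       # '0x1.3333333333334p-2'
print((.3).hex())                 # '0x1.3333333333333p-2'  -> last digit differs
```

The first pair is bit-for-bit identical, while the second pair differs in the last bit of the significand.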
_______________________________________________________________________________
I did some research and found the following (after importing Decimal from the decimal module):
>>> from decimal import Decimal
>>> Decimal(.4)
Decimal('0.40000000000000002220446049250313080847263336181640625')
>>> Decimal(.1+.1+.1+.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
>>> Decimal(.1+.1+.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
This probably explains the results, assuming that Decimal is showing the exact
value of the float stored underneath: each addition rounds to the nearest
representable double, and the four-term sum happens to land on exactly the same
double that is stored for 0.4, while the three-term sum lands on a double that
differs from the one stored for 0.3 — so == returns True in the first case and
False in the second.
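Since == on floats compares the exact stored doubles, the usual fixes are to compare with a tolerance, or to construct Decimal from strings so the arithmetic is carried out in exact decimal. A minimal sketch (assuming Python 3, where math.isclose is available):

```python
import math
from decimal import Decimal

# Option 1: compare with a tolerance instead of exact equality
print(math.isclose(.1 + .1 + .1, .3))  # True

# Option 2: build Decimals from strings so 0.1 really means 0.1
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3'))  # True
```

Note that Decimal('0.1') is the true decimal one-tenth, whereas Decimal(.1) (as used above) is the exact value of the binary double nearest to 0.1.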