I am running this code on Python (both 2.7, 3.x):
>>> 1.1 + 2.2
3.3000000000000003
>>> 1.1 + 2.3
3.4
Could someone explain how this works and what is happening?
float in Python is an IEEE 754 double-precision number. A value can be stored exactly only if it can be written as a fraction whose denominator is a power of two, so numbers like 1, 0.5, and 0.25 are exact, but 1.1, 2.2, 2.3, and 3.3 are not: each is stored as the nearest representable double, which matches the decimal value only to about 15-17 significant digits. When you add 1.1 + 2.2, the small representation errors of the two operands both point in the same direction, and the result rounds to a double just above 3.3, which is why you see the trailing ...0003. With 1.1 + 2.3 the errors happen to cancel, so the sum rounds to exactly the same double that the literal 3.4 maps to, and it prints as 3.4.
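As a small sketch of this, you can use the standard decimal module to display the exact binary value each float literal is actually stored as, and compare it with what repr() prints for the sums:

from decimal import Decimal

# Decimal(x) constructed from a float shows the exact value stored in memory.
print(Decimal(1.1))  # slightly above 1.1
print(Decimal(2.2))  # slightly above 2.2
print(Decimal(2.3))  # slightly below 2.3

# 1.1 and 2.2 are both stored slightly high, so their sum rounds to a double
# just above 3.3 and the difference shows up in the output.
print(1.1 + 2.2)  # 3.3000000000000003

# For 1.1 + 2.3 the errors roughly cancel: the sum rounds to the same double
# that the literal 3.4 maps to, so it prints as 3.4.
print(1.1 + 2.3)  # 3.4

The rounding behavior itself is a property of binary floating point (defined by IEEE 754), not of Python; the same results appear in C, Java, JavaScript, and most other languages that use hardware doubles.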