What is the difference between
.3+.3+.3+.1 == 1
which returns false, and
.3+.3+.1+.3 == 1
which returns true? This applies to Python as well.
This is due to floating point arithmetic. You can use the ieee754 function to see the floating point representation.
>> ieee754(.3+.3+.3+.1)
ans =
0011111111101111111111111111111111111111111111111111111111111111
>> ieee754(.3+.3+.1+.3)
ans =
0011111111110000000000000000000000000000000000000000000000000000
This is a general consequence of finite-precision arithmetic. The floating-point numbers representable at a given precision form only a small subset of the real numbers, so only values that coincide exactly with one of those representations can be stored without error. Unless a number is exactly equal to its finite-precision representation, the value actually held in memory is only an approximation, and those rounding errors propagate through subsequent arithmetic. Here the order of the additions determines how each intermediate result is rounded, so one ordering happens to round exactly to 1 while the other falls just short. See a numerical analysis text for a fuller and more precise treatment of this kind of thing.
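To see that the inputs themselves are already approximations, you can (in Python, as a quick sketch) ask the decimal module for the exact value stored for each literal:

>>> from decimal import Decimal
>>> Decimal(.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')
>>> Decimal(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')

Neither 0.3 nor 0.1 is exactly representable in binary, so every term in the sums above already carries a small rounding error before any addition takes place.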