What you’re running into here is the classic problem of floating point arithmetic: most decimal fractions cannot be represented exactly in binary. See this question for further information about what’s going on and why it happens.
To sum it up, just look at the results from your drange(0.1, 0.9, 0.1):
>>> list(drange(0.1, 0.9, 0.1))
[0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999]
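For context, a drange like the one in the question can be sketched as a simple generator (the actual drange isn’t shown here, so this is an assumption): because each iteration adds the float step to a running value, the representation errors compound as you go.

```python
def drange(start, stop, step):
    # Hypothetical sketch of the question's drange: each iteration adds
    # the float step, so tiny rounding errors accumulate over time.
    while start < stop:
        yield start
        start += step

print(list(drange(0.1, 0.9, 0.1)))
```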
As you can see, you don’t get exact results there. So when you sum them up, you won’t get an exact 1.
Instead, when comparing floats against exact decimal values, you should always allow for some precision loss. One way to do that is to take the absolute difference and check that it’s below some threshold (in this case, I chose 0.00001):
if abs((a + b + c) - 1) < 0.00001:
    print('The sum is likely 1')
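If you’re on Python 3.5 or newer, the standard library already provides this kind of comparison as math.isclose (it uses a relative tolerance of 1e-09 by default, plus an optional abs_tol), so you don’t have to pick a threshold yourself:

```python
import math

# Summing 0.1 ten times does not give exactly 1.0 in binary floating point.
total = sum([0.1] * 10)

print(total == 1)              # False: exact comparison fails
print(math.isclose(total, 1))  # True: tolerant comparison succeeds
```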
So in your case, your code could look like this:
for a in drange(0.1, 0.9, 0.1):
    for b in drange(0.1, 0.9, 0.1):
        for c in drange(0.1, 0.9, 0.1):
            if abs((a + b + c) - 1) < 0.00001 and a > b > c:
                print(a)
                print(b)
                print(c)
And that will safely produce the expected output.