
I am surprised by this Python behavior but can't understand why: I am not able to find 0.3 in a NumPy array.

>>> import numpy as np
>>> Lambdas = np.arange(0.0, 1.05, 0.05)
>>> print(Lambdas)
[0.   0.05 0.1  0.15 0.2  0.25 0.3  0.35 0.4  0.45 0.5  0.55 0.6  0.65
 0.7  0.75 0.8  0.85 0.9  0.95 1.  ]
>>> print(0.3 in Lambdas)
False
>>> print(0.30 in Lambdas)
False
>>> print(0.1 in Lambdas)
True
>>> print(0.4 in Lambdas)
True
>>> print(1 in Lambdas)
True
>>> print(1.0 in Lambdas)
True
>>> print(0.1 in Lambdas)
True
  • Because the value in `Lambdas` is not exactly `0.3` but instead something like `0.300000000004`; that's just how floating point numbers work. – ruohola Jun 01 '19 at 21:53

2 Answers


According to http://0.30000000000000004.com/

Your language isn't broken, it's doing floating point math. Computers can only natively store integers, so they need some way of representing decimal numbers. This representation comes with some degree of inaccuracy. That's why, more often than not, .1 + .2 != .3.

Why does this happen? It's actually pretty simple. A base 10 system (like ours) can only cleanly express fractions whose denominators use prime factors of the base. The prime factors of 10 are 2 and 5, so 1/2, 1/4, 1/5, 1/8, and 1/10 can all be expressed cleanly, while 1/3, 1/6, and 1/7 are all repeating decimals because their denominators use a prime factor of 3 or 7.

In binary (base 2), the only prime factor is 2, so only fractions whose denominator contains nothing but 2 as a prime factor can be expressed cleanly: 1/2, 1/4, and 1/8 are exact, while 1/5 or 1/10 are repeating. So 0.1 and 0.2 (1/10 and 1/5), while clean decimals in a base 10 system, are repeating fractions in the base 2 system the computer operates in. When you do math on these repeating fractions, you end up with leftovers which carry over when you convert the computer's base 2 (binary) number into a more human-readable base 10 number.
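Both effects are easy to check in a plain Python session; the short sketch below (standard library only) shows the 0.1 + 0.2 case and uses decimal.Decimal to reveal the exact value a float actually stores:

from decimal import Decimal

# Neither 0.1 nor 0.2 is exactly representable in binary, so their sum
# carries a small rounding error and is a different double than 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# 1/2 only has 2 as a prime factor in its denominator, so it is exact in binary...
print(Decimal(0.5))  # 0.5
# ...while 1/10 has a factor of 5, so the stored double is merely the closest
# representable value, not exactly one tenth.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625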

– Mehdi

As per Mehdi's comprehensive answer and ruohola's comment, the floating point value stored in the array is most likely not exactly 0.3.
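A quick way to confirm this, assuming the same arange call as in the question, is to print the single element that displays as 0.3 in the array summary (index 6):

import numpy as np

Lambdas = np.arange(0.0, 1.05, 0.05)
# The array's summary print rounds for display; printing the element on
# its own shows the value that is actually stored.
print(Lambdas[6])         # 0.30000000000000004
print(Lambdas[6] == 0.3)  # False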

You can try using numpy.isclose() (numpy.allclose() is its whole-array counterpart), setting the tolerance for the comparison with the atol or rtol arguments – see the NumPy docs.
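For instance, a tolerant membership test could look like the sketch below (the tolerance value is only illustrative):

import numpy as np

Lambdas = np.arange(0.0, 1.05, 0.05)
# isclose() compares every element against the target within the given
# tolerance; any() collapses that into a single membership answer.
print(np.isclose(Lambdas, 0.3, atol=1e-8).any())   # True
print(np.isclose(Lambdas, 0.31, atol=1e-8).any())  # False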

– Dagorodir