
I wrote a Python script as follows, to test using modulo 1 to extract the decimal part of a float x:

    x = 0.72          # starting value inferred from the sample output below
    while x < 4:      # run a few doublings
        x *= 2
        print("x:", x)
        decimal = x % 1
        print("x%1:", decimal)

This is a sample output:

x: 1.44
x%1: 0.43999999999999995
x: 2.88
x%1: 0.8799999999999999
....
....
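For what it's worth, for positive `x` the `%` here behaves like subtracting the integer part, and a small check (using the 1.44 value from the output above) suggests that the subtraction step itself is not where the digits change:

```python
import math

x = 1.44
# For positive x, x % 1 matches x minus its integer part
assert x % 1 == x - math.floor(x)

print(x % 1)    # 0.43999999999999995
print(x - 1.0)  # prints the same value: the subtraction is exact here
```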

Could someone please explain the reason for the loss of accuracy after applying modulo 1? The 53 bits precision for a float are enough to represent 0.44. What operation (on the IEEE 754 representation, I assume) causes the loss of precision to 0.43999999999999995?
I am using Python 3.6.

It is clear that such errors can occur in floating-point math. But I wonder whether someone here knows which operation triggered this precision loss, i.e. what happened to the initial IEEE 754 representation and why.
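One way to inspect the exact stored values is the standard `decimal` module: since Python 3.2, `Decimal(float)` converts the binary64 value exactly rather than going through the rounded `repr`. A minimal sketch with the 1.44 from the output above:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary64 value stored, not the rounded repr
print(Decimal(1.44))
# 1.439999999999999946709294817992486059665679931640625
print(Decimal(1.44 % 1))
# 0.439999999999999946709294817992486059665679931640625
```

The two expansions differ only in the leading 1, which suggests `% 1` lost nothing extra; the stored value was already slightly below 1.44.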

chatzipr
    Possible duplicate of [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – jonrsharpe Jun 20 '18 at 16:28
  • The closest you can get to 1.44 using double precision is 1.439999999999999946709294817992486059665679931640625. When you subtract 1.0 this becomes 0.439999999999999946709294817992486059665679931640625. Useful tool: https://www.exploringbinary.com/floating-point-converter/ – Paul R Jun 20 '18 at 16:32
  • `The 53 bits precision for a float are enough to represent 0.44` no, there will never be enough bits for 0.44 in binary – phuclv Jun 20 '18 at 17:24

0 Answers