
When I execute this code, it sometimes produces an arithmetical divergence and sometimes does not, even with very close floating-point numbers: numbers of the form 2**n - p/q can produce an acceptable result and sometimes a very fast divergence. I have read some documentation about floating-point arithmetic, but I think the problem lies elsewhere... but where? If anyone has an idea, I would be very happy to understand the issue.

I have tried to execute the code with Python 3.4.5 (32 bits), and I have also tried it online with repl.it and trinket (https://trinket.io/python3/d3f3655168); the results were similar.

#this code illustrates arithmetical divergence with floating point numbers
# on Python 3.4 and 3.6.6

def ErrL(r):
    # starting from s = 1, s = s*(r+1) - r should algebraically stay equal to 1
    s = 1
    L = []
    for k in range(10):
        s = s*(r+1) - r
        L.append(s)
    return L

print(ErrL(2**11-2/3.0)) # this number generates a fast divergence in the for loop
#[0.9999999999997726, 0.9999999995341113, 0.9999990457047261, 0.9980452851802966, -3.003907522359441, -8200.33724163292, -16799071.44994476, -34410100067.30351, -70483354973240.67, -1.4437340543685667e+17]

print(ErrL(2**12-1/3.0)) # this number generates a fast divergence in the for loop
#[0.9999999999995453, 0.9999999981369001, 0.9999923674999991, 0.968732191662184, -127.09378815725313, -524756.5521508802, -2149756770.9781055, -8806836909202.637, -3.607867520470422e+16, -1.4780230608860496e+20]

print(ErrL(2**12-1/10.0)) # this number generates a fast divergence in the for loop
#[0.9999999999995453, 0.9999999981369001, 0.9999923670652606, 0.9687286296662023, -127.11567712053602, -524876.117595124, -2150369062.0754633, -8809847014512.865, -3.609306223376185e+16, -1.478696666654989e+20]

print(ErrL(2**12-1/9.0)) # no problem here
#[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

print(ErrL(2**12-1/11.0)) # no problem here
#[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

What I expect is obviously a vector of ten ones!
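Just to show what I expect: a quick check with exact rational arithmetic (using fractions.Fraction; this helper is only an illustration, not part of the original test) confirms that the recurrence should stay at exactly 1:

from fractions import Fraction

def ErrL_exact(r):
    # same recurrence as ErrL, but with exact rational arithmetic
    s = Fraction(1)
    L = []
    for k in range(10):
        s = s*(r+1) - r
        L.append(s)
    return L

print(ErrL_exact(Fraction(2**11) - Fraction(2, 3)))
# ten times Fraction(1, 1), i.e. exactly 1 at every step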

  • Possible duplicate of [Is floating point math broken?](https://stackoverflow.com/questions/588004/is-floating-point-math-broken) – sanyassh Mar 29 '19 at 18:32
  • Thanks, but I have already read this topic; it's not the same problem, because it seems to work with other Python versions, and when the divergence occurs it is very fast... – Rossignol_fr Mar 29 '19 at 18:40
  • Unless you are using `from __future__ import division`, your results under Python 2 are irrelevant. – chepner Mar 29 '19 at 18:49

2 Answers


When executing this code with Python 2, / between integers means integer division (which is now called // in Python 3).

So, in this case, 2/3, 1/3 and so on are all equal to 0, and what you actually compute is ErrL(2**11), ..., which will always give 1.

With Python 3, 2/3 is a float, and not 0, which explains why you get different results.
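A minimal check (my own illustration) that you can run under both versions:

print(2 / 3)      # 0 under Python 2 (floor division), 0.6666666666666666 under Python 3
print(2 // 3)     # 0 under both: // is floor division in Python 2 and Python 3
print(2.0 / 3.0)  # float division under both versions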

Thierry Lathuille
  • `//` isn't integer division; it's floored division, as it is also defined for `float`s. – chepner Mar 29 '19 at 18:53
  • Thanks for this explanation; so if I want floating-point division with Python 2.7, I must write 2.0/3.0, I suppose. OK for the difference between Python 2.x and 3.x, but what can explain the difference in divergence between the numbers 2^n-1/3 and 2^n-1/9? – Rossignol_fr Mar 29 '19 at 18:54
  • So I have modified the code to be Python 2 compatible, and there is no difference from Python 3: the divergence occurs identically... – Rossignol_fr Mar 29 '19 at 19:01
  • That is to be expected; the problem comes from the limited precision of the floating-point representation. You can have a look at the link given in the comments: https://stackoverflow.com/questions/588004/is-floating-point-math-broken – Thierry Lathuille Mar 29 '19 at 19:05

The reason why the divergence is so fast is mathematics. If you write f(s) = (r+1)*s - r, it becomes evident that the derivative is the constant r+1, so any error in s is multiplied by r+1 at every iteration. According to Wikipedia, 64-bit floats in IEEE 754 representation have a 53-bit mantissa. With r close to 2**11, an error in the last bit therefore needs fewer than 5 steps to grow to about 1. And with numbers close to 2**12 it explodes at the 5th iteration, which is what you obtain.
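To make that concrete, here is a small check (my own addition, not part of the original answer) that prints the error |s - 1| after each step together with its growth factor:

def error_growth(r):
    # the fixed point of s -> s*(r+1) - r is 1, so any error gets multiplied by r+1
    s, prev_err = 1.0, None
    for k in range(6):
        s = s*(r+1) - r
        err = abs(s - 1.0)
        print(k, err, err/prev_err if prev_err else None)
        prev_err = err

error_growth(2**11 - 2/3.0)
# the growth factor stays close to r + 1 (about 2**11), so an initial error of
# roughly 2**-42 reaches order 1 after only 4 or 5 iterations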

Phew, the maths I learned 40 years ago are still not broken...

Serge Ballesta