1

n=iterations

For some reason this code needs many more iterations than other implementations to get an accurate result. Can anyone explain why this is happening? Thanks.

    n, s, x = 1000, 1, 0
    for i in range(0, n, 2):
        x += s * (1 / (1 + i)) * 4
        s = -s
    print(x)
Mike L

2 Answers

3

As I mentioned in a comment, the only way to speed this up is to transform the sequence. Here's a very simple way, related to the Euler transformation (see roippi's link): for the sum of an alternating sequence, create a new sequence consisting of the average of each pair of successive partial sums. For example, given the alternating sequence

a0 -a1 +a2 -a3 +a4 ...

where all the as are positive, the sequence of partial sums is:

s0=a0  s1=a0-a1  s2=a0-a1+a2  s3=a0-a1+a2-a3  s4=a0-a1+a2-a3+a4 ...

and then the new derived sequence is:

(s0+s1)/2  (s1+s2)/2  (s2+s3)/2  (s3+s4)/2 ...

That can often converge faster, and the same idea can be applied to this derived sequence: create yet another sequence by averaging its terms. This can be carried on indefinitely. Here I'll take it one more level:

from math import pi

def leibniz():
    from itertools import count
    s, x = 1.0, 0.0
    for i in count(1, 2):
        x += 4.0*s/i
        s = -s
        yield x

def avg(seq):
    a = next(seq)
    while True:
        b = next(seq)
        yield (a + b) / 2.0
        a = b

base = leibniz()
d1 = avg(base)
d2 = avg(d1)
d3 = avg(d2)

for i in range(20):
    x = next(d3)
    print("{:.6f} {:8.4%}".format(x, (x - pi)/pi))

Output:

3.161905  0.6466%
3.136508 -0.1619%
3.143434  0.0586%
3.140770 -0.0262%
3.142014  0.0134%
3.141355 -0.0076%
3.141736  0.0046%
3.141501 -0.0029%
3.141654  0.0020%
3.141550 -0.0014%
3.141623  0.0010%
3.141570 -0.0007%
3.141610  0.0005%
3.141580 -0.0004%
3.141603  0.0003%
3.141585 -0.0003%
3.141599  0.0002%
3.141587 -0.0002%
3.141597  0.0001%
3.141589 -0.0001%

So after just 20 terms, we've already got pi to about 6 significant digits. The base Leibniz sequence is still only correct to about 2 digits:

>>> next(base)
3.099944032373808

That's an enormous improvement. A key point here is that the partial sums of the base Leibniz sequence give approximations that alternate between "too big" and "too small". That's why averaging them gets closer to the truth. The same (alternating between "too big" and "too small") is also true of the derived sequences, so averaging their terms also helps.
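This bracketing behaviour is easy to check numerically. Here's a small sketch of my own (not part of the original answer) verifying that successive partial sums land on opposite sides of pi, and that each pairwise average is closer to pi than the worse of the two sums it came from:

```python
from math import pi

# Partial sums of the Leibniz series: 4/1 - 4/3 + 4/5 - ...
partials = []
x, s = 0.0, 1.0
for i in range(1, 12, 2):
    x += 4.0 * s / i
    s = -s
    partials.append(x)

# Successive partial sums alternate between "too big" and "too small"
signs = [p > pi for p in partials]
assert signs == [True, False] * 3

# ... so each pairwise average beats the worse of the two sums
averaged = [(a + b) / 2.0 for a, b in zip(partials, partials[1:])]
for a, b, m in zip(partials, partials[1:], averaged):
    assert abs(m - pi) < max(abs(a - pi), abs(b - pi))
```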

That's all hand-wavy, of course. Rigorous justification probably isn't something you're interested in ;-)

Tim Peters
  • Great idea and interesting script! I will try to learn from it. Could you look at my question https://stackoverflow.com/questions/56488146/subsequence-from-madhava-leibniz-series-as-fast-as-possible-with-python? Thank you! – Alex Lopatin Jun 07 '19 at 17:48
2

That is because you are using the Leibniz series, which is known to converge very (very) slowly.
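To see just how slow, here's a quick numeric sketch (mine, not part of the answer; the helper name `leibniz_partial` is made up): the error after summing n terms shrinks roughly like 1/n, so each additional correct digit costs about ten times as many iterations.

```python
from math import pi

def leibniz_partial(n):
    """Sum the first n terms of the Leibniz series 4/1 - 4/3 + 4/5 - ..."""
    x, s = 0.0, 1.0
    for i in range(n):
        x += 4.0 * s / (2 * i + 1)
        s = -s
    return x

# The error decays like ~1/n: err * n stays near 1, so ten times
# the work buys only one extra correct digit.
for n in (10, 100, 1000, 10000):
    err = abs(leibniz_partial(n) - pi)
    assert 0.5 < err * n < 1.5
```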

roippi
  • I know it is the Leibniz formula. The question is whether this code can be written differently to get better accuracy in fewer iterations (using Leibniz). – Mike L Oct 23 '13 at 19:20
  • @BlacklightShining this is a *direct* answer to the question as stated: `for some reason this code will need a lot more iterations for more accurate result from other codes, Can anyone explain why this is happening?` – roippi Oct 24 '13 at 00:39
  • Gah! Wrong post. Oops. – Blacklight Shining Oct 24 '13 at 00:39
  • @MikeL I doubt it. If you're using the exact same mathematical algorithm, you'll get the exact same accuracy for a given number of iterations. – Blacklight Shining Oct 24 '13 at 00:40
  • @MikeL, follow the link roippi gave you. Read the "Inefficiency" section. It briefly describes several *transformations* you can use to accelerate convergence. The Leibniz series as-is is hopelessly inefficient, and no method of merely rearranging the computations it makes will be of any help. – Tim Peters Oct 24 '13 at 01:59