For the past few days I've been trying to get a better understanding of computational complexity and of how to improve Python code. To that end I've tried out different functions for calculating Fibonacci numbers, comparing how long the script runs when I make small changes.
I'm calculating Fibonacci numbers using a list, appending the sum of the list's last two elements (indices -1 and -2).
I was puzzled to find that if I add a .pop() in the loop, deleting the elements of the list I no longer need, my script runs significantly faster. I don't see why. On each pass through the loop the computer does one extra thing, so my untrained intuition suggests this should increase the running time. Is 'looking up' the last element of the list really that much slower when the list is very long?
Here is my code:
import time
import numpy as np

def fib_stack1(n):
    """ Original function """
    assert type(n) is int, 'Expected an integer as input.'
    if n < 2:
        return n
    else:
        stack = [0, 1]
        for i in range(n-1):
            stack.append(stack[-1] + stack[-2])
        return stack[-1]

def fib_stack2(n):
    """ Modified function """
    assert type(n) is int, 'Expected an integer as input.'
    if n < 2:
        return n
    else:
        stack = [0, 1]
        for i in range(n-1):
            stack.append(stack[-1] + stack[-2])
            ### CHANGE ###
            stack.pop(-3)
            ##############
        return stack[-1]

rec1 = []
rec2 = []
for _ in range(10):
    t1 = time.time()
    fib_stack1(99999)
    t2 = time.time()
    rec1.append(t2-t1)
    t1 = time.time()
    fib_stack2(99999)
    t2 = time.time()
    rec2.append(t2-t1)
print(np.array(rec1).mean())
print(np.array(rec2).mean())
The output is the following:
# Original
0.26878631115
# Modified
0.145034956932
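In case it's relevant: I also sketched a third variant that drops the list entirely and keeps only the last two values in plain variables (the function name fib_pair is just my own label). If memory growth of the list is what matters, I'd expect it to behave like the .pop() version:

```python
import time

def fib_pair(n):
    """Same recurrence as above, but keeping only the last two
    values instead of growing a list."""
    assert type(n) is int, 'Expected an integer as input.'
    if n < 2:
        return n
    a, b = 0, 1
    for _ in range(n - 1):
        # Shift the window: b becomes the new Fibonacci number.
        a, b = b, a + b
    return b

t1 = time.time()
fib_pair(99999)
t2 = time.time()
print(t2 - t1)
```

It returns the same values as fib_stack1 (e.g. fib_pair(10) == fib_stack1(10) == 55) and, on my machine, runs in roughly the same time as the .pop() version.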