Something that's been driving me crazy with Python... I used to think it was just Windows, but I was wrong. I can run the exact same code multiple times and it executes in wildly different amounts of time. Take the following test code, for example:
import math

def fib(count):
    # Compute the first `count` Fibonacci numbers with Binet's closed-form
    # formula; the values are thrown away -- this is purely a timing workload.
    x = 0
    while x < count:
        a = int(((((1 + math.sqrt(5)) / 2) ** x) - (((1 - math.sqrt(5)) / 2) ** x)) / math.sqrt(5))
        x += 1

if __name__ == '__main__':
    import timeit
    t = timeit.Timer("fib(1250)", setup="from __main__ import fib")
    #print t.timeit(10)
    count = 10000
    # 10,000 samples, each one a single call to fib(1250)
    results = t.repeat(count, 1)
    min = 0xFFFF
    max = 0
    sum = 0
    for i in results:
        i = i * 1000.0              # seconds -> milliseconds
        if i < min: min = i
        if i > max: max = i
        sum += i
    print "Min {:.3f} | Max {:.3f} | Max/Min {:.3f} | Avg {:.3f}".format(min, max, max/min, sum/count)
Basically, it computes the first 1250 Fibonacci numbers (via Binet's closed-form formula) 10,000 times and uses timeit to measure how long each run takes. I then collect those times and find the min, max, average, and the ratio of max to min (the spread, if you will).
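(The aggregation loop at the bottom boils down to this if written as a standalone snippet against the raw list that repeat() returns; note my script above shadows the min/max/sum builtins, and lo/hi here are just my own names:)

times_ms = [r * 1000.0 for r in results]   # convert seconds to milliseconds
lo, hi = min(times_ms), max(times_ms)
print "Min {:.3f} | Max {:.3f} | Max/Min {:.3f} | Avg {:.3f}".format(
    lo, hi, hi / lo, sum(times_ms) / len(times_ms))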
Here are the results (times in milliseconds):
Windows: Min 3.071 | Max 8.903 | Max/Min 2.899 | Avg 3.228
Mac OS: Min 1.531 | Max 3.167 | Max/Min 2.068 | Avg 1.621
Ubuntu: Min 1.242 | Max 10.090 | Max/Min 8.123 | Avg 1.349
So, Linux is the fastest but also has the most variance, by a lot. And all of them can swing pretty wildly between runs: roughly 2x between fastest and slowest on the Mac, 2.9x on Windows, and 8.1x on Linux!
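I haven't yet checked whether those big max values are a handful of one-off spikes or happen constantly; I assume sorting the raw sample list would show that, something like:

# Sort the raw samples (converted to ms) and look at both tails to see
# whether the slow runs are rare spikes or a broad spread.
times_ms = sorted(r * 1000.0 for r in results)
print "5 fastest:", ["%.3f" % v for v in times_ms[:5]]
print "5 slowest:", ["%.3f" % v for v in times_ms[-5:]]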
Is the execution time really varying that much? Is timeit not accurate enough? Is there something else I'm missing? I work a lot on generating animations, and I need the timing to be as consistent as possible.
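One thing I've considered (not sure whether it fixes anything or just hides the jitter) is batching many calls into each sample and only looking at the fastest repeat, roughly like this:

import timeit

t = timeit.Timer("fib(1250)", setup="from __main__ import fib")

# Fewer samples, but each sample times 100 calls to fib(1250), so timer
# resolution and scheduler noise matter less for any single measurement.
samples = t.repeat(repeat=20, number=100)

# Per-call time (ms) of the fastest sample, i.e. the least-disturbed run.
best_ms = min(samples) / 100 * 1000.0
print "Best per-call time: {:.3f} ms".format(best_ms)

Would that give me numbers I can actually trust, or does it just average away whatever is causing the spikes?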