On Linux it takes 1.09171080828 secs.
On Windows it takes 2.14042000294 secs.
Code for the benchmark:
import time

def mk_array(num):
    return [x for x in xrange(1, num)]

def run():
    arr = mk_array(10000000)
    x = 0
    start = time.time()
    x = reduce(lambda x, y: x + y, arr)
    done = time.time()
    elapsed = done - start
    return elapsed

if __name__ == '__main__':
    times = [run() for x in xrange(0, 100)]
    avg = sum(times) / len(times)
    print(avg)
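For reference, here is roughly the same measurement done with the stdlib timeit module instead of hand-rolled time.time() calls (just a sketch, I haven't switched my benchmark over to it; under Python 2 timeit uses time.clock() as its timer on Windows, which is much finer-grained than time.time()):

import timeit

setup = "arr = [x for x in xrange(1, 10000000)]"
stmt = "reduce(lambda x, y: x + y, arr)"

# 3 repeats of 5 calls each; report the best per-call time
results = timeit.repeat(stmt, setup=setup, repeat=3, number=5)
print(min(results) / 5)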
I'm aware that the GIL makes Python scripts effectively single-threaded.
The Windows box is my Hyper-V host, but it should be beefy enough to run a single-threaded script at full bore: 12 cores of 2.93 GHz Intel X5670s, 72 GB of RAM, etc.
The Ubuntu VM has 4 cores and 8 GB of RAM.
Both are running Python 2.7.8 64-bit.
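(In case it matters, this is roughly how I'd confirm the interpreter build on each box, using only the stdlib; I haven't pasted the actual output here:)

import sys
import platform

print(sys.version)                 # full version and compiler string
print(platform.platform())         # OS / kernel string
print(platform.architecture()[0])  # should say "64bit" on both machines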
Why is Windows half as fast?
edit: I've lopped two zeros off the array size, and Linux finishes in 0.010593495369 seconds while Windows takes 0.171899962425 seconds. Thanks everyone, curiosity satisfied.