I quite often write simple optimization routines that look something like this:
def createinstance(n):
    while True:
        # create some instance called `instance`
        yield instance

loopno = 100000
n = 100
best = float('inf')  # start at +infinity; starting at 0 would miss any positive value
instances = createinstance(n)
for i in xrange(loopno):
    value = foo(next(instances))  # take one instance per iteration
    if value < best:
        best = value
print best
I would like to be able to use all of the cores on my machine to do this.
A very simple method would just split the range into parts and farm them out to the cores and collect the results at the end.
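A minimal sketch of that first approach, assuming a toy `create_instance` and objective `foo` (both placeholders for whatever the real instance generator and evaluation are): each worker scans its own slice of the range and returns a local minimum, and the results are reduced at the end.

```python
import multiprocessing as mp
import random

def create_instance(n, seed):
    # placeholder: a random n-vector standing in for "create some instance"
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def foo(instance):
    # placeholder objective to minimize
    return sum(instance)

def local_min(args):
    # each worker scans its own contiguous slice of the iteration range
    start, count, n = args
    best = float('inf')
    for seed in range(start, start + count):
        value = foo(create_instance(n, seed))
        if value < best:
            best = value
    return best

def parallel_min(loopno, n, workers=4):
    # split the range into one chunk per worker
    # (assumes workers divides loopno evenly)
    chunk = loopno // workers
    tasks = [(w * chunk, chunk, n) for w in range(workers)]
    with mp.Pool(workers) as pool:
        return min(pool.map(local_min, tasks))  # reduce the per-core minima

if __name__ == '__main__':
    print(parallel_min(100000, 100))
```

The drawback is the one you'd expect: if one chunk happens to be slower to evaluate, the other cores sit idle after finishing theirs.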
A better method would have the cores request a batch of instances when they are idle.
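`multiprocessing.Pool` can give you roughly that behaviour via the `chunksize` argument to `imap_unordered`: idle workers pull the next batch of work items as they finish, so faster workers automatically take on more. A sketch, again with a hypothetical `foo` (here a cheap integer hash standing in for "create an instance and evaluate it"):

```python
import multiprocessing as mp

def foo(seed):
    # hypothetical objective: a deterministic integer hash used as a
    # stand-in for creating and evaluating one instance
    return (seed * 2654435761) % 1000003

def parallel_min_batched(loopno=100000, workers=4, batch=1000):
    # imap_unordered hands each idle worker the next `batch` indices,
    # so load-balancing happens automatically
    with mp.Pool(workers) as pool:
        return min(pool.imap_unordered(foo, range(loopno), chunksize=batch))

if __name__ == '__main__':
    print(parallel_min_batched())
```

Tuning `batch` trades off inter-process communication overhead (small batches) against load imbalance (large batches).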
What's a nice way to solve this problem for maximum efficiency in Python? Maybe this is standard enough to be a community question?
This question seems to be an even simpler version of Solving embarassingly parallel problems using Python multiprocessing.