I am completely new to parallelisation. I would like to parallelise a nested for-loop and store some intermediate results. The results come from a function f that takes some formal parameters and uses some values from global variables. Following suggestions I found here, I use itertools.product to produce a Cartesian product, which is equivalent to a nested loop. But it doesn't seem to work: the array where I want to store the intermediate results stays unchanged. A minimal working example is attached.
OS: Windows 7 64 Bit
Python Distribution: Canopy Enthought
import itertools
import numpy as np
from multiprocessing import Pool
list1 = range(4, 8)
list2 = range(6, 9)
ary = np.zeros( (len(list1), len(list2)) )
#This is the archetypical function f. It DOES NOT have p2 as a parameter! This
#is intended! In my (more complex) program a function f calls somewhere deep
#down another function that gets its values from global variables. Rewriting
#the code to hand down the variables as parameters would turn my code into a mess.
def f(p1):
    return p1*p2
#This is what I want to parallelize: a nested loop, where the result of f is saved
#in an array element corresponding to the indices of p1 and p2.
#for p1 in list1:
#    for p2 in list2:
#        i = list1.index(p1)
#        j = list2.index(p2)
#        ary[i,j] = f(p1)
#Here begins my attempt to parallelize the nested loop. The function g calls f
#and saves the results. g takes a tuple x, unpacks it, sets the global p2,
#calculates f, and saves the result in the array.
def g(x):
    a, b = x
    i = list1.index(a)
    j = list2.index(b)
    global p2
    p2 = b
    ary[i,j] = f(a)
if __name__ == "__main__":
    #Produces a cartesian product. This is equivalent to a nested loop.
    it = itertools.product(list1, list2)
    pool = Pool(processes=2)
    result = pool.map(g, it)
    print ary
#Result: ary does not change!