In Python 2.7 I am trying to distribute the computation over a two-dimensional array across all of the cores.
For that I have two arrays at global scope, one to read from and one to write to.
import itertools as it
import multiprocessing as mp
import numpy as np

temp_env = 20
c = 0.25
a = 0.02

arr = np.ones((100, 100))
x = arr.shape[0]
y = arr.shape[1]
new_arr = np.zeros((x, y))

def calc_inside(idx):
    new_arr[idx[0], idx[1]] = ( arr[idx[0], idx[1]]
                                + c * (  arr[idx[0]+1, idx[1]]
                                       + arr[idx[0]-1, idx[1]]
                                       + arr[idx[0], idx[1]+1]
                                       + arr[idx[0], idx[1]-1]
                                       - arr[idx[0], idx[1]] * 4
                                       )
                                - 2 * a * ( arr[idx[0], idx[1]] - temp_env )
                                )

inputs = it.product(range(1, x-1), range(1, y-1))

p = mp.Pool()
p.map(calc_inside, inputs)

# for i in inputs:
#     calc_inside(i)

# plot arrays as surface plot to check values
Assume there is some additional initialization of the array arr with values other than the exemplary 1s, so that the computation (an iterative calculation of the temperature) actually makes sense.
When I use the commented-out for-loop instead of the Pool.map() method, everything works fine and the array actually contains the computed values. When using the Pool, the variable new_arr just stays in its initialized state (meaning it contains only the zeros it was originally initialised with).
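The same effect shows up in a minimal, self-contained sketch (standard library only; the names here are just for illustration):

```python
import multiprocessing as mp

data = [0, 0, 0]  # global list in the parent process

def write_slot(i):
    # runs in a worker process: mutates that process's own copy of data
    data[i] = 42

if __name__ == '__main__':
    p = mp.Pool(2)
    p.map(write_slot, range(3))
    p.close()
    p.join()
    print(data)  # still [0, 0, 0] -- the parent's list is untouched
```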
Q1: Does that mean that Pool() prevents writing to global variables?
Q2: Is there any other way to tackle this problem with parallelization?