I have seen a number of questions on using Python's multiprocessing,
but I have not been able to quite wrap my head around how to apply it to my code.
Suppose I have an NxM array and a function f
that compares the value at (i, j) to every other pixel. So, in essence, I compute NxM values at every point on the grid.
My machine has four cores. I envision splitting the input locations into four quadrants and feeding each quadrant to a different process (I sketch what I have in mind after my current code below).
So, schematically, my current code is:
import numpy as np

def f(x, array):
    """
    For input location x, do some operation
    with every other array value. As a simple example,
    this could be the product of array[x] with all other
    array values.
    """
    return array[x] * array

if __name__ == '__main__':
    array = some_array                      # placeholder for my actual NxM NumPy array
    for x in np.ndindex(array.shape):       # x is an (i, j) location
        returnval = f(x, array)
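For concreteness, here is a minimal sketch of the kind of splitting I have in mind, using multiprocessing.Pool. The small example array, the worker helper, and the four-way index chunking are just assumptions I made up for illustration, not something I know to be the right approach:

import numpy as np
from multiprocessing import Pool

def f(x, array):
    # Same operation as above: combine array[x] with every other value.
    return array[x] * array

def worker(args):
    # Each worker gets a chunk of (i, j) locations plus a copy of the
    # array (pickled to the child process) and applies f to each location.
    indices, array = args
    return [f(x, array) for x in indices]

if __name__ == '__main__':
    array = np.arange(12).reshape(3, 4)      # stand-in for some_array
    locations = list(np.ndindex(array.shape))
    # Split the locations into four roughly equal chunks, one per core.
    chunks = [locations[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        results = pool.map(worker, [(chunk, array) for chunk in chunks])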
What would be the best strategy for parallelizing a problem like this with multiprocessing?