Is it possible to speed up this mapping function by dividing the lookups between multiple processes or threads?
    for k, v in map_dict.items():  # iteritems() in Python 2
        result_arr[k] = input_arr[v]
Note: k, v are tuples as result_arr and input_arr are 2 dimensional.
You may consider Theano or Cython, or, capitalizing on Blckknght's comment, replace the per-element loop with a single vectorized NumPy assignment:

    result_arr[map_dict.keys()] = input_arr[map_dict.values()]

(Note that in Python 3, `dict.keys()` and `dict.values()` return views, so you will first need to convert them into index arrays suitable for fancy indexing.) Splitting the original key list into chunks and handing each chunk to a different worker in a multiprocessing Pool (as I suggested in a comment) is unlikely to improve much on the vectorized version, even for huge sets of points.
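Since the keys and values here are `(row, col)` tuples, a sketch of how the vectorized assignment could be set up in modern NumPy follows; the arrays and `map_dict` contents are hypothetical placeholders, assumed only for illustration:

    import numpy as np

    # Hypothetical data: map_dict sends a (row, col) position in
    # result_arr to a (row, col) position in input_arr.
    input_arr = np.arange(16).reshape(4, 4)
    result_arr = np.zeros_like(input_arr)
    map_dict = {(0, 0): (3, 3), (1, 2): (0, 1), (2, 1): (2, 2)}

    # Split the tuple keys/values into separate row and column index
    # arrays, then perform one vectorized fancy-indexing assignment
    # instead of a Python-level loop.
    keys = np.array(list(map_dict.keys()))    # shape (n, 2)
    vals = np.array(list(map_dict.values()))  # shape (n, 2)
    result_arr[keys[:, 0], keys[:, 1]] = input_arr[vals[:, 0], vals[:, 1]]

This moves the whole lookup into C-level NumPy indexing, which is typically where the real speedup is, rather than in threading or multiprocessing.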