I want to parallelise a function that updates a shared dictionary, using Pool instead of Process so that I don't over-allocate too many CPUs.
i.e. can I take this:
def my_function(bar, results):
    results[bar] = bar * 10

def paralell_XL():
    from multiprocessing import Pool, Manager, Process
    manager = Manager()
    results = manager.dict()
    jobs = []
    for bar in foo:
        p = Process(target=my_function, args=(bar, results))
        jobs.append(p)
        p.start()
    for proc in jobs:
        proc.join()
and change the paralell_XL() function to something like this?
def paralell_XL():
    from multiprocessing import Pool, Manager, Process
    manager = Manager()
    results = manager.dict()
    p = Pool(processes=4)
    p.map(my_function, (foo, results))
Trying the above gives the following error:
TypeError: unsupported operand type(s) for //: 'int' and 'DictProxy'
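My guess is that Pool.map only accepts a single iterable, so it is iterating over the tuple (foo, results) and at some point handing the DictProxy itself to my_function as if it were a value of bar. If that's right, maybe I need to bind results separately, e.g. with functools.partial, roughly like the sketch below? (Passing foo in as a parameter and the __main__ guard are my own additions, and I'm not sure this is the intended way to combine Pool with a Manager dict.)

# Sketch of what I think I need: functools.partial pre-binds the shared
# dict, so map only has to iterate over foo.
from functools import partial
from multiprocessing import Pool, Manager

def my_function(bar, results):
    results[bar] = bar * 10

def paralell_XL(foo):
    manager = Manager()
    results = manager.dict()
    with Pool(processes=4) as pool:
        # each worker call gets one item of foo; results is already bound
        pool.map(partial(my_function, results=results), foo)
    return dict(results)

if __name__ == "__main__":
    print(paralell_XL(range(5)))  # e.g. {0: 0, 1: 10, 2: 20, 3: 30, 4: 40}

Is that the right approach, or is there a better way to share the dictionary across a Pool?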
Thanks.