I have a program whose bottleneck is API calls, so I want the API calls to run at the same time. In pseudocode, this is what I would like:
from multiprocessing import Process, Manager

urls = ['www.example.com/item/1', 'www.example.com/item/2', 'www.example.com/item/3']

def get_stats(url, d):
    data = http.get(url)
    d[data['name']] = data['data']

manager = Manager()
d = manager.dict()

for url in urls:
    p = Process(target=get_stats, args=(url, d))
    p.start()
    p.join()

print(d)
The only thing is that these processes don't seem to run in parallel. Is that because I am calling join() immediately after starting each process, so each iteration blocks until that process finishes? What is the best way to implement this?