I have the following code:
import requests
from multiprocessing import Pool

def process_url(url):
    print('111')
    r = requests.get(url)
    print('222')  # <-- never even gets here
    return

urls_to_download = list_of_urls  # my list of URLs
PARALLEL_WORKERS = 4

pool = Pool(PARALLEL_WORKERS)
pool.map_async(process_url, urls_to_download)
pool.close()
pool.join()
Every time I run this, it processes the first four items and then hangs. I don't think it's a timeout issue, since the first four URLs download very quickly; it's only after fetching those first four that it hangs indefinitely.
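In case it helps with diagnosis, I'm happy to try variants like the one below (replacing the last four lines of the script above). As I understand it, calling get() on the AsyncResult should make any exception raised inside process_url propagate to the parent instead of being silently swallowed by map_async; the 30-second timeout is just an arbitrary guess on my part:

result = pool.map_async(process_url, urls_to_download)
pool.close()
# get() re-raises any exception from the workers; if the pool is still
# hung after 30 seconds it raises multiprocessing.TimeoutError instead
result.get(timeout=30)
pool.join()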
What do I need to do to remedy this?