If you're running Python 3.2+, you have access to `concurrent.futures`, a high-level threading module whose main interface is the `Executor`. It comes in both `ThreadPoolExecutor` and `ProcessPoolExecutor` flavors.
    # adapted example from the docs
    import concurrent.futures

    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
        future_results = [executor.submit(my_func, *args) for _ in range(100000)]
        for future in concurrent.futures.as_completed(future_results):
            result = future.result()
            # do something with each result

Note that `as_completed` yields the `Future` objects themselves, not their return values, so you call `.result()` on each one as it finishes.
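As a self-contained sketch you can actually run, here is the same pattern with a placeholder task (the `load` function and the worker count are illustrative, not from the docs):

```python
import concurrent.futures

def load(n):
    # placeholder standing in for an I/O-bound call (e.g. a URL fetch)
    return n * n

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    # submit() returns a Future immediately; work runs in the pool
    futures = [executor.submit(load, n) for n in range(10)]
    # as_completed() yields each Future as it finishes, in completion order
    results = [f.result() for f in concurrent.futures.as_completed(futures)]

print(sorted(results))
```

Completion order is nondeterministic, which is why the results are sorted before printing.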
A more fleshed-out example of retrieving the results of many `urllib.request` calls can be found in the docs. Note that this is what many threads are good for: I/O-bound tasks. CPU-bound tasks will not benefit from multithreading because of the GIL; for those, use `ProcessPoolExecutor` (which leverages `multiprocessing`).