
I read the docs trying to get a basic understanding, but they only say that ProcessPoolExecutor allows you to side-step the Global Interpreter Lock, which I think is a way to lock a variable or function so that parallel processes don't update its value at the same time.

What I am looking for is when to use ProcessPoolExecutor, when to use ThreadPoolExecutor, and what I should keep in mind while using each approach.

ananya

2 Answers


ProcessPoolExecutor runs each of your workers in its own separate child process.

ThreadPoolExecutor runs each of your workers in separate threads within the main process.
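
Both classes implement the same concurrent.futures.Executor interface, so switching between them is usually a one-line change. A minimal sketch (the square function and the pool sizes are just placeholders):

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def square(n):
    # trivial placeholder task; your real workload decides which pool fits
    return n * n

def run_both():
    with ThreadPoolExecutor(max_workers=4) as pool:    # threads in this process
        thread_results = list(pool.map(square, range(5)))
    with ProcessPoolExecutor(max_workers=4) as pool:   # separate child processes
        process_results = list(pool.map(square, range(5)))
    return thread_results, process_results

if __name__ == "__main__":
    # the __main__ guard matters: ProcessPoolExecutor may re-import this
    # module in each child process
    print(run_both())  # ([0, 1, 4, 9, 16], [0, 1, 4, 9, 16])
```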

The Global Interpreter Lock (GIL) doesn't just lock a variable or function; it locks the entire interpreter. This means that every builtin operation, including things like listodicts[3]['spam'] = eggs, is automatically thread-safe.
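
For example, list.append is effectively atomic in CPython because the GIL is held for the whole operation, so many threads can append to one shared list without any explicit lock (a sketch; the counts are arbitrary):

```python
import threading

shared = []

def worker(n):
    for i in range(n):
        # list.append is effectively atomic under the GIL: no Lock needed
        shared.append(i)

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared))  # 4000 -- no appends were lost
```

Note that this only covers single operations; a read-modify-write like x += 1 is several operations, and still needs a lock even with the GIL.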

But it also means that if your code is CPU-bound (that is, it spends its time doing calculations rather than, e.g., waiting on network responses), and not spending most of its time in an external library designed to release the GIL (like NumPy), only one thread can own the GIL at a time. So, if you've got 4 threads, even if you have 4 or even 16 cores, most of the time, 3 of them will be sitting around waiting for the GIL. So, instead of getting 4x faster, your code gets a bit slower.

Again, for I/O-bound code (e.g., waiting on a bunch of servers to respond to a bunch of HTTP requests you made), threads are just fine; it's only for CPU-bound code that this is an issue.
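
You can see the overlap directly by simulating I/O with time.sleep, which releases the GIL while waiting, just like a real network read would (the durations below are arbitrary):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(seconds):
    # stands in for an HTTP request; sleep releases the GIL while "waiting"
    time.sleep(seconds)
    return seconds

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_request, [0.2, 0.2, 0.2, 0.2]))
elapsed = time.perf_counter() - start
# the four 0.2 s waits overlap, so the total is about 0.2 s, not 0.8 s
print(f"{elapsed:.2f}s for {len(results)} requests")
```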

Each separate child process has its own separate GIL, so this problem goes away—even if your code is CPU-bound, using 4 child processes can still make it run almost 4x as fast.
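
A sketch of that CPU-bound case (the prime-counting function is just a stand-in for any pure-Python number crunching):

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    # deliberately slow, pure-Python CPU work (no GIL-releasing library)
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # each chunk runs in its own process with its own GIL, so on a
    # 4-core machine the four chunks can genuinely run in parallel
    with ProcessPoolExecutor(max_workers=4) as pool:
        totals = list(pool.map(count_primes, [20_000] * 4))
    print(totals)
```

Swapping in ThreadPoolExecutor here would produce the same answers, but in roughly serial wall-clock time, for the GIL reasons above.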

But child processes don't share any variables. Normally, this is a good thing—you pass (copies of) values in as the arguments to your function, and return (copies of) values back, and the process isolation guarantees that you're doing this safely. But occasionally (usually for performance reasons, but also sometimes because you're passing around objects that can't be copied via pickle), this is not acceptable, so you either need to use threads, or use the more complicated explicit shared data wrappers in the multiprocessing module.
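
If you do need shared state across processes, multiprocessing provides wrappers like Value, Array, and Manager. A minimal sketch with a shared counter:

```python
import multiprocessing as mp

def add_many(counter, n):
    for _ in range(n):
        # counter.value += 1 is a read-modify-write, so it needs the lock
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    counter = mp.Value("i", 0)  # a C int living in shared memory
    procs = [mp.Process(target=add_many, args=(counter, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # 4000
```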

abarnert
  • Regarding threading, why is it good for lots of HTTP requests? E.g., if I have 3 requests and 3 threads, and the CPU keeps switching between the threads, making a little progress on each, how is that better than fetching all the data from each request one by one? To rephrase: when a thread has sent a web request and is sleeping, does it still receive the answer while the CPU is running another thread? If so, how is that possible? – TheLogicGuy Jan 14 '22 at 10:25

ProcessPoolExecutor is for CPU-bound tasks, so you can benefit from multiple CPUs.

Threads are for I/O-bound tasks, so you can benefit from I/O wait.

M.Rau
    I really like how succinct this answer is. This answer was insufficient to meet my needs alone (it didn't explain why, so I wouldn't just run with it), but it really helped me understand the approved answer by having no uncertain terms and limited jargon. – Slartibartfast Jun 03 '22 at 19:30
  • This is good as a rule of thumb but a bit simplified. If you have many short-lived CPU-bound tasks, it may actually be better with threads due to the overhead of creating and managing multiple processes, as the cost of creating a process can be significant compared to the amount of time spent actually executing the task. – hwaxxer Feb 20 '23 at 17:45