
What's the difference between ThreadPool and Pool in the multiprocessing module? When I try my code out, this is the main difference I see:

from multiprocessing import Pool
import os, time

print("hi outside of main()")

def hello(x):
    print("inside hello()")
    print("Proccess id: ", os.getpid())
    time.sleep(3)
    return x*x

if __name__ == "__main__":
    p = Pool(5)
    pool_output = p.map(hello, range(3))

    print(pool_output)

I see the following output:

hi outside of main()
hi outside of main()
hi outside of main()
hi outside of main()
hi outside of main()
hi outside of main()
inside hello()
Process id:  13268
inside hello()
Process id:  11104
inside hello()
Process id:  13064
[0, 1, 4]

With "ThreadPool":

from multiprocessing.pool import ThreadPool
import os, time

print("hi outside of main()")

def hello(x):
    print("inside hello()")
    print("Proccess id: ", os.getpid())
    time.sleep(3)
    return x*x

if __name__ == "__main__":
    p = ThreadPool(5)
    pool_output = p.map(hello, range(3))

    print(pool_output)

I see the following output:

hi outside of main()
inside hello()
inside hello()
Process id:  15204
Process id:  15204
inside hello()
Process id:  15204
[0, 1, 4]

My questions are:

  • why is “hi outside of main()” printed multiple times with Pool?

  • multiprocessing.pool.ThreadPool doesn't spawn new processes? It just creates new threads?

  • If so, what's the difference between using multiprocessing.pool.ThreadPool as opposed to just the threading module?

I don't see any official documentation for ThreadPool anywhere; can someone point me to where I can find it?

  • As far as I know, because of the GIL in Python, multithreading in Python looks like real multithreading, but it isn't. If you want to take advantage of your multiple cores with Python, you need to use multiprocessing. On a modern computer, creating a process and creating a thread have almost the same cost. – Yves Sep 05 '17 at 03:53
  • Creating a thread may have similar cost to creating a process, but communicating between threads has very different cost to communicating between processes (unless perhaps you used shared memory). Also, your comment about the GIL is only partly true: it is released during I/O operations and by some libraries (e.g. numpy) even during CPU-bound operations. Still, the GIL is ultimately the reason for using separate processes in Python. – Arthur Tacca Sep 05 '17 at 07:31
  • @Yves That may be true on *nix, through the use of `fork`, but it's not true on Windows and fails to take into account the additional overhead, limitations and complexity of communicating between processes as opposed to threads (on all platforms). – Basic Apr 16 '18 at 09:59
  • To answer the question on `threading` versus `ThreadPool`: `threading` has no easy, direct way to get the return value(s) of the worker functions, whereas with `ThreadPool` you can easily get the return value(s). – daparic Jul 11 '18 at 19:15

2 Answers


The multiprocessing.pool.ThreadPool behaves the same as the multiprocessing.Pool; the only difference is that it uses threads instead of processes to run the workers' logic.

The reason you see

hi outside of main()

being printed multiple times with the multiprocessing.Pool is that the pool spawns 5 independent processes. Each process initializes its own Python interpreter and imports the module, resulting in the top-level print being executed again.

Note that this happens only if the spawn process creation method is used (the only method available on Windows). If you use the fork one (Unix), you will see the message printed only once, just as with the threads.
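
A minimal sketch of selecting the start method explicitly via multiprocessing.get_context (assuming a Unix platform, since "fork" is not available on Windows):

from multiprocessing import get_context
import os

print("hi outside of main()")

def hello(x):
    print("inside hello()")
    print("Process id: ", os.getpid())
    return x * x

if __name__ == "__main__":
    # Ask for the "fork" start method explicitly (Unix only). Forked workers
    # inherit the parent's memory, so the module is not re-imported and the
    # top-level print above runs only once.
    ctx = get_context("fork")
    with ctx.Pool(5) as p:
        print(p.map(hello, range(3)))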

The multiprocessing.pool.ThreadPool is not documented because its implementation was never completed: it lacks tests and documentation. You can see its implementation in the source code.
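
A quick way to confirm that relationship from the interpreter (just a small check, not from the answer):

from multiprocessing.pool import Pool, ThreadPool

# ThreadPool lives in multiprocessing/pool.py and is a small subclass of Pool
# that swaps processes for threads.
print(issubclass(ThreadPool, Pool))  # True
print(ThreadPool.__module__)         # multiprocessing.pool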

I believe the next natural question is: when should you use a thread-based pool and when a process-based one?

The rule of thumb is:

  • IO-bound jobs -> multiprocessing.pool.ThreadPool
  • CPU-bound jobs -> multiprocessing.Pool
  • Hybrid jobs -> depends on the workload; I usually prefer the multiprocessing.Pool due to the advantage process isolation brings (see the sketch below)
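
A rough sketch of that rule of thumb (io_bound and cpu_bound below are placeholder workloads, not taken from the question):

from multiprocessing import Pool
from multiprocessing.pool import ThreadPool
import time

def io_bound(name):
    # Stand-in for a blocking network or disk call; the GIL is released while
    # sleeping, so many threads can wait at the same time.
    time.sleep(1)
    return name

def cpu_bound(n):
    # Pure-Python number crunching holds the GIL, so separate processes are
    # needed to use more than one core.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ThreadPool(4) as tp:
        print(tp.map(io_bound, ["a", "b", "c", "d"]))  # takes ~1s, not ~4s

    with Pool(4) as pp:
        print(pp.map(cpu_bound, [10**6] * 4))  # spread across multiple cores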

On Python 3 you might want to take a look at the concurrent.futures.Executor pool implementations.
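
For instance, a minimal sketch of the concurrent.futures equivalents of the two pools:

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def hello(x):
    return x * x

if __name__ == "__main__":
    # Thread-based pool, roughly the ThreadPool counterpart
    with ThreadPoolExecutor(max_workers=5) as executor:
        print(list(executor.map(hello, range(3))))

    # Process-based pool, roughly the multiprocessing.Pool counterpart
    with ProcessPoolExecutor(max_workers=5) as executor:
        print(list(executor.map(hello, range(3))))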

noxdafox
  • Thanks for the answer. I just want to understand this statement: "Note that this happens only if the spawn process creation method is used (the only method available on Windows). If you use the fork one (Unix), you will see the message printed only once as for the threads." I'm assuming the "spawn" and "fork" are implicit when I call "map()" or "Pool()"? Or is this something I can control? – ozn Sep 05 '17 at 22:05
  • The explanation is in the link I gave you above when mentioning the [spawn](https://docs.python.org/3.6/library/multiprocessing.html#contexts-and-start-methods) start method. You can control it, but the start methods' availability depends on the OS platform. I assume you are using Windows, as the default start strategy there is the `spawn` one. If so, there's little to do, as Windows only supports `spawn`. – noxdafox Sep 06 '17 at 06:59
  • Is the comment about the unfinished implementation of `ThreadPool` still valid in 2019 with Python 3.7? – Cedric H. Jan 07 '19 at 10:31
  • Yes it is. As you can see from the linked source and the lack of documentation. – noxdafox Jan 07 '19 at 17:01
  • Because the CPU is not the bottleneck, threads can be scheduled and execute during the time a thread would otherwise have been sitting idle waiting for IO. – MrR Apr 02 '19 at 17:29
  • @MrR, which is absolutely reasonable and true, but that does not actually address **why** IO bound jobs _should_ prefer ThreadPool over a Pool (process); although, I would imagine that is answerable simply by common sense regarding the time it takes to fork off an entire subprocess as well as the additional overhead caused by not being able to share the same resources. – Spencer D Oct 27 '19 at 02:23
  • If you can choose between threads and processes, given the same benefits, you should always go with threads as, yes, there's less overhead. – MrR Oct 29 '19 at 10:45
  • Another reason to use Process as opposed to Thread is if the libraries involved in mp are NOT thread-safe. One such notable library is Pandas. If you want to use Pandas to execute several big data queries concurrently, then Process may be the safest way to go, since Processes do not share thread state with one another. – Walter Kelt Mar 18 '22 at 11:41
  • If you use threads, you can create 60-something threads on Windows. What would happen if you started 60 processes with only 4 cores? For my IO-bound project, ThreadPool worked well while Pool failed. – Steve Scott Jan 06 '23 at 22:47
  • If you need to share data between two tasks, use threads. That's the only criterion. – rjhcnf Mar 03 '23 at 06:50

Concerning the applicability, the current docs (3.10 & 3.11) address it pretty well. TL;DR: don't use multiprocessing ThreadPool.

Note A ThreadPool shares the same interface as Pool, which is designed around a pool of processes and predates the introduction of the concurrent.futures module. As such, it inherits some operations that don’t make sense for a pool backed by threads, and it has its own type for representing the status of asynchronous jobs, AsyncResult, that is not understood by any other libraries. Users should generally prefer to use concurrent.futures.ThreadPoolExecutor, which has a simpler interface that was designed around threads from the start, and which returns concurrent.futures.Future instances that are compatible with many other libraries, including asyncio.
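
A minimal sketch of what the docs recommend instead (fetch is just a placeholder task):

from concurrent.futures import ThreadPoolExecutor

def fetch(name):
    # Placeholder for an IO-bound task such as an HTTP request
    return name.upper()

with ThreadPoolExecutor(max_workers=5) as executor:
    future = executor.submit(fetch, "hello")  # a concurrent.futures.Future
    print(future.result())                    # blocks until the result is ready

    # map() is also available and preserves the input order
    print(list(executor.map(fetch, ["a", "b", "c"])))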

Sam