Consider the following example:
    from multiprocessing import Queue, Pool

    def work(*args):
        print('work')
        return 0

    if __name__ == '__main__':
        queue = Queue()
        pool = Pool(1)
        result = pool.apply_async(work, args=(queue,))
        print(result.get())
This raises the following RuntimeError:
    Traceback (most recent call last):
      File "/tmp/test.py", line 11, in <module>
        print(result.get())
      [...]
    RuntimeError: Queue objects should only be shared between processes through inheritance
But interestingly, the exception is only raised when I try to get the result, not when the "sharing" happens. Commenting out that line silences the error, even though I actually did share the queue (and work is never executed!).
So here goes my question: why is this exception only raised when the result is requested, and not when the apply_async method is invoked, even though the error is apparently detected earlier (after all, the target function work is never called)?
It looks like the exception occurs in a different process and can only be made available to the main process when inter-process communication is performed, in the form of requesting the result. In that case, however, I'd like to know why such checks are not performed before dispatching to the other process.
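For what it's worth, the check itself doesn't appear to need the other process at all: pickling the queue by hand in the main process (my guess is that this is what fails internally) raises the very same RuntimeError immediately:

    import pickle
    from multiprocessing import Queue

    if __name__ == '__main__':
        queue = Queue()
        # Raises the same RuntimeError right here in the main process,
        # before any other process is involved.
        pickle.dumps(queue)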
(If I used the queue for communication in both work and the main process, this would (silently) introduce a deadlock, as sketched below.)
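Roughly like this (a hypothetical variant of the example above), where the main process blocks on queue.get() forever because work is never started and the dispatch error is never surfaced:

    from multiprocessing import Queue, Pool

    def work(queue):
        queue.put('work')  # never reached, the worker never starts

    if __name__ == '__main__':
        queue = Queue()
        pool = Pool(1)
        pool.apply_async(work, args=(queue,))
        # Hangs forever: the task silently failed to be dispatched,
        # so nothing is ever put on the queue.
        print(queue.get())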
Python version is 3.5.2.
I have read the following questions:
- Sharing many queues among processes in Python
- How do you pass a Queue reference to a function managed by pool.map_async()?
- Sharing a result queue among several processes
- Python multiprocessing: RuntimeError: “Queue objects should only be shared between processes through inheritance”
- Python sharing a lock between processes