The following code blocks forever and does not let the program exit:
import multiprocessing

q = multiprocessing.Queue()
for i in range(10000):
    q.put("x" * 1000)
# Un-commenting the next line lets the program exit
# q.close()
print("trying to exit")
I've run it many times with Python 3.6 and 3.7, using the official Docker images under Docker for Mac, and each time I had to interrupt it with Ctrl+C.
The stack trace suggests a deadlock during the finalization of the queue; as far as I can tell from queues.py, a multiprocessing.Queue writes items to the underlying pipe from a background feeder thread, and that thread is joined at interpreter exit:
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/multiprocessing/util.py", line 265, in _run_finalizers
    finalizer()
  File "/usr/local/lib/python3.7/multiprocessing/util.py", line 189, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/multiprocessing/queues.py", line 192, in _finalize_join
    thread.join()
  File "/usr/local/lib/python3.7/threading.py", line 1044, in join
    self._wait_for_tstate_lock()
  File "/usr/local/lib/python3.7/threading.py", line 1060, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
KeyboardInterrupt
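If the hang really is the join of that feeder thread, as the trace suggests, the documented escape hatch is cancel_join_thread(), which tells the finalizer not to wait for the feeder, at the risk of losing buffered data. A minimal sketch of that variant:

import multiprocessing

q = multiprocessing.Queue()
for i in range(10000):
    q.put("x" * 1000)

# Tell the atexit finalizer not to join the feeder thread; the docs
# warn that data still in the buffer may be lost.
q.cancel_join_thread()
print("trying to exit")

Losing the queued data is acceptable in this toy example, but obviously not a general fix.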
Remarks:
- this is obviously a very small subset of the actual program in which I first noticed this behavior
- emptying the queue (calling q.get_nowait() until queue.Empty is raised) does not help
- putting fewer or smaller items in the queue lets the program exit
- adding a q.close() also lets the program exit
- same with a q = None, which lets the queue be garbage-collected (and thus .close()d); both of these variants are spelled out after this list
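To make the last two remarks concrete, the emptying attempt that still hangs looks like this (queue is the standard-library module whose Empty exception get_nowait() raises):

import multiprocessing
import queue

q = multiprocessing.Queue()
for i in range(10000):
    q.put("x" * 1000)

# Drain everything that is readable; the program still hangs on exit,
# presumably because the feeder thread has not flushed every item to
# the pipe by the time Empty is raised.
try:
    while True:
        q.get_nowait()
except queue.Empty:
    pass

print("trying to exit")

And this is the variant that drops the last reference, letting the queue be garbage-collected (and therefore closed), which does exit:

import multiprocessing

q = multiprocessing.Queue()
for i in range(10000):
    q.put("x" * 1000)

# Dropping the only reference lets the queue be garbage-collected,
# which runs its finalizer and closes it, so the program can exit.
q = None

print("trying to exit")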
Questions:
- am I doing something wrong?
- is there an implicit limit on the total size of the elements in a multiprocessing.Queue?
- multiprocessing.Queue.close is documented as "usually unnecessary for most code"; in what cases is it necessary? It seems very necessary in my case. Why?
- is this just a bug in multiprocessing.Queue?