Today I stumbled on some frustrating behavior of multiprocessing.Queue.
This is my code:
import multiprocessing

def make_queue(size):
    ret = multiprocessing.Queue()
    for i in range(size):
        ret.put(i)
    return ret

test_queue = make_queue(3575)
print(test_queue.qsize())
When I run this code, the process exits normally with exit code 0.
However, when I increase the queue size to 3576 or above, it hangs. When I send SIGINT via Ctrl-C, it raises the following error:
Exception ignored in atexit callback: <function _exit_function at 0x7f91104f9360>
Traceback (most recent call last):
  File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/multiprocessing/util.py", line 360, in _exit_function
    _run_finalizers()
  File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/multiprocessing/util.py", line 300, in _run_finalizers
    finalizer()
  File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/multiprocessing/util.py", line 224, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/multiprocessing/queues.py", line 199, in _finalize_join
    thread.join()
  File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/threading.py", line 1096, in join
    self._wait_for_tstate_lock()
  File "/home/captaintrojan/.conda/envs/example_env/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):
KeyboardInterrupt:
Can anyone please explain this behavior? I've experimented with different sizes: out of a sample of 40 or so, every size at or below 3575 works fine and every size above 3575 hangs the process. I suspect it has something to do with the queue's size in bytes, because if I insert i*i or random strings instead of i, the threshold changes. Note that, unless multiprocessing.Queue does something suspicious in the background, I don't create any additional processes besides the main process. Also, adding test_queue.close() has no impact on the outcome.
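For anyone who wants to poke at this, here is a rough sketch of the probes behind my byte-size theory. The 4-byte length prefix is my guess at the per-item framing multiprocessing's connection layer adds before writing to its pipe, and the thread listing assumes the queue quietly spawns a feeder thread on the first put():

import pickle
import threading
import multiprocessing

# Guess at the bytes each item occupies on the wire: its pickle plus
# an assumed 4-byte length prefix added by the connection layer.
def payload_bytes(size):
    return sum(len(pickle.dumps(i)) + 4 for i in range(size))

# Compare the last working size with the first hanging one.
print(payload_bytes(3575), payload_bytes(3576))

# Check for background machinery: if Queue.put() starts a hidden
# feeder thread, it should show up alongside MainThread here.
q = multiprocessing.Queue()
q.put(0)
print([t.name for t in threading.enumerate()])

If the queue really does run a feeder thread, that would at least explain where the "suspicious background" work happens, even if it doesn't pin down the exact byte threshold.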