I have a worker process that goes like this:
from multiprocessing import Process
import logging

class worker(Process):
    def __init__(self, foo, bar):
        super().__init__()
        # init stuff
    def run(self):
        # do stuff
        logging.info("done")  # to confirm that the process is done running
And I start three instances of each of two worker classes like this:
processes = 3
aproc = [None for _ in range(processes)]
bproc = [None for _ in range(processes)]
for i in range(processes):
    aproc[i] = worker(foo, bar)
    bproc[i] = worker2(foo, bar)  # a different worker class
    aproc[i].start()
    bproc[i].start()
However, at the end of my code I .join() each of the processes, but the joins just hang and the script never ends:
for i in range(processes):
    aproc[i].join()
    bproc[i].join()
Hitting CTRL+C gives me this traceback:
Traceback (most recent call last):
  File "[REDACTED]", line 571, in <module>
    aproc[0].join()
  File "/usr/lib/python3.9/multiprocessing/process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "/usr/lib/python3.9/multiprocessing/popen_fork.py", line 43, in wait
    return self.poll(os.WNOHANG if timeout == 0.0 else 0)
  File "/usr/lib/python3.9/multiprocessing/popen_fork.py", line 27, in poll
    pid, sts = os.waitpid(self.pid, flag)
I've heard of the typical multiprocessing deadlock, but that shouldn't be the case here, since all of the processes print the logging statement confirming they are done running. Why is .join() still waiting on them? Any ideas? Thank you!
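For reference, a non-blocking probe like the sketch below (the timeout value is a placeholder; this is not my actual code) is how I'd double-check that the children really exited, instead of blocking forever on join():

# Sketch: probe each child instead of blocking indefinitely on join().
for p in aproc + bproc:
    p.join(timeout=5)  # returns after 5 seconds even if p is still running
    print(p.name, "is_alive:", p.is_alive(), "exitcode:", p.exitcode)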
Edit: Unfortunately, I haven't been able to get a minimal reproducible example working to share. Also, the processes do communicate with each other through multiprocessing.Queue()s, if that is relevant; a rough sketch of the pattern is below.
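The communication pattern is roughly like this (illustrative only; QueueWorker and the variable names are placeholders, not my real classes):

import logging
import multiprocessing as mp

class QueueWorker(mp.Process):
    """Stand-in for my real worker classes."""
    def __init__(self, q):
        super().__init__()
        self.q = q

    def run(self):
        self.q.put("some result")  # handed off to the Queue's feeder thread
        logging.info("done")       # may print before the feeder thread has flushed

if __name__ == "__main__":
    q = mp.Queue()      # shared between the two worker types
    w = QueueWorker(q)
    w.start()
    item = q.get()      # the other workers consume from the queue
    w.join()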
Edit 2: Traceback of another test; this one was interrupted while the shutdown finalizers were joining a Queue's feeder thread:
Traceback (most recent call last):
  File "/usr/lib/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
    finalizer()
  File "/usr/lib/python3.9/multiprocessing/util.py", line 224, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/usr/lib/python3.9/multiprocessing/queues.py", line 201, in _finalize_join
    thread.join()
  File "/usr/lib/python3.9/threading.py", line 1033, in join
    self._wait_for_tstate_lock()
  File "/usr/lib/python3.9/threading.py", line 1049, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):