
Consider a simple setup with a child process. Essentially, it is a producer (parent) / consumer (child) scenario:

import multiprocessing

class Job:
    def start_process(self):
        self.queue = multiprocessing.Queue(3)
        self.process = multiprocessing.Process(target=run,
                                               args=(self.queue,))
        self.process.start()

def run(queue):
    while True:
        item = queue.get()
        ...

If I do `kill -9` on the parent process, the child will hang forever. I was sure it would receive `SIGHUP`, as with `subprocess.Popen` - when the Python process quits, the popened one quits as well. Any idea how to fix child cleanup?

Yuki
  • If you provide a `daemon=True` argument when you call `multiprocessing.Process()`, the child processes will automatically be killed when the main process ends (assuming you're using Python 3.3+). – martineau Oct 31 '19 at 21:28
  • Have to use PyPy, and it is on 2.7 ((. – Yuki Oct 31 '19 at 21:29
  • Don't know how pypi affects things, but for earlier versions of Python you can manually set the property with `self.process.daemon = True` before the `start()` method is called. – martineau Oct 31 '19 at 21:39
  • @martineau Does not work - orphans are still hanging. – Yuki Oct 31 '19 at 21:41
  • @martineau it is not PyPI, it is PyPy I was talking about, and yes, it has nothing to do with the question. – Yuki Oct 31 '19 at 21:53
  • Sorry, that was simply a typo — but otherwise my comment applies. – martineau Oct 31 '19 at 21:54
  • For clarification of the actual question here: do you want to run a function in the child process when the parent process is killed with a `SIGKILL`? – Ente Oct 31 '19 at 21:57
  • I want to avoid orphan processes wherever they may come from; in the question, for the sake of a reproducible example, I use `kill -9` on the parent, but it could be anything: a segmentation fault, a human mistake with `kill`, etc. – Yuki Oct 31 '19 at 23:26
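For reference, the daemon suggestion from the comments looks like this. This is a minimal sketch; the `worker` function, `start_daemon_worker` helper, and queue size are illustrative placeholders, and since the `daemon=` constructor argument requires Python 3.3+, on 2.7/PyPy the attribute is set instead:

```python
import multiprocessing

def worker(queue):
    # Blocks forever waiting for work; as a daemon, this process is
    # terminated automatically when the parent exits *cleanly*.
    while True:
        item = queue.get()

def start_daemon_worker():
    queue = multiprocessing.Queue(3)
    process = multiprocessing.Process(target=worker, args=(queue,))
    # On Python 2.7 / PyPy2, set the attribute before start();
    # on Python 3.3+ you can pass daemon=True to the constructor.
    process.daemon = True
    process.start()
    return process, queue
```

As the comments note, this covers a clean parent exit but not `kill -9`: the daemon cleanup is performed by the parent's own shutdown machinery, which never runs under `SIGKILL`.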

1 Answer


If the `daemon` param doesn't work for you, you can catch a `SIGINT` signal and have it set a boolean flag that exits the `while` loop in your children, i.e.:

import signal

g_run_loops = True

def signal_handler(signum, frame):
    global g_run_loops
    g_run_loops = False

signal.signal(signal.SIGINT, signal_handler)

def run(queue):
    global g_run_loops
    while g_run_loops:
        item = queue.get()
        ...
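One caveat with the loop above: `queue.get()` blocks indefinitely, so the flag is only re-checked after an item arrives. A sketch of a variant that wakes up periodically to re-check it (the one-second timeout is an arbitrary choice):

```python
import multiprocessing
import queue as queue_module  # the `Queue` module on Python 2

g_run_loops = True

def run(q):
    global g_run_loops
    while g_run_loops:
        try:
            # Wake up at least once a second so the flag is
            # re-checked even when no work arrives.
            item = q.get(timeout=1.0)
        except queue_module.Empty:
            continue
        # ... process item here ...
```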

Note that this won't work for SIGKILL (kill -9) but should work for SIGINT (kill -2).
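For completeness: on Linux there is a kernel-level mechanism that does survive `kill -9` of the parent. The child can ask, via `prctl(PR_SET_PDEATHSIG, ...)`, to be sent a signal when its parent dies for any reason, including `SIGKILL`. A Linux-only sketch using `ctypes` (the constant value 1 comes from `<sys/prctl.h>`; `set_parent_death_signal` is an illustrative helper name):

```python
import ctypes
import signal

PR_SET_PDEATHSIG = 1  # from <sys/prctl.h>

def set_parent_death_signal(sig=signal.SIGTERM):
    # Ask the kernel to deliver `sig` to *this* process when its
    # parent dies -- including death by SIGKILL.
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.prctl(PR_SET_PDEATHSIG, sig, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")

def run(queue):
    set_parent_death_signal()  # first thing in the child
    while True:
        item = queue.get()
        # ... process item ...
```

Two caveats: the setting is cleared on `exec`, and the signal fires when the parent *thread* that created the child exits, which matters if children are spawned from a worker thread.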

bivouac0
  • I do not think there is any `SIGINT` during `kill -9`. – Yuki Oct 31 '19 at 21:42
  • [You can not handle a `SIGKILL`](https://stackoverflow.com/a/3908710/3215929) – Ente Oct 31 '19 at 21:44
  • Agreed. A SIGKILL can't be caught but can you issue a SIGINT (ctrl-c or kill -2) instead? – bivouac0 Oct 31 '19 at 21:48
  • I am looking for some generic solution to handle crashes of Python and its children. `kill -9` is just a reproducible example of it. – Yuki Oct 31 '19 at 21:54
  • If the `daemon` param and signal handling don't work, I think your only option is to find the processes with the `ps` command and kill them separately. A bash script could simplify this if it happens often. The `psutil` library also has some nice functions for finding processes. – bivouac0 Oct 31 '19 at 22:02
  • @bivouac0 Of course, but that is not a pleasant way. It is surprising that there is no way to make it work like it does for `popen`. – Yuki Oct 31 '19 at 22:07
  • Yeah. I run into this when I have a multiprocessing script that crashes. It tends to leave a bunch of zombies. I don't know of any way to get rid of them other than killing them manually but I'll watch this thread to see if anyone has a better suggestion. – bivouac0 Oct 31 '19 at 22:12
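Following up on the `ps`/`psutil` suggestion: a stdlib-only, Linux-only sketch that finds processes re-parented to PID 1 whose command line contains a substring. `find_orphans` and `name_fragment` are illustrative placeholders for however your workers are identifiable; killing the matches with `os.kill` would be a separate, deliberate step. Note this is a heuristic: legitimately daemonized processes are also parented to PID 1.

```python
import os

def find_orphans(name_fragment):
    # Scan /proc for processes whose parent is PID 1 (i.e. they were
    # re-parented to init after their real parent died) and whose
    # command line contains `name_fragment`.
    orphans = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/cmdline" % pid, "rb") as f:
                cmdline = f.read().replace(b"\0", b" ").decode(errors="replace")
            with open("/proc/%s/stat" % pid) as f:
                # /proc/<pid>/stat: "pid (comm) state ppid ...";
                # comm is parenthesised and may contain spaces, so
                # split after the *last* closing paren.
                ppid = int(f.read().rsplit(")", 1)[1].split()[1])
        except OSError:
            continue  # process exited while we were scanning
        if ppid == 1 and name_fragment in cmdline:
            orphans.append(int(pid))
    return orphans
```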