33

I'm spawning five different processes from a Python script, like this:

p = multiprocessing.Process(target=some_method,args=(arg,))
p.start()

My problem is that when the parent process (the main script) gets killed somehow, the child processes keep on running.

Is there a way to kill child processes spawned like this when the parent gets killed?

EDIT: I'm trying this:

p = multiprocessing.Process(target=client.start,args=(self.query_interval,))
p.start()
atexit.register(p.terminate)

But this doesn't seem to be working.

Saurabh Verma
    Duplicate? http://stackoverflow.com/questions/14128410/killing-child-process-when-parent-crashes-in-python – theAlse Aug 28 '14 at 06:40
  • I had gone through this post, it specifically talks about 'popen' and subprocess – Saurabh Verma Aug 28 '14 at 06:42
  • How does the parent process get killed? – Korem Aug 28 '14 at 07:14
  • Let's say we use `kill -9` to kill the parent process – Saurabh Verma Aug 28 '14 at 07:15
  • atexit runs only on normal termination. If you're sending kill use [signal](https://docs.python.org/2/library/signal.html), for example `signal.signal(signal.SIGTERM, func)` – Korem Aug 28 '14 at 07:17
  • You **cannot** do anything when the process is killed with `kill -9`. That's why you should use that only after trying to terminate the program normally. – Bakuriu Aug 28 '14 at 07:47
  • In that case, maybe have the child processes periodically check for the existence of the parent process? – Korem Aug 28 '14 at 08:49
  • Duplicate: http://stackoverflow.com/questions/1884941/killing-the-child-processes-with-the-parent-process (this one has a solution with `PR_SET_DEATHSIG`, although that's Linux-only solution), see also http://stackoverflow.com/questions/284325/how-to-make-child-process-die-after-parent-exits – drdaeman Sep 03 '14 at 10:50

4 Answers

30

I've encountered the same problem myself, and I've got the following solution:

Before calling `p.start()`, you may set `p.daemon = True`. Then, as mentioned in the python.org multiprocessing documentation:

When a process exits, it attempts to terminate all of its daemonic child processes.

flyingfoxlee
  • I want to mention that if the parent process was killed using SIGKILL (kill -9), then daemon processes won't stop. – Dmitry Apr 11 '18 at 19:17
  • How do you set this in multiprocessing.Pool? – Alex Dec 30 '20 at 07:10
  • @Dmitry, is there a way to kill daemon processes using SIGKILL or other signals? – Zac May 06 '21 at 11:06
  • @Zac I would recommend at least taking a look at systemd https://superuser.com/a/708282/893880, or the whole app can be packaged in a Docker container. – Dmitry May 06 '21 at 16:04
6

The child is not notified of the death of its parent, it only works the other way.

However, when a process dies, all its file descriptors are closed. And the other end of a pipe is notified about this, if it selects the pipe for reading.

So your parent can create a pipe before spawning the process (or in fact, you can just set up stdin to be a pipe), and the child can select that for reading. It will report ready for reading when the parent end is closed. This requires your child to run a main loop, or at least make regular calls to select. If you don't want that, you'll need some manager process to do it, but then when that one is killed, things break again.

Bas Wijnen
  • Seems it could be simpler to just set up the parent to send heartbeat messages to the child processes. No heartbeat and the child kills itself. – Azmisov Jun 07 '21 at 15:56
  • That's usually not a good idea. Heartbeats will wake both the parent and the child, so neither will ever be swapped out. It's also more fragile in that the child might kill itself when the parent is only stopped, for example because it's being debugged. And since the child probably has communication with the parent anyway, you probably want to be using select on it already. Checking for EOF is much easier than implementing heartbeat machinery. – Bas Wijnen Jun 09 '21 at 23:17
2

If you have access to the parent PID, you can use something like this:

import sys
import psutil


def kill_child_proc(ppid):
    # walk every process and kill those whose parent is ppid
    for process in psutil.process_iter():
        if process.ppid() == ppid:
            if sys.platform == 'win32':
                process.terminate()
            else:
                process.kill()  # sends SIGKILL, equivalent to kill -9

kill_child_proc(<parent_pid>)

0

In my case I was using a Queue object to communicate with the child processes. For whatever reason, the daemon flag suggested in the accepted answer did not work. Here's a minimal example illustrating how to get the children to die gracefully in this case.

The main idea is to pause child work execution every second or so and check if the parent process is still alive. If it is not alive, we close the Queue and exit.

Note that this also works if the main process is killed using SIGKILL (kill -9).

import ctypes, sys
import queue
import multiprocessing as mp

worker_queue = mp.Queue(maxsize=10)

# flag to communicate the parent's death to all children
alive = mp.Value(ctypes.c_bool, True, lock=False)

def worker():
    while True:
        # fake work
        data = 99.99
        # submit finished work to parent, while checking if parent has died
        queued = False
        while not queued:
            # do not block indefinitely, so we can check if the parent died
            try:
                worker_queue.put(data, block=True, timeout=1.0)
                queued = True
            except queue.Full:
                pass
            # check if the parent process is still alive (Python 3.8+)
            par_alive = mp.parent_process().is_alive()
            if not (par_alive and alive.value):
                # for some reason par_alive is only False for one of the
                # children; notify the others that the parent has died
                alive.value = False
                # the queue must be closed before sys.exit will work
                worker_queue.close()
                # for a more dramatic shutdown you could try killing the child
                # process; mp.current_process().kill() does not work, though
                # you could call os.kill directly with the child PID
                sys.exit(1)

# launch worker processes
for i in range(4):
    child = mp.Process(target=worker)
    child.start()
Azmisov
  • Another thing to note about this snippet is the child will not kill itself until after the work has been finished. You could put the parent-alive check in another function, and call that periodically during the processing code if you want the child to kill itself sooner. – Azmisov Jun 09 '21 at 05:55