I have a Python application that spawns subprocesses (mostly bash scripts) in parallel. Some of the scripts may call other scripts. I'm trying to work out the best way to handle termination edge cases for the application and the subprocesses.
If the application needs to quit, or receives a SIGTERM, then it should terminate (SIGTERM, wait, SIGKILL) all subprocesses and any processes they created. An approach for this would be to start the subprocesses in a new process group and kill that process group as part of termination (killpg).
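A minimal sketch of that escalation, assuming each script is launched as its own process-group leader via start_new_session=True (which calls setsid() in the child, so the group id equals the child's pid):

```python
import os
import signal
import subprocess

def spawn(cmd):
    # setsid() in the child makes it a session/process-group leader;
    # anything the script forks inherits that group by default.
    return subprocess.Popen(cmd, start_new_session=True)

def terminate_tree(proc, grace=5.0):
    """SIGTERM the whole group, wait up to `grace` seconds, then SIGKILL."""
    try:
        os.killpg(proc.pid, signal.SIGTERM)  # pgid == pid after setsid()
    except ProcessLookupError:
        return  # group already gone
    try:
        proc.wait(timeout=grace)
    except subprocess.TimeoutExpired:
        os.killpg(proc.pid, signal.SIGKILL)
        proc.wait()

proc = spawn(["sleep", "60"])
terminate_tree(proc, grace=1.0)
```

The same loop over every tracked Popen object can run from a SIGTERM handler or an atexit hook; grandchildren only escape this if they call setsid() themselves.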
If any of the subprocesses takes longer than a specified time, I would like to kill it and any child processes it created. An approach here is to make each subprocess a process group leader so that I can just killpg its group and rely on that to kill anything else it spawned.
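For the timeout case, a hedged sketch (again assuming each script runs as its own process-group leader) would wait with a deadline and then escalate on the group:

```python
import os
import signal
import subprocess

def run_with_timeout(cmd, timeout, grace=5.0):
    """Run `cmd` in its own process group; on timeout, kill the whole group."""
    proc = subprocess.Popen(cmd, start_new_session=True)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        os.killpg(proc.pid, signal.SIGTERM)  # pgid == pid after setsid()
        try:
            proc.wait(timeout=grace)
        except subprocess.TimeoutExpired:
            os.killpg(proc.pid, signal.SIGKILL)
        return proc.wait()

rc = run_with_timeout(["sleep", "60"], timeout=0.5)
```

This kills the timed-out script and everything it forked, but those children are then no longer in the application's process group, which is exactly the conflict with the first requirement.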
The hard bit is that these two approaches conflict with each other (a process can only belong to one process group at a time), so I seem to be able to satisfy only one requirement.
So, a final thought is to use tcsetpgrp, though I'm not overly familiar with it: something like simulating an interactive terminal. This would mean that killing the application sends a SIGHUP (I think) to all of the processes, and I could use process groups to manage killing subprocesses that take too long.
Is this a good idea, or are there any other suggestions I'm missing?
Bonus section:
If the application is killed via SIGKILL (it is needed occasionally in this application; yes, I know SIGKILL should be avoided, etc...), it would be amazing to have the subprocesses killed as well, in the same way that bash sends a SIGHUP to its processes when it exits.
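There is no portable way to be notified of a parent's SIGKILL, but on Linux a child can request a "parent-death signal" with prctl(PR_SET_PDEATHSIG, ...). A hedged, Linux-only sketch (the constant comes from sys/prctl.h; this only covers direct children, and the setting is per-process, so the scripts would have to arrange it for their own children too):

```python
import ctypes
import signal
import subprocess

PR_SET_PDEATHSIG = 1  # from <sys/prctl.h>, Linux only
libc = ctypes.CDLL("libc.so.6", use_errno=True)

def hup_on_parent_death():
    # Runs in the child between fork() and exec(): ask the kernel to
    # deliver SIGHUP to this process when its parent dies, even if the
    # parent was killed with SIGKILL.
    if libc.prctl(PR_SET_PDEATHSIG, signal.SIGHUP, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")

proc = subprocess.Popen(["sleep", "60"], preexec_fn=hup_on_parent_death)
```

Note that preexec_fn is documented as unsafe in the presence of threads, so a small wrapper script that calls prctl before exec-ing the real command is a common alternative.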