319

Let's assume we have a trivial daemon written in Python:

import time

def mainloop():
    while True:
        # 1. do
        # 2. some
        # 3. important
        # 4. job
        time.sleep(1)  # 5. sleep

mainloop()

and we daemonize it using start-stop-daemon, which by default sends the SIGTERM (TERM) signal on --stop.

Let's suppose the step currently being performed is #2, and at this very moment we send the TERM signal.

What happens is that the execution terminates immediately.

I've found that I can handle the signal event using signal.signal(signal.SIGTERM, handler), but the thing is that it still interrupts the current execution and passes control to handler.

So, my question is: is it possible to avoid interrupting the current execution, and instead handle the TERM signal in a separate thread (?) so that I can set shutdown_flag = True and mainloop() has a chance to stop gracefully?

zerkms
  • 2
    I did what you are asking for before by using `signalfd` and masking out the delivery of the `SIGTERM` to the process. – Eric Urban Aug 28 '13 at 22:45
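
For reference, signalfd itself is not exposed by the Python standard library, but a related "mask and poll" approach is available there via signal.pthread_sigmask and signal.sigtimedwait (Python 3.3+, Linux/Unix only). A rough sketch of that idea, not a definitive implementation:

import signal
import time

# Block SIGTERM so it is queued instead of interrupting the work mid-step
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGTERM})

while True:
    # ... do the important job steps ...
    time.sleep(1)
    # Poll for a pending SIGTERM between iterations (timeout 0 = non-blocking)
    if signal.sigtimedwait({signal.SIGTERM}, 0) is not None:
        break  # shut down gracefully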

10 Answers

477

A class-based, clean-to-use solution:

import signal
import time

class GracefulKiller:
    kill_now = False

    def __init__(self):
        # Install the same handler for Ctrl+C (SIGINT) and a polite kill (SIGTERM)
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)

    def exit_gracefully(self, *args):
        # Only set a flag; the main loop decides when it is safe to stop
        self.kill_now = True

if __name__ == '__main__':
    killer = GracefulKiller()
    while not killer.kill_now:
        time.sleep(1)
        print("doing something in a loop ...")

    print("End of the program. I was killed gracefully :)")
Parth Pandey
Mayank Jaiswal
  • 1
    Thanks for the idea! I used a modified approach in reboot-guard. https://github.com/ryran/reboot-guard/blob/master/rguard#L284:L304 – rsaw Sep 06 '15 at 15:28
  • 14
    This is the best answer (no threads required), and should be the preferred first-try approach. – jose.angel.jimenez Oct 12 '15 at 16:56
  • @mayank-jaiswal : This is brilliant! Thanks! Just to be sure: shouldn't it be `class GracefulKiller()` ? – Mausy5043 Dec 23 '15 at 17:46
  • 3
    @Mausy5043 Python allows you to not have parenthesis for defining classes. Although it's perfectly fine for python 3.x, but for python 2.x, best practice is to use "class XYZ(object):". Reason being: https://docs.python.org/2/reference/datamodel.html#newstyle – Mayank Jaiswal Dec 24 '15 at 10:27
  • @MayankJaiswal good answer. If I want to use this class to catch USR1 signal for log rotation, how can I return the file object to the outter scope? –  Aug 31 '16 at 22:26
  • 5
    Follow up, to keep you motivated, thank you. I use this all the time. – chrisfauerbach Dec 14 '16 at 15:01
  • The `import sys` is harmless, but doesn't do anything useful here. (Chances are other parts of your code will use `sys` anyway, but not this code.) – tripleee Dec 20 '16 at 11:24
  • deleted that `import sys` – Mayank Jaiswal Dec 20 '16 at 15:05
  • 1
    Would it be atomic? It looks like multiple threads are at risk of reading and writing the same memory at the same time. – igonejack Feb 13 '18 at 14:16
  • 2
    In the worst case, that would simply mean doing another iteration before shutting down gracefully. The `False` value is set only once, and then it can only go from False to True, so multiple access is not an issue. – Alceste_ Jul 25 '18 at 15:35
  • If the sleep() is long it will be a long wait before it breaks out of the loop. Check out https://stackoverflow.com/a/46346184/418819 for a solution that will exit immediately. – Steve Sep 24 '19 at 19:55
  • I think it needs to be `GracefulKiller.kill_now = True` and not `self.kill_now = True` – M Y Jan 12 '21 at 10:45
  • 1
    Not too sure if I am doing it right but following this code and pressing `CTRL` + `C` gives `TypeError: exit_gracefully() takes 1 positional argument but 3 were given`. I solved it by changing `def exit_gracefully(self)` to `def exit_gracefully(self, sig, frame)` per https://stackoverflow.com/a/1112350/5305519 – user5305519 Jun 01 '21 at 06:10
  • I think there is a risk of changing signal handlers from default to another handler. isn't it necessary to set back the previous default handlers for those signals? – Zar Ashouri Dec 19 '21 at 09:05
  • NOT WORKING ON WINDOWS ! – Sion C May 17 '23 at 18:55
87

First, I'm not certain that you need a second thread to set the shutdown_flag.
Why not set it directly in the SIGTERM handler?

An alternative is to raise an exception from the SIGTERM handler, which will be propagated up the stack. Assuming you've got proper exception handling (e.g. using with/contextmanager and try: ... finally: blocks), this should be a fairly graceful shutdown, similar to pressing Ctrl+C in your program.

Example program signals-test.py:

#!/usr/bin/env python3

from time import sleep
import signal
import sys


def sigterm_handler(_signo, _stack_frame):
    # Raises SystemExit(0):
    sys.exit(0)

if sys.argv[1] == "handle_signal":
    signal.signal(signal.SIGTERM, sigterm_handler)

try:
    print("Hello")
    i = 0
    while True:
        i += 1
        print("Iteration #%i" % i)
        sleep(1)
finally:
    print("Goodbye")

Now see the Ctrl+C behaviour:

$ ./signals-test.py default
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
^CGoodbye
Traceback (most recent call last):
  File "./signals-test.py", line 21, in <module>
    sleep(1)
KeyboardInterrupt
$ echo $?
1

This time I send it SIGTERM after 4 iterations with kill $(ps aux | grep signals-test | awk '/python/ {print $2}'):

$ ./signals-test.py default
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
Terminated
$ echo $?
143

This time I enable my custom SIGTERM handler and send it SIGTERM:

$ ./signals-test.py handle_signal
Hello
Iteration #1
Iteration #2
Iteration #3
Iteration #4
Goodbye
$ echo $?
0
SherylHohman
Will Manley
  • 4
    "Why not set it directly in the SIGTERM handler" --- because the worker thread would interrupt on a random place. If you put multiple statements into your worker loop you will see that your solution terminates a worker on a random position, which leaves the job in an unknown state. – zerkms Jul 04 '14 at 22:10
  • Works well for me, also in a Docker context. Thanks! – Marian May 15 '15 at 07:42
  • 5
    If you just set a flag and not raise exception then it will be the same as with thread. So using thread is superfluous here. – Suor May 26 '15 at 04:31
51

Here is a simple example without threads or classes.

import signal

run = True

def handler_stop_signals(signum, frame):
    global run
    run = False

signal.signal(signal.SIGINT, handler_stop_signals)
signal.signal(signal.SIGTERM, handler_stop_signals)

while run:
    pass # do stuff including other IO stuff
thoughtarray
31

I think you are close to a possible solution.

Execute mainloop in a separate thread and extend it with the property shutdown_flag. The signal can be caught with signal.signal(signal.SIGTERM, handler) in the main thread (not in a separate thread). The signal handler should set shutdown_flag to True and wait for the thread to end with thread.join().
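
A minimal sketch of that approach (the names mainloop and shutdown_flag follow the question; using a threading.Event as the flag and joining from the handler are assumptions, not code from this answer):

import signal
import threading
import time

shutdown_flag = threading.Event()

def mainloop():
    while not shutdown_flag.is_set():
        # do the important job steps here
        time.sleep(1)

worker = threading.Thread(target=mainloop)

def handler(signum, frame):
    # Runs in the main thread; the worker finishes its current iteration first
    shutdown_flag.set()
    worker.join()

signal.signal(signal.SIGTERM, handler)
worker.start()
worker.join()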

mmr
moliware
  • 4
    Yep, a separated thread is how I've finally solved it, thanks – zerkms Aug 29 '13 at 00:19
  • 7
    Threads are not required here. In a single-threaded program itself, you can first register a signal handler (registering a signal handler is non-blocking) and then write mainloop. The signal handler function should set a flag, and the loop should check for this flag. I have pasted a class-based solution for the same [here](http://stackoverflow.com/a/31464349/578989). – Mayank Jaiswal Jul 16 '15 at 21:01
  • 2
    No way that having a second thread is necessary. Register signal handler. –  Aug 31 '16 at 22:11
  • helpful page: https://www.g-loaded.eu/2016/11/24/how-to-terminate-running-python-threads-using-signals/ – Kamil Sindi May 25 '17 at 19:07
18

Based on the previous answers, I have created a context manager which protects a block of code from SIGINT and SIGTERM.

import logging
import signal
import sys


class TerminateProtected:
    """ Protect a piece of code from being killed by SIGINT or SIGTERM.
    It can still be killed by a force kill.

    Example:
        with TerminateProtected():
            run_func_1()
            run_func_2()

    Both functions will be executed even if a SIGINT or SIGTERM has been received.
    """
    killed = False

    def _handler(self, signum, frame):
        logging.error("Received SIGINT or SIGTERM! Finishing this block, then exiting.")
        self.killed = True

    def __enter__(self):
        self.old_sigint = signal.signal(signal.SIGINT, self._handler)
        self.old_sigterm = signal.signal(signal.SIGTERM, self._handler)

    def __exit__(self, type, value, traceback):
        if self.killed:
            sys.exit(0)
        signal.signal(signal.SIGINT, self.old_sigint)
        signal.signal(signal.SIGTERM, self.old_sigterm)


if __name__ == '__main__':
    print("Try pressing ctrl+c while the sleep is running!")
    from time import sleep
    with TerminateProtected():
        sleep(10)
        print("Finished anyway!")
    print("This only prints if there was no sigint or sigterm")
Okke
  • It seems it won't support nested `with TerminateProtected():` statements. – Vyacheslav Napadovsky Feb 17 '21 at 18:47
  • @VyacheslavNapadovsky, it retains the previous handlers on entry and restores them on (context) exit, so it seems okay to me? – Alex Peters Apr 05 '21 at 07:55
  • @AlexPeters, yes, but you don't call the original handler when event occurs, thus any handlers will get ignored. And with addition to that that we have exception handling mechanism that may catch and ignore exception thrown by sys.exit(0), nested `with` statements with exception handlers in-between will lead to ignoring the termination request. (And make you hit ctrl+C) several times at least. – Vyacheslav Napadovsky Apr 06 '21 at 14:37
  • could something like this work with multiprocessing? – Ximi Jun 27 '22 at 14:16
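
Regarding the handler-chaining concern raised in the comments: a rough sketch (my assumption, not part of the original answer) of a variant whose _handler forwards the signal to the previously installed handler, so nested blocks or other custom handlers still get notified:

import logging
import signal

class ChainingTerminateProtected(TerminateProtected):
    """Like TerminateProtected, but forwards the signal to the previous handler."""

    def _handler(self, signum, frame):
        logging.error("Received SIGINT or SIGTERM! Finishing this block, then exiting.")
        self.killed = True
        previous = self.old_sigint if signum == signal.SIGINT else self.old_sigterm
        if callable(previous):
            # SIG_DFL / SIG_IGN are not callable, so only user handlers are chained
            previous(signum, frame)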
6

The easiest way I found. Here is an example with fork, to show that this approach is useful for flow control.

import signal
import time
import os

def handle_exit(sig, frame):
    raise SystemExit

def main():
    # placeholder for the real work
    time.sleep(120)

signal.signal(signal.SIGTERM, handle_exit)

p = os.fork()
if p == 0:
    # child: run the actual job, then exit
    main()
    os._exit(0)

# parent: wait for the child and forward termination to it
try:
    os.waitpid(p, 0)
except (KeyboardInterrupt, SystemExit):
    print('exit handled')
    os.kill(p, signal.SIGTERM)
    os.waitpid(p, 0)
Flair
Kron
1

The simplest solution I have found, taking inspiration from the responses above, is:

import logging
import signal
import traceback


class SignalHandler:

    def __init__(self):

        # register signal handlers
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)

        self.logger = logging.getLogger(__name__)

    def exit_gracefully(self, signum, frame):
        self.logger.info('captured signal %d' % signum)
        traceback.print_stack(frame)

        ###### do your resource cleanup here! ######

        raise SystemExit
loretoparisi
  • 1
    You generally can not do resource cleanup in a signal handler, since it can't know what the program was doing when receiving the signal. Sure it have the stacktrace, but that is almost never enough to do something useful. – Mattias Wallin Sep 06 '21 at 06:23
  • In my case the cleanup is related to running program code, so I exactly know which resources to safely close (like db connections, pending sockets and IO, etc.) while regarding external processes, just in my case, I have piped programs (opened to input on stdin) that I can even close at signal handler because I have a reference to each of them. But of course this will only work with this approach or similar. – loretoparisi Sep 06 '21 at 06:42
  • How does `exit_gracefully()` know which DB connections are safe to close? How does it wait until it is safe? – Mattias Wallin Sep 06 '21 at 10:25
  • Yes, this is application-logic specific; if you have good object wrappers you can do it, but of course it is application specific, and - my two cents - there is no general solution! Thanks for your comments. – loretoparisi Sep 06 '21 at 13:43
1

Similar to thoughtarray's answer but using asyncio:

import asyncio
import signal
import sys
from types import FrameType

loop = asyncio.get_event_loop()

def handle_signal(signum: int, frame: FrameType):
    loop.stop()
    # ...
    # additional steps to gracefully handle sigterm
    # ...
    sys.exit(signum)

signal.signal(signal.SIGINT, handle_signal)
signal.signal(signal.SIGTERM, handle_signal)
loop.run_forever()

This assumes that tasks are scheduled on the event loop acquired by get_event_loop, and it will stop the loop on SIGTERM.
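
As a side note (not part of the original answer), asyncio also provides loop.add_signal_handler on Unix; a minimal sketch of a graceful-shutdown variant using it:

import asyncio
import signal

async def main():
    loop = asyncio.get_running_loop()
    stop = asyncio.Event()
    # asyncio-native handlers (Unix only); the callbacks run inside the event loop
    loop.add_signal_handler(signal.SIGTERM, stop.set)
    loop.add_signal_handler(signal.SIGINT, stop.set)
    while not stop.is_set():
        # do one unit of work per iteration
        await asyncio.sleep(1)
    # falling out of the loop here is the graceful shutdown path

asyncio.run(main())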

Emptyless
0

A sample of my code showing how I use signal:

#! /usr/bin/env python

import signal


def ctrl_handler(signum, frm):
    print("You can't kill me")


print("Installing signal handler...")
signal.signal(signal.SIGINT, ctrl_handler)
print("done")

while True:
    # do something
    pass
BenT
Parth
  • The question was how to gracefully quit on SIGTERM. Your example shows how to ignore SIGINT. You can change it to ignore SIGTERM, but it is a bad idea for a daemon to ignore it, because it will then be killed with SIGKILL after a timeout instead, and that can't be ignored or handled gracefully at all. – Mattias Wallin Sep 06 '21 at 06:04
0

You can set a threading.Event when catching the signal.

threading.Event is threadsafe to use and pass around, can be waited on, and the same event can be set and cleared from other places.

import signal, threading

quit_event = threading.Event()
signal.signal(signal.SIGTERM, lambda *_args: quit_event.set())

while not quit_event.is_set():
    print("Working...")
    quit_event.wait(1)  # sleep for a bit, but wake immediately when the event is set
Mattias Wallin