6

With this image (the Caddy webserver):

docker run --rm -p 80:80 caddy:latest 

I can stop it by pressing CTRL+C in the terminal.

Some other images won't work this way, for example MariaDB:

docker run -it --rm -p 3306:3306 -e MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=true mariadb:latest

I'm not able to stop it with CTRL+C.

I've noticed that in the Caddy Dockerfile there is no ENTRYPOINT, only a CMD, whereas the MariaDB Dockerfile has both an ENTRYPOINT and a CMD.

Why? Is there any reason NOT to support the handling of a kill signal? And how can one support SIGTERM in a Docker entrypoint, if that is the reason?

gremo
  • Is this relevant? https://stackoverflow.com/questions/20602675/trapping-signal-from-docker-stop-in-bash – doneforaiur Jun 14 '23 at 16:44
  • @doneforaiur It could help, thanks, but I'm a bit curious about how the official images solve the problem... and why it happens, technically – gremo Jun 14 '23 at 16:46
  • 1
    So am I! I was curious about this too, I'll stick around. :^) – doneforaiur Jun 14 '23 at 16:47
  • It may also have to do with how the process running in the container handles the `SIGINT` signal, i.e. the container only stops once the foreground process in the container stops. If this process handles SIGINT without exiting, the container will also continue to run. – derpirscher Jun 14 '23 at 16:55
  • SIGINT is not SIGKILL. Only a KILL signal is a kill signal. SIGINT and SIGTERM are both able to be handled: software is allowed to clean up gracefully before exiting. Sometimes _badly-behaved_ software doesn't handle such signals at all, but when that's the case it's reasonable to call it a bug in the specific tool. Throwing out the ability to do graceful shutdown (make sure your logs are all written, your in-memory data is flushed to disk, &c) because some programs abuse the privilege to define their own signal handlers would be tossing the baby with the bathwater. – Charles Duffy Jun 14 '23 at 17:46
  • BTW, cue [normal grumbling about "why" questions as opposed to practical ones](https://meta.stackexchange.com/questions/170394/what-is-the-rationale-for-closing-why-questions-on-a-language-design) (where "practical" is strictly defined as questions whose answer is expected to change _how you go about the practice of writing software_); "rants in disguise" have been explicitly off-topic since the site's initial founding. – Charles Duffy Jun 14 '23 at 17:49
  • (It's legitimate for things that get a SIGINT or a SIGTERM to sometimes need to wait a long while: occasionally you have serious I/O contention that can mean blocking for a minute or two before writes can be flushed to disk; that's no excuse for a database to corrupt itself or throw away diagnostics explaining why it's exiting, which is why tools automatically escalating to SIGKILL when a TERM isn't handled in a second or two is such an extremely bad idea) – Charles Duffy Jun 14 '23 at 17:51

2 Answers

13

but I'm a bit curious about how the official images solve the problem...

Don't forget that Docker 1.13 comes with tini, originally from krallin/tini.

Any image run with docker run --init gets an init process inside the container that forwards signals and reaps child processes.
As mentioned here, tini works transparently: Dockerfiles don't need to be modified in any way.
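
For example (a minimal sketch, assuming a local Docker daemon and the alpine image), you can see the extra init process that --init adds:

    # without --init, the command itself runs as PID 1
    docker run --rm alpine ps -o pid,comm

    # with --init, docker-init (tini) is PID 1 and the command runs as its child
    docker run --rm --init alpine ps -o pid,comm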

On Unix systems, when you press CTRL+C in a terminal, a SIGINT (interrupt signal) is sent to the foreground process group, which here is the docker run client; Docker then proxies that signal to the container's main process. CTRL+\ sends SIGQUIT, and CTRL+Z sends SIGTSTP (terminal stop).
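
You can observe this outside of Docker with a throwaway shell one-liner (a sketch):

    # press CTRL+C while this runs; the trap confirms a SIGINT was delivered
    sh -c 'trap "echo caught SIGINT" INT; echo "press CTRL+C"; sleep 60'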

Docker containers run a single main process, and this process runs as PID 1 inside the container.
PID 1 is special on Linux: it is the first process that runs and is the ancestor of all other processes.
It also has a special relationship with Unix signals: the kernel does not apply the default signal actions to PID 1, so a signal like SIGINT or SIGTERM has no effect on it unless the process has explicitly installed a handler for that signal. Ordinary processes, by contrast, are terminated by default when they receive these signals, though they too can install handlers (or ignore the signals) to control what happens.
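
A quick way to see that PID 1 behaviour (a sketch; sig-demo is just a placeholder name): sleep installs no signal handlers, so as PID 1 it shrugs off SIGTERM, while the same container started with --init stops as expected.

    docker run -d --rm --name sig-demo alpine sleep 300
    docker kill --signal=SIGTERM sig-demo   # container keeps running
    docker rm -f sig-demo                   # SIGKILL still terminates it

    # with --init, tini runs as PID 1 and forwards SIGTERM to sleep
    docker run -d --rm --init --name sig-demo alpine sleep 300
    docker kill --signal=SIGTERM sig-demo   # container exits promptly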

To be clear, Docker only wraps the container's main process with such a small init system (tini) when you pass --init (I mentioned it originally here); using CMD rather than ENTRYPOINT does not add one by itself. In both cases, whatever command the image resolves to runs directly as PID 1.

Your caddy image handles CTRL+C because the caddy binary installs its own SIGINT handler and shuts down gracefully when the signal arrives.

Your mariadb image does not: its entrypoint script ends up exec'ing mariadbd as PID 1, and mariadbd ignores SIGINT by default (see the other answer for why), so pressing CTRL+C appears to do nothing.
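
You can confirm what is actually running as PID 1 in each case (a sketch; the container names are placeholders):

    docker exec some-caddy cat /proc/1/comm     # should print: caddy
    docker exec some-mariadb cat /proc/1/comm   # should print: mariadbd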

The handling of a kill signal is an important part of a graceful shutdown. When a process receives a SIGTERM (or SIGINT), it should stop accepting new work, finish its current work, and then exit. This allows it to clean up any resources it is using and ensure data consistency.

If the process does not handle these signals and is simply killed (with SIGKILL), it does not have a chance to clean up and may leave resources in an inconsistent state. This could be harmful in the case of a database like MariaDB, which could have uncommitted transactions or partially written data.

To support the handling of a kill signal in a Docker entrypoint, you can:

  1. Add signal handling to the entrypoint script itself. This requires modifying the script and might not be feasible if the entrypoint is a binary (see the sketch after this list).

  2. Use an init system that can handle signals and forward them to the main process. There are small init systems like tini that are designed for this purpose. You can use them in your Dockerfile like this:

    FROM your-base-image
    RUN apt-get update && apt-get install -y tini
    ENTRYPOINT ["/usr/bin/tini", "--", "your-command"]
    

    This will run tini as PID 1, which will handle signals and forward them to your command.

  3. Use Docker's built-in init (docker-init, which is based on tini). You can enable it with the --init option when you run your container:

    docker run --init -it --rm -p 3306:3306 -e MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=true mariadb:latest
    

    This will run Docker's built-in init as PID 1, which will handle signals and forward them to MariaDB.

These methods ensure that your Docker entrypoint can handle kill signals and perform a graceful shutdown when necessary.
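
For the first option, here is a minimal sketch of such an entrypoint (the script name and your-command are purely illustrative; it is the usual trap-and-wait pattern from the question linked in the comments):

    #!/bin/sh
    # docker-entrypoint.sh (illustrative): run the real service as a child
    # process and forward termination signals to it

    your-command "$@" &
    child=$!

    # on SIGTERM or SIGINT, pass SIGTERM on to the child
    trap 'kill -TERM "$child" 2>/dev/null' TERM INT

    # the first wait returns when the child exits or a trapped signal arrives;
    # the second wait collects the child's real exit status after the trap ran
    wait "$child"
    trap - TERM INT
    wait "$child"

With this in place, CTRL+C (or docker stop) reaches your-command even though the shell script, not the service, is PID 1.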


Is there any reason NOT to support the handling of a kill signal?

Not sure, unless you want your image to be really:

  • simple, for very simple or short-lived processes with no need for signal handling
  • fast, to avoid handling shutdown signals during critical sections of code
  • resilient, when you want to prevent unwanted or unauthorized shutdown.

But generally, ignoring termination signals can lead to issues like data loss, resource leakage, or zombie processes, and should be avoided.


The only thing left is: if not supporting the shutdown signal is somehow dangerous for a service like MariaDB, why doesn't MariaDB itself support it directly in its entrypoint?

This could be for a legacy reason, inherited from MySQL, which MariaDB is a fork of. MySQL was developed before Docker existed, and in a traditional server environment, it's less common for a process to receive a SIGINT. It's more common for a process to receive a SIGTERM when the system is shutting down, which MySQL and MariaDB do handle.
Its entrypoint reflects that, and you can run it with --init to get basic SIGINT support.

But keep in mind that this is not the only issue. I made the mistake of running a MariaDB instance on an Azure ACI (container instance) instead of AKS (Kubernetes).
When an ACI closes (at least when I used it in 2021), it sends... a SIGKILL (a signal which cannot be caught, blocked, or ignored). When the kernel sends a SIGKILL to a process, that process is immediately terminated, and it doesn't get a chance to clean up or do any other work before it's killed.

So, independently of whether your image supports graceful shutdown, be mindful of your execution environment, which might not allow any graceful shutdown in the first place.

VonC
  • This is a great explanation! Thanks! The only thing left is: if not supporting the shutdown signal is somehow dangerous for service like mariadb, why mariadb itself is not supporting it directly in the entrypoint? – gremo Jun 15 '23 at 07:24
  • @gremo I have edited the answer to address your comment. – VonC Jun 15 '23 at 08:14
4

MariaDB ignores SIGINT by masking its delivery to the process using sigprocmask.

This was done a long time ago, presumably to guard against an accidental shutdown when the server was started from a terminal. There might be a case in old-school SysV init scripts where a terminal closed quickly after service mariadb start could have delivered a SIGINT to mariadbd (unverified). I can't see any requests to remove the SIGINT masking.

In the container, the entrypoint script execs the mariadbd server executable after some basic setup, so ignoring SIGINT is simply the historical behaviour rather than an explicit decision for the image.
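
You can verify this from outside the process, since the signal masks of PID 1 are visible in /proc (a sketch; the container name is a placeholder, and SIGINT is signal 2, i.e. the 0x2 bit of the mask):

    podman exec some-mariadb grep -E '^Sig(Blk|Ign|Cgt)' /proc/1/status
    # a set 0x2 bit in SigBlk means SIGINT is blocked via sigprocmask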

The --gdb option on the command line (for the server or the container) prevents SIGINT from being masked, so you are able to use SIGINT to terminate MariaDB safely. It can also be provided in a .cnf file (see the sketch below).
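
A sketch of the option-file form (the path and file name are only illustrative; boolean server options are written without the leading dashes):

    # e.g. mounted into the container as /etc/mysql/conf.d/gdb.cnf
    [mariadbd]
    gdb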

example:

$ podman run --name mariadbsiginttest --env MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=1 --rm mariadb:10.6 --gdb

.
2023-06-18 22:42:49 0 [Note] mariadbd: ready for connections.
Version: '10.6.14-MariaDB-1:10.6.14+maria~ubu2004'  socket: '/run/mysqld/mysqld.sock'  port: 0  mariadb.org binary distribution

Explicit SIGINT (though CTRL+C on the command line also works):

$ podman kill --signal SIGINT mariadbsiginttest 
mariadbsiginttest
2023-06-18 22:44:31 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
2023-06-18 22:44:31 0 [Note] InnoDB: FTS optimize thread exiting.
2023-06-18 22:44:31 0 [Note] InnoDB: Starting shutdown...
2023-06-18 22:44:31 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2023-06-18 22:44:31 0 [Note] InnoDB: Buffer pool(s) dump completed at 230618 22:44:31
2023-06-18 22:44:31 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
2023-06-18 22:44:31 0 [Note] InnoDB: Shutdown completed; log sequence number 42294; transaction id 15
2023-06-18 22:44:31 0 [Note] mariadbd: Shutdown complete

MariaDB can handle termination at any point, including SIGKILL (which cannot be caught) or a power failure. This is part of durability (the D in ACID) for transactional databases: upon restart, uncommitted transactions are cleaned up. Doing a graceful shutdown does mean the next startup doesn't need crash recovery, however, and is therefore quicker.
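
If you do rely on graceful shutdown, you can also give the server more time before the runtime escalates to SIGKILL (a sketch; 120 seconds is an arbitrary value and the container name is the one from the example above):

    podman stop --time 120 mariadbsiginttest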

danblack