
For our application running inside a container, it is preferable that it receive a SIGTERM when the container is being (gracefully) shut down. At the same time, we want its output to go to a log file.

In the start script of our Docker container, we had therefore been using bash's exec, similar to this:

exec command someParam >> stdout.log

That worked just fine: command replaced the shell that had been the container's root process, and it would receive the SIGTERM.

Since the application tends to log a lot, we decided to add log rotation using Apache's rotatelogs tool, i.e.

exec command | rotatelogs -n 10 stdout.log 10M

Alas, it seems that with the pipe, exec can no longer have command replace the shell. Looking at the processes in the running container with pstree -p now shows this:

mycontainer@/#pstree -p
start.sh(1)-+-command(118)
            `-rotatelogs(119)

So bash remains the root process and does not pass the SIGTERM on to command. Before stumbling upon exec, I had found an approach that installs a signal handler into the bash script, which would then itself send a SIGTERM to the command process using kill. However, this became really convoluted, and getting the PID was not always straightforward either. I would like to keep the convenience of exec when it comes to signal handling while also getting piping for log rotation. Any idea how to accomplish this?
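For reference, a minimal sketch of the trap-and-forward workaround described above (all names are illustrative; `myapp` stands in for the real command, and `pgrep` is one assumed way to recover the application's PID):

```shell
#!/bin/bash
# Sketch of the trap-based workaround: run the pipeline in the background
# and forward SIGTERM to the application ourselves. "myapp" is a placeholder.

myapp 2>&1 | rotatelogs -n 10 stdout.log 10M &

# $! would be the PID of the *last* pipeline stage (rotatelogs), which is
# exactly why getting the application's PID is not straightforward.
app_pid=$(pgrep -P $$ -x myapp)   # assumption: find our direct child by name

trap 'kill -TERM "$app_pid"' TERM
wait "$app_pid"   # interrupted when the trap fires on docker stop ...
wait "$app_pid"   # ... so wait again until myapp has actually exited
```

As the question notes, this keeps bash as PID 1 and is considerably more fragile than a plain `exec`.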

PalatinateJ
  • Can you do the log management outside of Docker: have the process write to its stdout as normal, and either `docker run > container.log` or [configure an alternate Docker logging system](https://docs.docker.com/config/containers/logging/configure/)? That avoids this problem and the need to have log-management tools inside the container (and avoids the question of which filesystem is getting the collected logs). – David Maze Aug 13 '20 at 15:59
  • @DavidMaze We are actually putting all logs of all our containers in a folder that is a network share to allow easy collecting of all logs on customer clusters without a sophisticated log handling stack (EFK or whatever). The applications write a lot of other log files (out of historic reasons), so we need that anyway. We often either only get those files (and nothing on stdout/err) or only stdout/err. I should mention that we mainly focus on K8s these days. – PalatinateJ Aug 13 '20 at 16:24

2 Answers


Perhaps you want

exec sh -c 'command | rotatelogs -n 10 stdout.log 10M'
glenn jackman
  • Hi @glenn, I gave this a try (actually using `bash -c` instead of `sh -c`), but it is giving me the same pstree with non-working signal handling as before, i.e., bash being root with command and rotatelogs as its children. – PalatinateJ Aug 13 '20 at 16:26

I was able to get around this by using process substitution. For your specific case, the following may work:

exec command > >(rotatelogs -n 10 stdout.log 10M)

To reproduce the scenario, I built this simple Dockerfile:

FROM perl
SHELL ["/bin/bash", "-c"]

# The following will gracefully terminate upon docker stop
CMD exec perl -e '$SIG{TERM} = sub { $|++; print "Caught a sigterm!\n"; sleep(5); die "is the end!" }; sleep(30);' 2>&1 > >(tee /my_log)

# The following won't gracefully terminate upon docker stop
#CMD exec perl -e '$SIG{TERM} = sub { $|++; print "Caught a sigterm!\n"; sleep(5); die "is the end!" }; sleep(30);' 2>&1 | tee /my_log

Build it: `docker build -f Dockerfile.meu -t test .`

Run it: `docker run --name test --rm -ti test`

Stop it: `docker stop test`

Output:

Caught a sigterm!
is the end! at -e line 1.
rodvlopes