
My Objective: I want to be able to restart a container based on the official Python image from a command run inside the container.

My system: I have my own Docker image, based on the official Python image, which looks like this:

FROM python:3.6.15-buster
WORKDIR /webserver
COPY requirements.txt /webserver
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip3 install -r requirements.txt --no-binary :all:
COPY . /webserver
ENTRYPOINT ["./start.sh"]

As you can see, the image does not run a single Python file; instead it executes a script called start.sh, which looks like this:

#!/bin/bash

echo "Starting"
echo "Env: $ENTORNO"

exec python3 "$PATH_ENTORNO""Script1.py" &
exec python3 "$PATH_ENTORNO""Script2.py" &
exec python3 "$PATH_ENTORNO""Script3.py" &

All of this works perfectly, but I want the entire container to be restarted if, for example, script 3 fails.

My approach: I had two ideas for this problem. The first was to execute a reboot command from the Python script, something like this:

from subprocess import call

[...]

call(["reboot"])

This does not work inside the Python Debian image; it fails with:

reboot: command not found

The other approach was to mount the docker.sock inside the container, but this time the error is:

root@MachineName:/var/run# /var/run/docker.sock docker ps
bash: /var/run/docker.sock: Permission denied

I don't know whether I'm going about these two approaches the right way, but any help would be very much appreciated.

Javi Martínez

3 Answers


Update

After thinking about it, I realised you could send a signal to PID 1 (your entrypoint), trap it, and use a handler to exit with an appropriate code so that Docker will restart the container.

Here's an MRE:

Dockerfile

FROM python:3.9
WORKDIR /app
COPY ./ /app
ENTRYPOINT ["./start.sh"]

start.sh

#!/usr/bin/env bash
python script.py &

# This traps user defined signal and kills the last command
# (`tail -f /dev/null`) before exiting with code 1.
trap 'kill ${!}; echo "Killed by backgrounded process"; exit 1' USR1

# Launches `tail` in the background and sets this program to wait
# for it to finish, so that it does not block execution
tail -f /dev/null & wait $!

script.py

import os
import signal

# Process 1 will be your entrypoint if you declared it in `exec-form`*
print("Sending signal to stop container")
os.kill(1, signal.SIGUSR1)

*exec form

Testing it

> docker build . -t test
> docker run test
Sending signal to stop container
Killed by backgrounded process
> docker inspect $(docker container ls -n 1 -q) --format='{{.State.ExitCode}}'
1

Original post

I think the safest bet is to instruct Docker to restart your container when there's a failure. Then you only have to exit your program with a non-zero code (e.g. run exit 1 from your start.sh) and Docker will restart it from scratch.
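If the sticking point is that start.sh backgrounds the scripts and so never sees their exit codes, one option is a small Python supervisor as the entrypoint: launch the scripts, wait for all of them, and exit non-zero if any failed, so the restart policy kicks in. A minimal sketch (the demo uses inline commands; in the real entrypoint these would be the Script1.py, Script2.py, Script3.py invocations):

```python
import subprocess
import sys

def supervise(commands):
    """Launch every command, wait for all of them, and return a
    non-zero code if any of them failed (so Docker can restart us)."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return 1 if any(p.wait() != 0 for p in procs) else 0

# Demo with inline commands; replace with your real script invocations.
ok = supervise([[sys.executable, "-c", "print('fine')"]])
bad = supervise([[sys.executable, "-c", "import sys; sys.exit(3)"]])
print(ok, bad)  # 0 1
```

With a restart-on-failure policy set, ending the entrypoint with `sys.exit(supervise([...]))` is enough to trigger the restart. Note this variant waits for all scripts to finish before exiting; with bash >= 4.3, `wait -n` in start.sh gives a fail-fast equivalent without Python.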

Option 1: docker run --restart

Related documentation

docker run --restart on-failure <image>

Option 2: Using docker-compose

Version 3

In your docker-compose.yml you can set a restart_policy for the service you want restarted. Note that in the version 3 file format, restart_policy goes under the deploy key (honoured by Docker Swarm, and by recent Docker Compose releases). i.e:

version: "3"
services:
  app:
    ...
    deploy:
      restart_policy:
        condition: on-failure
    ...

Version 2

Before version 3, the same policy could be applied with the restart directive, which allows for less configuration.

version: "2"
services:
  app:
    ...
    restart: "on-failure"
    ...
EDG956
  • Yes, I have that policy set, but the problem is that there is no exit 1 in start.sh, since there is no way in start.sh to know whether a Python script has failed – Javi Martínez Sep 29 '22 at 10:51
  • Then you could ask another question to address that, i.e: how to capture exit code of backgrounded processes. Check this one out: https://stackoverflow.com/q/1570262/8868327 – EDG956 Sep 29 '22 at 10:55
  • I'd avoid mounting the `docker.sock` only for that. Also, you're still left with having to catch the script failure somewhere, which would require modifying your python scripts anyway – EDG956 Sep 29 '22 at 10:56
  • @JaviMartínez I've come up with something that you may find helpful – EDG956 Sep 29 '22 at 14:30
  • Thank you very much for this. Your second approach was very convenient, but I think I found a better one (which I will publish here). Anyway, thank you very much for your inestimable help. – Javi Martínez Sep 29 '22 at 14:40

Well, in the end the solution was much simpler than I expected.

I started from the point of mounting the Docker socket inside the container (I know this practice is not recommended, but in my case it does not pose a security problem), using this in docker-compose:

volumes:
  - /var/run/docker.sock:/var/run/docker.sock

Then it was as simple as using the Docker library for Python, which exposes the complete SDK through that socket and let me restart the container from inside the Python script in an ultra-simple way.

import docker

[...]

docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock')
docker_client.containers.get("container_name").restart()
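One caveat with the snippet above: it hard-codes "container_name", which ties the script to a single deployment. By default Docker sets a container's hostname to its own short ID, so (assuming the hostname has not been overridden via --hostname or compose's hostname: key) the script can discover its own identifier at runtime:

```python
import socket

# Docker's default hostname inside a container is the container's
# short ID, which the SDK accepts as a container identifier.
container_id = socket.gethostname()
print(container_id)

# Then, with the client from the snippet above:
# docker_client.containers.get(container_id).restart()
```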
Javi Martínez

Is there any reason why you are running three processes in the same container? Following microservice-architecture basics, only one process should run in a container, so you should run three containers for the three scripts. All three scripts should include logic so that if one of the three containers becomes unreachable, it gets killed (and restarted).

Abhishek S
  • Yeah, I know that, but the idea wasn't mine... This is all legacy code, so I have to adapt to it. But if I had to start again, I would go for that approach. – Javi Martínez Sep 29 '22 at 14:38