
I have containers with Python apps and I need them to automatically start and expose SSH when run. I know it's against Docker's best practices, but right now I don't have any other solution. I'd be interested to know the best way to automatically run an additional service in a Docker container anyway.

Since Docker will only start one process, installing sshd isn't enough. There are apparently multiple options to deal with it:

  1. use a process manager like Monit or Supervisor
  2. use the ENTRYPOINT option
  3. append a command (service sshd start, for instance) at the end of /etc/bash.bashrc (see this answer)

Option 1 seems overkill to me. Also, I suppose I'll then have to run the container with a command calling the process manager instead of bash or my Python app, which is not exactly what I want.

I don't know how to use Option 2 for such a case. Should I write a custom script that starts sshd and then runs the provided command, if any? What should such a script look like?

Option 3 is very straightforward but quite dirty. Also, it won't work if I run the container with a command other than /bin/bash.

What's the best solution, and how do I set it up?

Anto

3 Answers


You mention that option 1 seems like overkill. Why is it overkill? Supervisor is very simple to configure and will basically do what you want.

First, write a supervisor config file that starts your python app and sshd:

[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:pythonapp]
command=/path/to/python myapp.py -x args etc etc

Call that file supervisord.conf and commit it somewhere in your repo. In your Dockerfile, copy that file to the container as one of the container build steps, expose the ports for SSH and your app (if needed) and set the CMD to start supervisord:

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]

This is clean and easy to understand. It's how I run multiple processes in a container when needed. It is even suggested in the Docker docs as a nice solution.
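For completeness, a full Dockerfile for this setup might look like the following sketch (the base image, package names, and app path are assumptions, not part of the original answer):

```dockerfile
# Hypothetical example, assuming a Debian/Ubuntu base image
FROM ubuntu:20.04

# Install supervisor, the SSH server, and python;
# sshd on Debian-based images needs /var/run/sshd to exist
RUN apt-get update && apt-get install -y supervisor openssh-server python3 \
    && mkdir -p /var/run/sshd

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY myapp.py /app/myapp.py

EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
```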

Ben Whaley
  • "Overkill" just because I thought I could come up with something even more simple. But this is apparently the most popular solution right now: thanks for the details ! – Anto Feb 25 '15 at 17:35
  • Plus, you can stop the python app from outside the container using `docker exec -i -t <container> supervisorctl stop pythonapp`, and you can use `start` to start it back up. If the app is killed some other way, Supervisor will restart it. You can test this by killing the python app with `kill` and seeing that it comes back. – Fran K. Jun 27 '15 at 16:55

If you don't want to use a process manager, you can wrap your container's actual command in a shell script that starts the SSH service first and then executes that command. (Note that containers typically run as root, so sudo is usually unnecessary.)

#!/bin/sh
service ssh start
python myapp.py -x args blah blah

This will start SSH as a daemon, and your python app will then start afterwards.
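A hypothetical Dockerfile using such a wrapper could look like this (the script name, base image, and package names are assumptions):

```dockerfile
# Assumes the wrapper above is saved as start.sh next to the Dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y openssh-server python3
COPY start.sh /start.sh
RUN chmod +x /start.sh
EXPOSE 22
CMD ["/start.sh"]
```

Note that with this approach, if the SSH daemon dies it won't be restarted, which is one reason a process manager like Supervisor is usually preferred.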

aholt

Yes, we can configure Supervisord to run multiple processes in a container. If you want to use openssh-server, configure Supervisor like this:

[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

Save this as supervisord.conf. Then add the file to the Docker image and update the Dockerfile:

RUN apt-get update && apt-get install -y supervisor openssh-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]

Reference: Gotechnies