
I have a helper container and an app container.

The helper container handles cloning code via git into a mount shared with the app container.

I need the helper container to check the cloned code for a package.json or requirements.txt and, if one exists, run npm install or pip install -r requirements.txt, storing the dependencies in the shared mount - roughly like the sketch below. The thing is, the npm and/or pip command needs to be run from the app container, to keep the helper container as generic and agnostic as possible.
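The check I have in mind in the helper (a sketch only; /shared/code is a placeholder for the shared mount):

# run in the helper after the clone; /shared/code is a placeholder path
cd /shared/code
if [ -f package.json ]; then
    npm install                         # needs to run in the app container
elif [ -f requirements.txt ]; then
    pip install -r requirements.txt     # likewise
fi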

One solution would be to mount the docker socket into the helper container and run docker exec <app container> <command>, but what if I have thousands of such apps on a single host? Will there be issues with hundreds of containers all accessing the docker socket at the same time? And is there a better way to do this, i.e. to get commands run in another container?
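For clarity, that docker-socket approach would look roughly like this (container, volume and image names are placeholders, and it assumes the docker CLI is installed in the helper image):

# mount the host's docker socket into the helper
docker run -d --name helper \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v app1-code:/shared/code \
    helper-image

# then, from inside the helper, run the install inside the app container
docker exec app1 sh -c 'cd /shared/code && npm install'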

David Jones
Jonathan
  • Your description of the helper container role sounds like it should be the image that you build your app container from. – Matt Aug 31 '16 at 10:32
  • No, it exposes an endpoint that I use as a webhook for gogs internally, which then clones the files to the shared mount. – Jonathan Aug 31 '16 at 11:00
  • Ah ok, so you're running tasks in your containers that would normally be part of an image build. It doesn't change your problem; if you were building images you'd still need to trigger a build from the webhook container. You would be running fewer commands if you had hundreds of instances of the same app. – Matt Aug 31 '16 at 11:33
  • See this answer: https://stackoverflow.com/a/63690421/10534470 – eshaan7 Sep 01 '20 at 15:14

2 Answers


Well, there is no built-in "container to container" communication layer like "ssh". In that regard, the containers are as standalone as two different VMs (apart from the networking in general).

You could go the usual way: install openssh-server in the "receiving" container and configure it for key-based authentication only. You do not need to publish the port to the host; just connect to the port over the Docker-internal network. Deploy the SSH private key into the 'caller' container and the public key into .ssh/authorized_keys on the 'receiving' container at container start time (volume mount), so you do not keep the secrets in the image (build time).
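A rough sketch of the idea (image, network and path names here are only examples, and you still have to take care of the key files' ownership and permissions):

# a user-defined network so the containers can reach each other by name
docker network create appnet

# 'receiving' container: the app image with openssh-server, authorized_keys mounted at start
docker run -d --name app --net appnet \
    -v /srv/keys/authorized_keys:/home/deploy/.ssh/authorized_keys:ro \
    app-image-with-sshd

# 'caller' container: the helper with the matching private key mounted at start
docker run -d --name helper --net appnet \
    -v /srv/keys/id_ed25519:/root/.ssh/id_ed25519:ro \
    helper-image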

Probably also create an ssh alias in .ssh/config and set StrictHostKeyChecking to no, since the containers could be rebuilt (and get new host keys). Then do

ssh <alias> your-command
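For example, an entry like this in the caller's .ssh/config gives you that alias (host name, user and key path are only examples):

cat >> /root/.ssh/config <<'EOF'
Host app
    HostName app
    User deploy
    IdentityFile /root/.ssh/id_ed25519
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF

# then, from the caller:
ssh app 'cd /shared/code && npm install'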
Eugen Mayer
  • This is one way to do it. However, it would mean installing openssh and messing around with keys. It does clarify things a bit to think of the containers as VMs; if they were VMs, ssh would without a doubt be my go-to. Any info on the cons of mounting the docker socket? – Jonathan Aug 31 '16 at 09:12
  • I've heard about the docker-socket way + using `docker exec`, but I consider this a severe security flaw. If someone takes over your 'caller' container, he will be able to access every single docker container on your host, not only the ones in the current application docker stack. It's full access to all running docker containers - way too much – Eugen Mayer Aug 31 '16 at 09:14
  • Makes sense, but what if the socket is mounted only on the helper container? The running app in the app container has no interaction with it beyond sharing a mounted folder. Would this be okay? – Jonathan Aug 31 '16 at 09:20
  • If you mount the socket there and the attacker gets access to this docker container, he can use the socket to access every single container on the host - no matter that you only mounted the socket "in this container" - it's the host's socket for _all containers_ – Eugen Mayer Aug 31 '16 at 09:22
  • Thing is, if the attacker hasn't leveraged some bug in the running app container to access the helper container where the socket is mounted, the only other way into the helper container would be from the host - and I imagine if the attacker has access to the host, they could do a lot worse and would have access to all containers already. – Jonathan Aug 31 '16 at 09:26
  • The point is, having a severe bug in a single container leads to access to all containers - that is not what you want. – Eugen Mayer Aug 31 '16 at 09:27
  • Unless of course they access the host as a user other than root and are unable to escalate privileges; then somehow gaining access to the helper container would be their way into all the other containers. Though if they aren't root, they probably wouldn't be able to run any docker commands anyway. – Jonathan Aug 31 '16 at 09:28
  • True. Thanks. If I can't find another way I'll go with ssh. – Jonathan Aug 31 '16 at 09:30
  • https://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/ – Matt Aug 31 '16 at 11:23

Found that better way I was looking for :-).

Using supervisord and enabling its XML-RPC server lets me run something like:

supervisorctl -s http://127.0.0.1:9002 -utheuser -pthepassword start uwsgi

In the helper container, this will connect to the RPC server running on port 9002 on the app container and start a program block that may look something like:

[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"

This is exactly what I needed!

For anyone who finds this, you'll probably need the supervisord.conf in your app container to look something like:

[supervisord]
nodaemon=true

[supervisorctl]

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[inet_http_server]
port=127.0.0.1:9002
username=user
password=password

[program:uwsgi]
directory=/app
command=/usr/sbin/uwsgi --ini /app/app.ini --uid nginx --gid nginx --plugins http,python --limit-as 512
autostart=false
autorestart=unexpected
stdout_logfile=/var/log/uwsgi/stdout.log
stdout_logfile_maxbytes=0
stderr_logfile=/var/log/uwsgi/stderr.log
stderr_logfile_maxbytes=0
exitcodes=0
environment = HOME="/app", USER="nginx"

You can set up the inet_http_server to listen on a Unix socket instead of a TCP port, and you can link the containers so the helper can reach the app container by hostname.
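For my original use case, the helper can then trigger the installs in the app container with something like this (the "app" hostname and the program names are just examples; they assume matching [program:npm-install] and [program:pip-install] blocks in the app container's supervisord.conf, analogous to the uwsgi block above):

# run from the helper container, with the app container linked/reachable as "app"
supervisorctl -s http://app:9002 -uuser -ppassword start npm-install
supervisorctl -s http://app:9002 -uuser -ppassword start pip-install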

Jonathan