7

I have experimented with packaging my site-deployment script in a Docker container. The idea is that all of my services run inside containers, and a special management container is then used to manage the other containers.

The idea is that my host machine should be as dumb as possible (currently I use CoreOS, with the only state being a systemd unit that starts my management container).

The management container would be used as a push target for creating new containers from the source code I send to it (over SSH, I think; at least that is what I use now). The script also keeps persistent data (database files, logs and so on) in a separate container and manages back-ups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this I forward the Docker Unix socket into the management container using the -v option when starting it.
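To make this concrete, here is a minimal sketch of how such a management container might be started; the image name deploy-manager, the published SSH port and any paths are placeholders of mine, not something from the question:

    # Bind-mount the Docker socket so the management container can drive the
    # host's Docker daemon; publish port 22 so it can act as an SSH push target.
    # The image name and port mapping below are hypothetical.
    docker run -d \
      --name manager \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -p 2222:22 \
      deploy-manager

Anything that can talk to that socket can create, stop and delete any container on the host, which is what the security discussion in the answers below is about.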

Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this.

Krumelur
  • 31,081
  • 7
  • 77
  • 119

2 Answers

7

This is totally OK, and you're not the only one to do it :-)

Another example is to use the management container to handle authentication for the Docker REST API. It would accept connections on an EXPOSEd TCP port, itself published with -p, and proxy requests to the UNIX socket.
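As a sketch of that idea, assuming socat as the forwarder (the answer does not name a tool, and the port numbers are arbitrary); a real setup would put authentication or request filtering in front of this:

    # Hypothetical proxy container: listens on TCP 2375 and forwards traffic
    # to the bind-mounted Docker socket, using the alpine/socat image.
    docker run -d \
      --name docker-api-proxy \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -p 2375:2375 \
      alpine/socat \
      tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock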

jpetazzo
  • 14,874
  • 3
  • 43
  • 45
  • that's not very good, because it leaves your server open for anyone to run Docker commands. The sock file is there for a good reason -- security – Matej Aug 10 '14 at 21:35
  • No, it doesn't open your server. It gives access to the *container* (and everything in it); that is different. If you refer to the second part of my answer: I'm not suggesting to open the API to the public, but to open a management container. This management container would filter the REST API. – jpetazzo Nov 10 '14 at 04:55
2

As this question is still relevant today, I want to answer in a bit more detail:

It is possible to work with this setup, where you pass the Docker socket into a running container. This is done by many solutions and works well. BUT you have to think about the problems that come with it:

  1. If you want to use the socket, you have to be root inside the container, which allows the execution of arbitrary commands there. So, for example, if an intruder controls this container, he controls all other Docker containers as well (see the sketch after this list).
  2. If you expose the socket on a TCP port, as suggested by jpetazzo, you have the same problem, only worse: now an attacker does not even have to compromise the container, just the network. If you filter the connections (as suggested in his comment), the first problem remains.
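To make the first point concrete, here is a hedged sketch of what anyone with access to the mounted socket can do (the alpine image is just an example): start a privileged container that mounts the host's root filesystem, which effectively means root on the host.

    # Run from inside the management container, using the docker CLI against
    # the bind-mounted /var/run/docker.sock.
    docker run --rm -it \
      --privileged \
      -v /:/host \
      alpine chroot /host /bin/sh
    # This drops you into a root shell on the host, not just inside a container.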

TL;DR

You can do this and it will work, but you should think about the security implications first.

Jan
  • 407
  • 4
  • 16
  • Thank you. A lot of time has passed since I asked the question, and these days I believe it is quite a proven pattern to use the socket inside a container, especially for build/deploy containers (which, IIRC, the question was about). – Krumelur Feb 06 '20 at 21:51
  • Hi Krumelur, when you say it's a proven pattern now to use the socket inside the container (with the root user), does that mean the security concerns @Jan raised are mitigated through some approaches? I have a similar requirement and was brainstorming how to get it working with a non-root user, mainly because I'm worried about the same security concerns. Your feedback would be hugely appreciated. Thanks. – Raj Jul 10 '20 at 12:28