
I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a Docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host Docker daemon through its REST API using the "host.docker.internal" DNS name, and this works great. However, if I run the same compose file on Linux, "host.docker.internal" does not resolve (yet; it seems it may be coming in the next version of Docker). To make matters worse, on Linux I can work around the issue with a network mode of "host", but that isn't supported on Windows or Mac.
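
For reference, the Linux-only workaround I mean looks roughly like this in a compose file (a sketch; the service and image names are placeholders):

    services:
      deployer:
        image: my-deployer-image   # placeholder
        network_mode: "host"       # shares the host's network namespace; Linux only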

How can I either:

  1. Create a docker-compose file or structure a containerized application to be slightly different based on the host platform (windows|mac|linux) without having to create multiple docker-compose.yml files or different application code?
  2. Access the host docker daemon in a consistent way regardless of the host OS?

If it matters, the container that accesses the host's Docker daemon uses the Docker Python SDK and makes API calls to Docker over TCP without TLS (this is for development only).
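
Roughly, the SDK call that works on Windows/Mac looks like this (a sketch only; it assumes the host daemon has been configured to listen on the default unencrypted port 2375):

    import docker

    # Development-only setup described above: plain TCP, no TLS.
    client = docker.DockerClient(base_url="tcp://host.docker.internal:2375")
    print(client.version()["Version"])  # sanity check that the daemon is reachable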

Update w/ Solution Detail

For a little more background, there's a web application (ASP.NET Core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported Docker image file. There's also an nginx container in front of all of this for SSL termination and load balancing. The web application extracts the Docker image, then, using the Docker daemon's HTTP API, loads the image, re-tags it, and pushes it to a private Docker repository (which is running somewhere on the developer's network, external to Docker). After that, it posts a message to a message queue, where a separate Python application uses the Python Docker library to deploy the image to a Docker swarm.
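
The load/tag/push steps actually happen in the C# application via the HTTP API, but as a rough sketch of the flow, here is the equivalent with the Python Docker SDK (the registry address, file name, and service name are placeholders, not the real values):

    import docker

    client = docker.from_env()

    # Load the exported image that was extracted from the uploaded zip.
    with open("exported-image.tar", "rb") as f:
        image = client.images.load(f.read())[0]

    # Re-tag it for the private registry and push it there.
    image.tag("registry.internal:5000/uploaded-app", tag="latest")
    client.images.push("registry.internal:5000/uploaded-app", tag="latest")

    # Later, the separate Python worker deploys the image as a swarm service.
    client.services.create(
        image="registry.internal:5000/uploaded-app:latest",
        name="uploaded-app",
    )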

For development purposes, the applications all run as containers and thus need to interact with Docker running on the host machine as a standalone swarm node. Software Engineer's answer led me down the right path. I mapped the Docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5: there's no clean way of doing HTTP over a Unix socket.

I worked around that issue after eventually realizing that nginx can reverse-proxy HTTP traffic to a Unix socket. I set up all containers (including the dynamically loaded swarm service from the zip files) to be part of an overlay network, giving them all access to each other and allowing me to hit an HTTP endpoint that controls the host machine's Docker/swarm daemon.

The last hurdle I ran into was that nginx couldn't write to the mapped-in /var/run/docker.sock file, so I modified nginx.conf to allow it to run as root within the container.
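
For reference, the relevant parts of the nginx config ended up looking roughly like this (a sketch, not the exact file; the listen port is illustrative):

    # Run the workers as root so nginx can use the bind-mounted Docker socket.
    user root;

    events {}

    http {
        # The host's Docker socket, bind-mounted into this container at the standard path.
        upstream docker {
            server unix:/var/run/docker.sock;
        }

        server {
            listen 2375;    # illustrative port for the daemon proxy

            location / {
                proxy_pass http://docker;
            }
        }
    }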

Tom
  • Mapping `/var/run/docker.sock` to `/var/run/docker.sock` works for me on all host platforms – Software Engineer Mar 29 '20 at 14:34
  • Windows doesn't have any concept of /var/run/docker.sock unless there's something new I don't know about. – Tom Mar 29 '20 at 15:08
  • I did exactly this in wsl2 (ubuntu) with docker-desktop just a few minutes ago and it works fine. I also tried it with the standard windows cli (cmd) and it works there too. I guess there's something you don't know :) – Software Engineer Mar 29 '20 at 15:08
  • Unfortunately these machines won't have access to wsl2 yet, does the technique work for wsl(1)? – Tom Mar 29 '20 at 15:11
  • Also, with /var/run/docker.sock mapped in, what would the URI for the API call to that socket look like? – Tom Mar 29 '20 at 15:12
  • I don't really know actually, and I can't downgrade my system to test it. But it's easy enough to test -- spin up a container with docker installed, use `-v /var/run/docker.sock:/var/run/docker.sock` and inside the container run docker ps. If it works then you have access to the socket. – Software Engineer Mar 29 '20 at 15:13
  • My example maps this to /var/run/docker.sock which is the standard location for docker (which is what you should use with any linux container to avoid confusion) – Software Engineer Mar 29 '20 at 15:14
  • I'm looking into re-working things to fully try this, but it does seem like it will work. @Software-engineer, if you want to post an "answer" to this question instead of a comment, I can then accept it as the answer once I have things re-worked (probably not until tomorrow sometime). – Tom Mar 29 '20 at 17:32

1 Answer


As far as I can tell, the Docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this on a recent Linux distro (Ubuntu) and on Windows 10 Pro running Docker for Windows (2.2.0), both from WSL2 (Ubuntu and Alpine) and from the standard Windows CLIs (cmd and PowerShell). From memory, it works on OS X too, and I used to do the same thing in WSL1.

Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,

docker container run -v /var/run/docker.sock:/var/run/docker.sock <image>

This mounts the socket at the same path inside the container. It means you can access the socket using the standard Docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
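
In a compose file the equivalent is roughly this (the service and image names here are just examples):

    services:
      webapp:             # example service name
        image: my-webapp  # placeholder image
        volumes:
          # Bind-mount the host's Docker socket into the container.
          - /var/run/docker.sock:/var/run/docker.sock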

Software Engineer
  • So I have made some progress with this but ran into a pretty big roadblock, unfortunately. Because this only maps the socket in, I cannot hit "http://:2375" to utilize Docker's HTTP API from C#, which is what I ultimately need to do :( – Tom Mar 30 '20 at 23:13
  • I'd love to get into the million reasons why C# is such a bad idea for what you're doing (or for anything at all), but have you thought about using bash instead and just using the docker client? And, read this: https://stackoverflow.com/questions/37178824/how-do-i-find-the-docker-rest-api-url – Software Engineer Mar 31 '20 at 13:27
  • We have an aspnetcore (c#) web api that gets a zip file uploaded to it. One of the things in the zip is an exported docker image. That image then gets loaded to the docker instance running on the host, re-tagged for our private repository, and then pushed. After that a message gets placed on a message queue and a separate python application (also running in container) deploys a docker swarm service with that image. Key point that makes this all very tricky is this has to run 100% offline with no connection to internet. Works great on windows/mac. – Tom Mar 31 '20 at 14:35
  • You could try the `https://:2375` URI, and if you get a timeout, try hitting the Unix socket (`unix:///var/run/docker.sock`) instead? – Software Engineer Mar 31 '20 at 16:57
  • btw, I've just tried, from inside an alpine container, to curl the unix socket and it worked (`curl --unix-socket /var/run/docker.sock http:/containers/json`), and I tried the host.docker.internal dns method (`curl host.docker.internal:2375/v1.24/containers/json`) from wsl2/ubuntu windows10-docker-desktop and that worked too. – Software Engineer Mar 31 '20 at 17:06
  • Another approach would be to pass the zip from your crippled container (the C# one) to one running Linux that can simply access the Unix socket all the time. You could do that most simply by placing the file on a docker volume and starting a Linux-based container sharing the same volume to do the work (it would have to be running occasionally, polling for files). In that container, you could have a simple bash script that tags the image and pushes it to your registry. – Software Engineer Mar 31 '20 at 17:09
  • Yes, I knew I could curl, but I would prefer not to shell out from the web application. So I thought I'd found a workaround by running code to obtain the default gateway (which would route to the host), and this works as long as the network the container is connected to is of type "bridge", but in this case it's "overlay", so that doesn't work. Docker version 20.04 is adding "host.docker.internal" to Linux, so I may be stuck until then. Still thinking it through, though. Your other container idea is a possibility; will report back. – Tom Mar 31 '20 at 21:17
  • So I believe I have a solution but need to work through one more gotcha. As part of my docker-compose I am already using nginx as a reverse proxy that is connected to the overlay network. I am binding docker.sock into that nginx container and reverse proxying http to that socket. When I bind /var/run/docker.sock into the container, the container doesn't have permission to use it by default. I can docker exec in and fix it but still working through how to have it work immediately after docker-compose up. – Tom Mar 31 '20 at 23:30
  • How do you fix it when you exec in? Also, DinD (Docker-in-Docker) is an old idea and I think it effectively works out the same as using the docker CLI in a container and sharing the socket as documented here. – Software Engineer Apr 01 '20 at 12:00
  • I fix it either by chmod 777 /var/run/docker.sock, or, as I saw in another post, by creating the docker group with gid 999 and adding root to that group. Since I am building the nginx container from a Dockerfile (so I can copy in my own nginx.conf), I should be able to do the groupadd step at build time. Will try that later when I get to work. You definitely have led me down the right path with /var/run/docker.sock, so thanks for that. Getting close. – Tom Apr 01 '20 at 12:45
  • Because you're mapping the docker socket into the container, any change to the access rights will apply to the host's docker socket, so you shouldn't do that. Better to run the container with the right access rights, the equivalent of `docker run -u 1000:1000`, which sets the user and group IDs. – Software Engineer Apr 01 '20 at 13:04
  • Thanks for your help! I got everything working and the key to get there was the mapping of /var/run/docker.sock as you initially suggested. I've updated the question with more details about the problems and ultimate solution despite my situation most likely being a very unique use case. – Tom Apr 02 '20 at 18:37