I have a Compose file locally. How can I run the bundle of containers on a remote host, like `docker-compose up -d` with `DOCKER_HOST=<some ip>`?

- Your question is unclear to me. Do you actually want to deploy a container on a remote host through `docker-compose`? – Auzias Feb 16 '16 at 13:45
- Yes, like https://docs.docker.com/compose/production/ but I want to do it without docker-machine. – Arsen Feb 16 '16 at 14:20
- Check out this article: https://developer.rackspace.com/blog/dev-to-deploy-with-docker-machine-and-compose/ – JAM Feb 16 '16 at 19:10
- I don't understand the question. `DOCKER_HOST=... docker-compose up -d` should work. – dnephin Feb 17 '16 at 23:38
6 Answers
After the release of Docker 18.09.0 and the (as of now) upcoming docker-compose v1.23.1 release, this will get a whole lot easier. That Docker release added support for the `ssh` protocol in the `DOCKER_HOST` environment variable and the `-H` argument to `docker ...` commands. The next `docker-compose` release will incorporate this feature as well.
First of all, you'll need SSH access to the target machine (which you'll probably need with any approach).
Then, either:
# Re-direct to remote environment.
export DOCKER_HOST="ssh://my-user@remote-host"
# Run your docker-compose commands.
docker-compose pull
docker-compose down
docker-compose up
# All docker-compose commands here will be run on remote-host.
# Switch back to your local environment.
unset DOCKER_HOST
Or, if you prefer, all in one go for one command only:
docker-compose -H "ssh://my-user@remote-host" up
One great thing about this is that all the local environment variables you might use in your `docker-compose.yml` file for configuration are available without having to transfer them to remote-host in some way.
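For illustration, a small sketch of that point (the variable name APP_TAG and its use in docker-compose.yml are assumptions, not part of the original answer): substitution happens on your machine before any API call reaches remote-host.
# Hypothetical: docker-compose.yml references ${APP_TAG} somewhere.
export APP_TAG=1.4.2
export DOCKER_HOST="ssh://my-user@remote-host"
docker-compose config   # renders the file locally, APP_TAG already filled in
docker-compose up -d    # sends the resulting API calls to remote-host
unset DOCKER_HOST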

- Would I need to have Compose on the remote server, or does it do all the setup for me there? – Kode Mar 04 '19 at 21:49
- As far as I know, you will not need `docker-compose` on the remote server. `docker-compose` is just a wrapper for running various regular `docker` commands on your behalf. When running `docker-compose -H "ssh://my-user@remote-host" up`, this translates on the controlling machine into something along the lines of `docker -H "ssh://my-user@remote-host" run --name foo some-image ...`, so no `docker-compose` commands will be run on the remote machine. – Dirk Mar 05 '19 at 09:28
You can now use docker contexts for this:
docker context create dev --docker "host=ssh://user@remotemachine"
docker-compose --context dev up -d
More info here: https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
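As a small, hedged addition to the snippet above (assuming the dev context created there and a docker-compose version recent enough to honor the active context): you can also switch the active context instead of passing --context every time.
docker context use dev       # subsequent commands target the remote engine
docker-compose up -d
docker context use default   # switch back to the local engine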

- 645
- 1
- 7
- 19
- How can we connect using `docker context` if the server needs key authentication? How do we provide the key? – Muhammad Tariq May 14 '21 at 20:03
- @MuhammadTariq in that case you should create an `~/.ssh/config` file where you specify the username, host, port, key location, etc., and then create a context that uses it: `--docker "host=ssh://myhost"`. Be sure not to make my mistake: I ran Docker with `sudo` while my SSH config lived in my user folder `~/.ssh`, so Docker was unable to resolve the host I was providing. – Darkzarich Feb 13 '23 at 00:00
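A minimal sketch of what that could look like (the host alias myhost, the address, and the key path are assumptions):
# Append a host alias with key-based authentication to the SSH config of the
# same user that runs docker (see the note about sudo above).
cat >> ~/.ssh/config <<'EOF'
Host myhost
    HostName 203.0.113.10
    User my-user
    IdentityFile ~/.ssh/id_ed25519
EOF
# The context only needs the alias; user, port and key are resolved by ssh.
docker context create prod --docker "host=ssh://myhost"
docker-compose --context prod up -d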
If you don't need to run the containers on your local machine but on a remote machine instead, you can change this in your Docker settings.
On the local machine, you can control the remote host with the -H parameter:
docker -H tcp://remote:2375 pull ubuntu
To use it with docker-compose, you should add this parameter in /etc/default/docker.
On the remote machine, you should configure the daemon to listen on an external address and not only on the Unix socket. See "Bind Docker to another host/port or a Unix socket" for more details.
If you need to run containers on multiple remote hosts, you should configure Docker Swarm.
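A rough sketch of both sides of this setup (treat the file path, the port, and the unauthenticated TCP binding as assumptions; the exact mechanism differs between distributions and Docker versions, and see the security comment below):
# On the remote machine: let dockerd listen on TCP in addition to the Unix
# socket, e.g. via /etc/docker/daemon.json (never expose this to an untrusted
# network without TLS):
#   { "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"] }
# On the local machine: point docker and docker-compose at the remote daemon.
export DOCKER_HOST=tcp://remote:2375
docker info
docker-compose up -d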
- Keep in mind that by default this allows anybody on the network to access that machine's Docker daemon. Since access to the Docker daemon is kind of similar to `sudo` rights on that machine, you should only do this in a trusted environment. There are approaches to secure or limit the access to trusted clients only, which is explained at https://docs.docker.com/engine/security/https/#create-a-ca-server-and-client-keys-with-openssl or in several blog posts on the topic; it is more of an advanced topic, though. – Dirk Nov 01 '18 at 14:16
From the Compose documentation on "Compose CLI environment variables":
DOCKER_HOST: Sets the URL of the Docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.
So we can do:
export DOCKER_HOST=tcp://192.168.1.2:2375
docker-compose up

- This only works if you configured the Docker daemon on the target machine to allow remote access via TCP, as explained e.g. here: https://success.docker.com/article/how-do-i-enable-the-remote-api-for-dockerd Also keep the security concerns in mind which I outlined in my comment to @Thibaut's answer. – Dirk Nov 01 '18 at 14:18
Yet another possibility I discovered recently is controlling a remote Docker Unix socket via an SSH tunnel (credits to https://medium.com/@dperny/forwarding-the-docker-socket-over-ssh-e6567cfab160 where I learned about this approach).
Prerequisite
You are able to SSH into the target machine. Passwordless, key-based access is preferred for security and convenience; you can learn how to set this up here: https://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
Besides, some sources mention that forwarding Unix sockets via SSH tunnels is only available starting from OpenSSH v6.7 (run ssh -V to check); I did not try this out on older versions, though.
SSH Tunnel
Now, create a new SSH tunnel between a local location and the Docker Unix socket on the remote machine:
ssh -nNT -L $(pwd)/docker.sock:/var/run/docker.sock user@someremote
Alternatively, it is also possible to bind to a local port instead of a file location. Make sure the port is open for connections and not already in use.
ssh -nNT -L localhost:2377:/var/run/docker.sock user@someremote
Re-direct Docker Client
Leave the terminal open and open a second one. In there, make your Docker client talk to the newly created tunnel-socket instead of your local Unix Docker socket.
If you bound to a file location:
export DOCKER_HOST=unix://$(pwd)/docker.sock
If you bound to a local port (example port as used above):
export DOCKER_HOST=localhost:2377
Now, run some Docker commands like docker ps, start a container, pull an image, etc. Everything will happen on the remote machine as long as the SSH tunnel is active. In order to run local Docker commands again:
- Close the tunnel by hitting Ctrl+C in the first terminal.
- If you bound to a file location: remove the temporary tunnel socket again, otherwise you will not be able to open the same one later:
rm -f "$(pwd)"/docker.sock
- Make your Docker client talk to your local Unix socket again (which is the default if unset):
unset DOCKER_HOST
The great thing about this is that you save the hassle of copying docker-compose.yml files and other resources around or setting environment variables on a remote machine (which is difficult).
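To tie this back to the original question, a short sketch (assuming the file-based tunnel from above is still open in the other terminal):
# docker-compose also talks to the remote engine through the tunnel, while
# variable substitution in docker-compose.yml still happens locally.
export DOCKER_HOST=unix://$(pwd)/docker.sock
docker-compose up -d
docker-compose ps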
Non-interactive SSH Tunnel
If you want to use this in a scripting context where an interactive terminal is not possible, there is a way to open and close the SSH tunnel in the background using the SSH ControlMaster and ControlPath options:
# constants
TEMP_DIR="$(mktemp -d -t someprefix_XXXXXX)"
REMOTE_USER=some_user
REMOTE_HOST=some.host
control_socket="${TEMP_DIR}"/control.sock
local_temp_docker_socket="${TEMP_DIR}"/docker.sock
remote_docker_socket="/var/run/docker.sock"
# Open the SSH tunnel in the background. Thanks to ExitOnForwardFailure=yes it
# will not fork into the background before the forwarding is established and
# will fail instead if the forwarding cannot be set up.
ssh -f -n -M -N -T \
-o ExitOnForwardFailure=yes \
-S "${control_socket}" \
-L "${local_temp_docker_socket}":"${remote_docker_socket}" \
"${REMOTE_USER}"@"${REMOTE_HOST}"
# re-direct local Docker engine to the remote socket
export DOCKER_HOST="unix://${local_temp_docker_socket}"
# do some business on remote host
docker ps -a
# close the tunnel and clean up
ssh -S "${control_socket}" -O exit "${REMOTE_HOST}"
rm -f "${local_temp_docker_socket}" "${control_socket}"
unset DOCKER_HOST
# do business on localhost again

Given that you are able to log in to the remote machine, another approach to running docker-compose commands on that machine is to use SSH.
Copy your docker-compose.yml file over to the remote host via scp, run the docker-compose commands over SSH, and finally clean up by removing the file again.
This could look as follows:
scp ./docker-compose.yml SomeUser@RemoteHost:/tmp/docker-compose.yml
ssh SomeUser@RemoteHost "docker-compose -f /tmp/docker-compose.yml up"
ssh SomeUser@RemoteHost "rm -f /tmp/docker-compose.yml"
You could even make it shorter and omit sending and removing the docker-compose.yml file by using the -f - option to docker-compose, which will expect the docker-compose.yml file to be piped from stdin. Just pipe its content to the SSH command:
cat docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
If you use environment variable substitution in your docker-compose.yml file, the above-mentioned command will not replace the variables with your local values on the remote host, and your commands might fail because the variables are unset. To overcome this, the envsubst utility can be used to replace the variables with your local values in memory before piping the content to the SSH command:
envsubst < docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
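One hedged refinement (the variable names APP_TAG and APP_PORT are just examples): plain envsubst rewrites every $-reference in the file, so you can pass a shell-format string to restrict substitution to the variables you actually want replaced.
# Only substitute APP_TAG and APP_PORT; leave any other $-syntax untouched.
export APP_TAG=1.4.2 APP_PORT=8080
envsubst '${APP_TAG} ${APP_PORT}' < docker-compose.yml | \
  ssh SomeUser@RemoteHost "docker-compose -f - up -d"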

- This is novel... what if the yml file contains images stored in a private registry that requires a login? – Richard Mar 23 '18 at 21:48
- I guess that in this case you would have to add a login step somewhere in between, either interactively with a user/password prompt using `docker login my.docker.registry` or by [providing your user and password on the command line](https://docs.docker.com/engine/reference/commandline/login/) (take care that this stays secure though). – Dirk Mar 28 '18 at 16:07