I am trying to use a Docker container to set up an SSH tunnel to a remote database that is only reachable via SSH. I have a Docker network with several containers and want to make the database available to all the containers on that network.
The Dockerfile for the SSH container looks like this:
FROM debian:stable
RUN apt-get update && apt-get -y --force-yes install openssh-client autossh postgresql-client
COPY .ssh /root/.ssh
RUN chown root:root /root/.ssh/config
EXPOSE 12345
ENTRYPOINT ["/usr/bin/autossh", "-M", "0", "-v", "-T", "-N", "-4", "-L", "12345:localhost:1234", "user@remotedb" ]
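As I understand it, the -L argument means the following (remotedb is just the alias from my SSH config below):

# -L 12345:localhost:1234
#   12345     - the port the forwarding listens on inside this container
#   localhost - resolved on the remote side, i.e. on the host I reach as user@remotedb
#   1234      - the port the database listens on there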
Inside the .ssh directory are my keys and the config file, which looks like this:
Host remotedb
    StrictHostKeyChecking no
    ServerAliveInterval 30
    ServerAliveCountMax 3
The tunnel itself works in this container, meaning I can reach the database from inside it at localhost:12345.
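(Verified with something like the following, run from the Docker host; the container name, user, and database are placeholders. psql is available because the image installs postgresql-client.)

docker exec -it <tunnel_container> psql -h localhost -p 12345 -U dbuser -d dbname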
Now I also want to access it from the other containers in the same network.
My docker-compose.yml looks like this (I commented out some trials):
version: '2'

networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.12.0.0/16
          gateway: 10.12.0.1

services:
  service_1:
    image: my/image:alias
    volumes:
      - somevolume
    # links:
    #   - my_ssh
    ports:
      - "8080"
    environment:
      ENV1: blabla
    networks:
      my_network:
        ipv4_address: 10.12.0.12

  my_ssh:
    build:
      context: ./dir_with_Dockerfile
    # ports:
    #   - "23456:12345"
    expose:
      - "12345"
    networks:
      my_network:
        ipv4_address: 10.12.0.13
I've tried to access the remote database from inside service_1 using the hostnames 'my_ssh', the fixed ipv4_address, and 'localhost', combined with ports 12345 and 23456.
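Roughly, the attempts looked like this (assuming a postgres client is available in service_1's image; user and database names are placeholders):

psql -h my_ssh -p 12345 -U dbuser -d dbname       # service name
psql -h 10.12.0.13 -p 12345 -U dbuser -d dbname   # fixed container IP
psql -h localhost -p 23456 -U dbuser -d dbname    # published port, with the ports: mapping enabled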
None of these combinations worked. Where am I going wrong? Or how else could I achieve a permanent connection from my containers to the remote database?