
I have a Node app running in one Docker container, a Mongo database in another, and a Redis database in a third. In development I want to work with these three containers (and not pollute my system with database installations), but in production I want the databases installed locally and the app in Docker.

The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Forwarding the same port from different containers to the host creates a collision.

I also know the containers are on the same bridged network, and using curl I found out that they're connected and I can reach them at their respective container IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.

Is there a way to forward these ports, perhaps in my app's Dockerfile using iptables? I want my app's container to be able to access MongoDB at "localhost:27017", for example, even though they're in separate containers.

I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.

I'm somewhat of a noob. Thank you for your help.

Ken Garber
  • Is it vital that it be `localhost` or could you have `db:port` and `redis:port` for accessing the databases? Then just set the values for `db` and `redis` based on the setup (e.g. [accessing localhost of machine](http://stackoverflow.com/a/24326540/2127492)) – jrbeverly Feb 19 '17 at 20:17
  • You could make environment variables to hold those addresses. – hya Feb 19 '17 at 20:17
  • @Ken Have you solved your problem? – Salem Feb 24 '17 at 20:45
  • @Salem I went the route hya suggested with storing the IP addresses in an .env file. Thank you! – Ken Garber Feb 24 '17 at 21:13
  • If this answered your question then please mark it as solved – Salem Feb 26 '17 at 17:15
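
For reference, a minimal sketch of the approach suggested in the comments, assuming hypothetical container and variable names (`mongo`, `redis`, `my-node-app`, `MONGO_HOST`, `REDIS_HOST` are not from the original post):

    # Development: run the databases as named containers and pass their
    # addresses to the app through environment variables instead of
    # hard-coding localhost.
    docker run -d --name mongo mongo
    docker run -d --name redis redis
    docker run -d --link mongo --link redis \
        -e MONGO_HOST=mongo -e REDIS_HOST=redis \
        my-node-app

    # Production: the databases live on the host, so the same variables
    # (e.g. loaded from an .env file) are simply set to localhost or the
    # host's address.

The app then reads `MONGO_HOST`/`REDIS_HOST` from the environment and falls back to `localhost` when they are unset.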

1 Answer


Docker only allows you to map container ports to host ports (not the reverse), but there are a few ways to achieve what you want:

  • You can use --net=host, which makes the container use your host's network stack instead of the default bridge. Note that this can raise security issues, because the container can then potentially reach any other service running on your host (a sketch follows this list).

  • You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This basically creates a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT (a second sketch follows this list).

  • As suggested in the comments, create a script that extracts the host's IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}') and expose it as an environment variable. Supposing that script is wrapped in a command named getip, you could run it like this (a fuller sketch of getip follows this list):

    $ docker run -e DOCKER_HOST=$(getip) ...
    

    and then, inside the container, use the environment variable DOCKER_HOST to connect to your services.
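
A minimal sketch of the first option, assuming a hypothetical image name `my-node-app`, a Linux host, and databases reachable on the host's localhost:

    # Host networking: the container shares the host's network stack,
    # so localhost:27017 and localhost:6379 inside the container are the
    # same as on the host. Port publishing (-p) is ignored in this mode.
    # Note: on Docker for Mac, the "host" is the embedded Linux VM, not
    # macOS itself.
    docker run --net=host my-node-app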
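
A sketch of the second option, using rinetd inside the app container. The address 172.17.0.1 is the usual docker0 gateway on a Linux host but is an assumption here, and the database containers must publish their ports to the host for this to work:

    # Each rinetd.conf line is: bindaddress bindport connectaddress connectport
    cat > /etc/rinetd.conf <<'EOF'
    127.0.0.1 27017 172.17.0.1 27017
    127.0.0.1 6379  172.17.0.1 6379
    EOF
    rinetd    # reads /etc/rinetd.conf by default

    # The app keeps using localhost:27017 and localhost:6379; rinetd
    # forwards those connections to the host, where the databases'
    # published ports are reachable.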
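
And a sketch of the third option: the hypothetical getip helper is just the one-liner above saved as an executable script on the (Linux) host:

    #!/bin/sh
    # getip: print the docker0 bridge address, i.e. the address at which
    # containers can reach services published on the host. The awk pattern
    # assumes the older net-tools ifconfig output ("inet addr:...").
    ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'

Inside the container, the app would then read DOCKER_HOST from the environment (e.g. `process.env.DOCKER_HOST` in Node) instead of hard-coding `localhost`.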

Salem