
TL;DR

How can we set up a docker-compose environment so that we can reach a container under multiple, custom-defined aliases? (Or any alternative that solves our problem in another fashion.)

Existing setup

We have two applications (nodejs servers)†, each behind an HTTP reverse proxy (Nginx), that need to talk to each other. On localhost, configuring this is easy:

  • Add /etc/hosts entries for ServerA and ServerB:
    • 127.0.0.1 server-a.testing
    • 127.0.0.1 server-b.testing
  • Run ServerA on a port, e.g. 2001, and ServerB on port 2002
  • Configure two virtual hosts, reverse proxying to ServerA and ServerB:

    server {   # Forward all traffic for server-a.testing to localhost:2001
        listen      80;
        server_name server-a.testing;
        location / {
            proxy_pass http://localhost:2001;
        }
    }
    server {   # Forward all traffic for server-b.testing to localhost:2002
        listen      80;
        server_name server-b.testing;
        location / {
            proxy_pass http://localhost:2002;
        }
    }
    

This setup is great for testing: Both applications can communicate with each other in a way that is very close to the production environment, e.g. request('https://server-b.testing', fn); and we can test how the HTTP server configuration interacts with our apps (e.g. TLS config, CORS headers, HTTP2 proxying).

Dockerize all the things!

We now want to move this setup to docker and docker-compose. The docker-compose.yaml that would work in theory is this:

nginx:
  build: nginx
  ports:
   - "80:80"
  links:
   - server-a
   - server-b
server-a:
  build: serverA
  ports:
   - "2001:2001"
  links:
   - nginx:server-b.testing
server-b:
  build: serverB
  ports:
   - "2002:2002"
  links:
   - nginx:server-a.testing

So when ServerA addresses http://server-b.testing it actually reaches the Nginx, which reverse proxies the request to ServerB. Unfortunately, circular dependencies are not possible with links. There are three typical solutions to this problem:

  1. use ambassadors
  2. use nameservers
  3. use the brand new networking (--x-networking).

None of these works for us because, for the virtual hosting to work, we need to be able to address the Nginx container under the names server-a.testing and server-b.testing. What can we do?
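For reference, the Compose file format version 2 (which supersedes the experimental --x-networking flag mentioned above) supports exactly this via network aliases: a single container can be reachable under several names on a user-defined network. A sketch of how that could look (service and network names are assumptions; ports/builds taken from the compose file above):

    version: "2"
    services:
      nginx:
        build: nginx
        ports:
          - "80:80"
        networks:
          testing:
            aliases:
              - server-a.testing
              - server-b.testing
      server-a:
        build: serverA
        networks:
          - testing
      server-b:
        build: serverB
        networks:
          - testing
    networks:
      testing:

With this, both server-a and server-b resolve server-a.testing and server-b.testing to the nginx container, and no circular links are needed.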

(†) Actually it's a little bit more complicated – four components and links – but that shouldn't make any difference to the solution:

  • testClient (-> Nginx) -> ServerA,
  • testClient (-> Nginx) -> ServerB,
  • ServerA (-> Nginx) -> ServerB,
  • testClient (-> Nginx) -> ServerC,
  • How about a combination of 3 (`--x-networking`) and https://github.com/jwilder/nginx-proxy instead of nginx? You would set the `VIRTUAL_HOST` environment variable on each of `server-a` and `server-b`. If you needed more granular control of the nginx configuration, you could separate out docker-gen and the nginx container. – Andy Shinn Jan 21 '16 at 05:34
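The nginx-proxy approach from the comment above could be sketched like this (image name, socket mount, and VIRTUAL_HOST usage follow nginx-proxy's documented setup; service names and builds are taken from the question):

    proxy:
      image: jwilder/nginx-proxy
      ports:
        - "80:80"
      volumes:
        - /var/run/docker.sock:/tmp/docker.sock:ro
    server-a:
      build: serverA
      environment:
        - VIRTUAL_HOST=server-a.testing
    server-b:
      build: serverB
      environment:
        - VIRTUAL_HOST=server-b.testing

nginx-proxy watches the Docker socket and generates the virtual-host configuration automatically from each container's VIRTUAL_HOST variable, so no hand-written server blocks are needed.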

1 Answer


Try this:

  1. Link your server-a and server-b containers to nginx with --link server-a:server-a --link server-b:server-b
  2. Update nginx conf file with

    location /sa {
        proxy_pass http://server-a:2001;
    }

    location /sb {
        proxy_pass http://server-b:2002;
    }

When you link two containers, docker adds a "container_ip container_name" entry to the /etc/hosts file of the linking container. So, in this case, server-a and server-b are resolved to their respective container IPs via the /etc/hosts file.
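For illustration, after linking, the /etc/hosts file inside the nginx container would contain entries of roughly this shape (the IP addresses here are made-up examples; Docker assigns the real ones):

    172.17.0.2    server-a
    172.17.0.3    server-b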

You can then access them at http://localhost/sa or http://localhost/sb.
