I have a docker-compose file that uses jwilder/nginx-proxy to provide SSL for the Artifactory image I have running.
Everything works fine for connections coming from outside the compose environment's containers: my browser loads the Artifactory web app over SSL, and all of the APIs work fine from command-line tools.
The problem is that from inside one of the containers in the environment, I can ping the other containers, but if I attempt to load a page from the Artifactory container, I get errors saying the connection was refused.
Here is my compose file:
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./assets/certs:/etc/nginx/certs
    depends_on:
      - artifactory
  artifactory:
    image: docker.bintray.io/jfrog/artifactory-pro:latest
    volumes:
      - artifactory5_data:/var/opt/jfrog/artifactory
    environment:
      - VIRTUAL_HOST=artifactory.test
      - VIRTUAL_PORT=8081
    depends_on:
      - node
  node:
    build:
      context: ./nodes
      dockerfile: Dockerfile
volumes:
  artifactory5_data:
The Dockerfile that builds node is just an instance of puppet/puppet-agent-ubuntu with an entrypoint script that loops puppet runs to keep the container open.
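For context, the node image is roughly the following sketch (not the actual files; the script name and sleep interval are illustrative):

FROM puppet/puppet-agent-ubuntu
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]

with entrypoint.sh keeping the container alive by looping agent runs:

#!/bin/bash
# Run the puppet agent on a loop so the container never exits.
while true; do
  puppet agent --test
  sleep 60  # interval is illustrative
done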
The command I use to start the environment is:
docker-compose --project-name preso up -d --scale node=3
Creating network "preso_default" with the default driver
Creating preso_node_1 ... done
Creating preso_node_2 ... done
Creating preso_node_3 ... done
Creating preso_artifactory_1 ... done
Creating nginx-proxy ... done
docker ps --all --no-trunc
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d390506f4149b6f386376f94dad1c2d34cce11d869b2033e72646856c5f9a47b jwilder/nginx-proxy "/app/docker-entrypoint.sh forego start -r" 45 seconds ago Up 43 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp nginx-proxy
1695bc05d4bd1ea0c08ec82b636ce5847649f9aa8b48814d44d5986c8577f29d docker.bintray.io/jfrog/artifactory-pro:latest "/entrypoint-artifactory.sh" 46 seconds ago Up 43 seconds 8081/tcp preso_artifactory_1
291965e80148f4670b32ef0bded891c79ef361161d3860fd33707f4805d004f0 preso_node "/bin/bash /entrypoint.sh" 47 seconds ago Up 44 seconds preso_node_3
d81f4e2a22b5d8e56e8764029f5ae0b0666e353937a70c825cce1a2c5d2d1f3a preso_node "/bin/bash /entrypoint.sh" 47 seconds ago Up 44 seconds preso_node_2
b64038d2c3ca32939686eb2cc9324cc5e935df5445570a8746d80c527b3fe95d preso_node "/bin/bash /entrypoint.sh" 47 seconds ago Up 44 seconds preso_node_1
Artifactory loads fine from the command line and browser on my local machine, but from bash inside one of the node containers I get:
curl --insecure https://artifactory.test/artifactory
curl: (7) Failed to connect to artifactory.test port 443: Connection refused
A ping gets me:
Pinging artifactory.test [127.0.0.1] with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128
Update: I tried adding the nginx-proxy host name to the container's hosts file:
echo 'nginx-proxy artifactory.test' >> /etc/hosts
This did not work. Pinging artifactory.test still sends connections to localhost:
Pinging artifactory.test [127.0.0.1] with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128
While pinging nginx-proxy returns:
Pinging nginx-proxy [172.21.0.6] with 32 bytes of data:
Reply from 172.21.0.6: bytes=32 time=0ms TTL=128
Note: I see now that trying to redirect one host name to another via the hosts file was never going to work; /etc/hosts maps IP addresses to names, not names to other names.
If I add the IP address as a hosts file entry for artifactory.test, then everything works exactly as it should.
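Concretely, this works (using the proxy address observed in the ping above; the address itself is the part I'm unsure about):

echo '172.21.0.6 artifactory.test' >> /etc/hosts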
The problem with this approach is that I don't know how reliably that address will be assigned to the nginx-proxy container in the environment. If I build that hosts entry into the node containers, will it just always work?
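For clarity, by "building the entry in" I mean something like this in the compose file, via extra_hosts (a sketch only; the hard-coded 172.21.0.6 is exactly the value I can't guarantee):

  node:
    build:
      context: ./nodes
      dockerfile: Dockerfile
    extra_hosts:
      - "artifactory.test:172.21.0.6"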