
I have a docker-compose file that uses jwilder/nginx-proxy to provide SSL for the Artifactory image I have running.

Everything works fine for connections from outside the compose environment's containers: my browser can load the Artifactory web app just fine and it's SSL encrypted, and all of the APIs work fine from command-line tools.

The problem is that from inside one of the containers in the environment I can ping the other containers, but if I attempt to load a page from the Artifactory container, I get connection refused errors.

Here is my compose file:

version: '3'
services:

  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./assets/certs:/etc/nginx/certs
    depends_on:
      - artifactory

  artifactory:
    image: docker.bintray.io/jfrog/artifactory-pro:latest
    volumes:
      - artifactory5_data:/var/opt/jfrog/artifactory
    environment:
      - VIRTUAL_HOST=artifactory.test
      - VIRTUAL_PORT=8081
    depends_on:
      - node

  node:
    build:
      context: ./nodes
      dockerfile: Dockerfile

volumes:
  artifactory5_data:

The Dockerfile that builds node is just an instance of puppet/puppet-agent-ubuntu with an entrypoint script that loops Puppet runs to keep the container open.
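For reference, the entrypoint script is essentially just a loop along these lines (a rough sketch, not the exact script; the puppet binary path is the usual puppet-agent install location):

#!/bin/bash
# Keep the container alive by running the puppet agent on a loop.
while true; do
  /opt/puppetlabs/bin/puppet agent --test || true   # ignore non-zero exit codes from individual runs
  sleep 300
done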

The command I use to start the environment is:

docker-compose --project-name preso up -d --scale node=3                                                                                                      
Creating network "preso_default" with the default driver
Creating preso_node_1 ... done
Creating preso_node_2 ... done
Creating preso_node_3 ... done
Creating preso_artifactory_1 ... done
Creating nginx-proxy         ... done

docker ps --all --no-trunc
CONTAINER ID                                                       IMAGE                                            COMMAND                                       CREATED             STATUS                  PORTS                                      NAMES
d390506f4149b6f386376f94dad1c2d34cce11d869b2033e72646856c5f9a47b   jwilder/nginx-proxy                              "/app/docker-entrypoint.sh forego start -r"   45 seconds ago      Up 43 seconds           0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx-proxy
1695bc05d4bd1ea0c08ec82b636ce5847649f9aa8b48814d44d5986c8577f29d   docker.bintray.io/jfrog/artifactory-pro:latest   "/entrypoint-artifactory.sh"                  46 seconds ago      Up 43 seconds           8081/tcp                                   preso_artifactory_1
291965e80148f4670b32ef0bded891c79ef361161d3860fd33707f4805d004f0   preso_node                                       "/bin/bash /entrypoint.sh"                    47 seconds ago      Up 44 seconds                                                      preso_node_3
d81f4e2a22b5d8e56e8764029f5ae0b0666e353937a70c825cce1a2c5d2d1f3a   preso_node                                       "/bin/bash /entrypoint.sh"                    47 seconds ago      Up 44 seconds                                                      preso_node_2
b64038d2c3ca32939686eb2cc9324cc5e935df5445570a8746d80c527b3fe95d   preso_node                                       "/bin/bash /entrypoint.sh"                    47 seconds ago      Up 44 seconds                                                      preso_node_1

Artifactory loads fine from a command line on my local machine and in the browser, but from bash inside one of the node containers I get:

curl --insecure https://artifactory.test/artifactory
curl: (7) Failed to connect to artifactory.test port 443: Connection refused

A ping gets me:

Pinging artifactory.test [127.0.0.1] with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128

Update: I tried adding the nginx-proxy host name to the hosts file of the container:

echo 'nginx-proxy artifactory.test' >> /etc/hosts

This did not work. Pinging artifactory.test still resolves to localhost:

Pinging artifactory.test [127.0.0.1] with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time=0ms TTL=128

While pinging nginx-proxy returns:

Pinging nginx-proxy [172.21.0.6] with 32 bytes of data:
Reply from 172.21.0.6: bytes=32 time=0ms TTL=128

Note: I see now that trying to redirect one host name to another via hosts was never going to work.

If I add the nginx-proxy container's IP address as a hosts file entry for artifactory.test, then everything works exactly as it should.
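Roughly, that means doing something like this (172.21.0.6 is just whatever address Docker happened to assign to nginx-proxy this time around):

# On the Docker host: look up the IP the proxy was given on the compose network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' nginx-proxy
# 172.21.0.6

# Inside a node container: point artifactory.test at that address
echo '172.21.0.6 artifactory.test' >> /etc/hosts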

The problem with this approach is I don't know how reliably that address will be assigned to the nginx-proxy container in the environment. If I build that hosts entry into the node containers, will it just always work?

Bill Hurt
  • Why are you appending `.test` to the name? – Shashank V Jan 23 '20 at 06:20
  • It's just a common convention for test domain names. Makes it look a little less slapdash when I enter the URL into my browser address bar. – Bill Hurt Jan 23 '20 at 06:31
  • Within the containers created by docker-compose, you have to use the name `artifactory` because that's what you named your service in the compose file. Can you try that? – Shashank V Jan 23 '20 at 06:55
  • I've tried that yes. I can get it to work like `curl http://artifactory:8081`. The problem is that this is communicating directly with the container, not via the proxy. I'm trying to get the connection to work via the proxy so that the connection is SSL encrypted. Is that a critical security concern? Probably not, but it's bothering me that I can't get it to work. – Bill Hurt Jan 23 '20 at 07:01
  • I don't understand. If you want to connect via nginx-proxy, shouldn't you be using `nginx-proxy` as the hostname inside the container network instead of `artifactory.test`? – Shashank V Jan 23 '20 at 07:05
  • If I use the `nginx-proxy` hostname directly, then there is no Host header in the request to tell nginx where to forward it. The request arrives at the proxy with no indication of where it should be forwarded, so the connection dies (see the sketch after these comments). – Bill Hurt Jan 23 '20 at 07:08
  • Alright, I just read about the way `jwilder/nginx-proxy` works. I think you have added `artifactory.test` as `127.0.0.1` in your host machine's hosts file and it is not actually resolved by DNS? Try using a DNS-resolvable name for your host machine. It might be failing because the container must also be resolving it to `127.0.0.1`. – Shashank V Jan 23 '20 at 07:31
  • That said, if the purpose of using nginx-proxy is SSL termination, I don't think it is needed in your case for inter-container communication, as all containers are running on the same host machine. – Shashank V Jan 23 '20 at 07:33
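To illustrate the Host header point from the comments: you can send the request to the proxy while keeping artifactory.test as the URL host, for example with curl's --resolve option (a sketch, run from inside a node container; it assumes getent and a curl build with --resolve support are available in the image):

# Resolve nginx-proxy's address on the compose network, then pin artifactory.test
# to it for this one request. The URL host stays artifactory.test, so the request
# carries the Host header/SNI that nginx-proxy uses to pick a backend.
PROXY_IP=$(getent hosts nginx-proxy | awk '{print $1}')
curl --insecure --resolve "artifactory.test:443:${PROXY_IP}" https://artifactory.test/artifactory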

2 Answers


While I concede that I probably don't actually need this to work since it's an internal network, the purpose here is to demo to an audience what installing and using Artifactory in a business might look like, so using internal Docker host names in the demo scripts could confuse some viewers. Things need to look as normal and understandable as possible, and that means using the same host names in the commands run inside the containers as I use in my machine's browser.

That said, I did get this to work. The trick is to define a custom internal network on which I can control the IP address that gets assigned to nginx-proxy. Knowing that address in advance, I can also give every node container a custom hosts entry that routes requests properly. Below is the final compose file.

Note: If you don't explicitly add the default network back to the nginx-proxy service as shown, requests to artifactory.test will return a 502 Bad Gateway response instead of being forwarded as intended.

version: '3'
services:

  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./assets/certs:/etc/nginx/certs
    depends_on:
      - artifactory
    networks:
      demo:
        ipv4_address: 10.5.0.100
      default:

  artifactory:
    container_name: artifactory
    image: docker.bintray.io/jfrog/artifactory-pro:latest
    volumes:
      - artifactory5_data:/var/opt/jfrog/artifactory
    environment:
      - VIRTUAL_HOST=artifactory.test
      - VIRTUAL_PORT=8081
    depends_on:
      - node

  node:
    build:
      context: ./nodes
      dockerfile: Dockerfile
    extra_hosts:
      - "artifactory.test:10.5.0.100"
    networks:
      demo:

volumes:
  artifactory5_data:

networks:
  demo:
    ipam:
      config:
        - subnet: 10.5.0.0/16

Once that is in place, a curl request to the artifactory.test host name will route through the proxy with SSL termination as intended.

curl --insecure https://artifactory.test/artifactory/api/nuget/nuget-local
<?xml version="1.0" encoding="utf-8"?>
<!--
  ~
  ~ Copyright 2016 JFrog Ltd. All rights reserved.
  ~ JFROG PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
  -->

<service xmlns="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xml:base="https://artifactory.test/artifactory/api/nuget/nuget-local">
    <workspace>
        <atom:title>Default</atom:title>
        <collection href="Packages">
            <atom:title>Packages</atom:title>
        </collection>
    </workspace>
</service>
Bill Hurt

I use jwilder/nginx-proxy as the reverse proxy for all my services in a production environment with no issues. Check the steps below:

1 - First, check the cert and key file names; in your case they should be artifactory.test.crt and artifactory.test.key.

2 - Second, exec into the Artifactory container with "docker exec -it artifactory SHELL", look for listening ports, and make sure your app listens on 8081. Then check the container with docker inspect artifactoryContainerName and look for the exposed ports.

If the above is OK, then provide the jwilder container logs.
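Something like this covers both checks plus the logs (a rough sketch using the container name from the question's docker ps output; ss may not be present in the image, hence the netstat fallback):

docker exec -it preso_artifactory_1 bash -c 'ss -tlnp || netstat -tlnp'       # is anything listening on 8081?
docker inspect --format '{{json .Config.ExposedPorts}}' preso_artifactory_1   # is 8081/tcp exposed?
docker logs nginx-proxy                                                       # proxy logs, if the checks pass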

  • My answer above took care of it. The problem was that without specifying the IP address for the container ahead of time I had no way to make a hosts entry for my fake domain name. – Bill Hurt Jan 23 '20 at 15:12