
Before posting my issue, I would like to know if it is even possible to achieve what I want.

I have, let's say, myserver.com running a docker container with nginx & letsencrypt. On the same server, two more docker containers run websites.

Right now all the redirection works fine: www.myserver.com goes to container 1 and site2.myserver.com goes to container 2.

I would like to have all communication running over HTTPS, but here the trouble starts. So, my question is: is it possible for the container with nginx and letsencrypt to connect to another container using the certificates from letsencrypt? To me it seems like some kind of man-in-the-middle "attack". A bit more schematically:

Browse to http://site2.myserver.com -> nginx redirects to https://site2.myserver.com -> connect to container 2 (192.168.0.10) on port 80.

Or another option: browse to http://site2.myserver.com -> nginx redirects to https://site2.myserver.com -> connect to container 2 (192.168.0.10) on port 443, with that container holding the site2.myserver.com certificates.

If it can't be done, what is the solution then? Copying the certificates to the docker containers and making them serve HTTPS themselves, so that an HTTP request gets redirected to the HTTPS port of that container?

Browse to http://site2.myserver.com -> nginx forwards the request -> connect to container 2 (192.168.0.10) on port 443, with that container holding the site2.myserver.com certificates.

Thanks, Greggy

  • I would use [nginx-proxy](https://github.com/jwilder/nginx-proxy) with [SSL](https://github.com/jwilder/nginx-proxy#ssl-support). – rdupz Dec 21 '16 at 17:12

4 Answers


As I understand it, your nginx reverse proxy is on the same network as the containers, so there is little need to secure the connection between them with TLS: it is a private network, and an attacker with access to that network would have access to the server too, and with it all the unencrypted data.

If you absolutely want valid certificates to secure the connections on your local network, you could create additional sub-domains that resolve to the local IPs. You would then need the manual DNS challenge to obtain the certificate (a certbot option where you manually publish a key as a TXT record for your domain).
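For reference, requesting a certificate with the manual DNS challenge looks roughly like this (the sub-domain is a placeholder):

```shell
# Manual DNS-01 challenge: certbot prints a value that you must publish
# as a TXT record named _acme-challenge.internal.example.com, then waits
# for your confirmation before validation continues.
certbot certonly --manual --preferred-challenges dns \
    -d internal.example.com
```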

Example Nginx configuration to redirect http to https

server {
    listen 80;

    server_name example.com;
    return 301 https://example.com/;
}
server {
    listen 443 ssl http2;

    server_name  example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/fullchain.pem;

    location / {
        proxy_pass http://container:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    include tls.conf;
}
Nathaniel Ford
Paul Trehiou
  • All the containers are running on one host. There is no need to encrypt the data between nginx and the containers, since the containers run on their own virtual 192.168.0.x network; the IP addresses are assigned by docker. I just want the http connection from the browser to be redirected to https. I find a lot of docs on how to do this on the same host, but not to another IP address (the virtual addresses). – Greggy Dec 21 '16 at 15:52
  • You can use setup Nginx to return a 301 status code redirecting to the HTTPS page – Paul Trehiou Dec 21 '16 at 15:57
  • That doesn't work for me :( It redirects to the nginx container on its own 443 port. This is what I have: site2.myserver.com { server 192.168.0.13:8069; } server { server_name site2.myserver.com; listen 80 ; return 301 https://site2.myserver.com$request_uri; location / { proxy_pass http://site2.myserver.com; }} Having the "location /" or not, doesn't make any difference. When I remove the "return 301" the http connection is established to the container. So, as soon as the connection is encrypted it points its own and not to the container any more. – Greggy Dec 21 '16 at 16:24
  • ...the first line should be: upstream site2.myserver.com { server 192.168.0.13:8069; } – Greggy Dec 21 '16 at 16:26
  • 1
    Yes you would need another server section with the HTTPS configuration – Paul Trehiou Dec 22 '16 at 10:10

I would go with the out-of-the-box solution:

jwilder's nginx-proxy + the Let's Encrypt companion.

First we start the NGINX container as a reverse proxy:

docker run -d -p 80:80 -p 443:443 \
    --name nginx-proxy \
    -v /path/to/certs:/etc/nginx/certs:ro \
    -v /etc/nginx/vhost.d \
    -v /usr/share/nginx/html \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

Next we start the Let's Encrypt companion container:

docker run -d \
    -v /path/to/certs:/etc/nginx/certs:rw \
    --volumes-from nginx-proxy \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    jrcs/letsencrypt-nginx-proxy-companion

For your websites, you need to set some environment variables:

docker run -d \
    --name website1 \
    -e "VIRTUAL_HOST=website1.com" \
    -e "LETSENCRYPT_HOST=website1.com" \
    -e "LETSENCRYPT_EMAIL=webmaster@website1" \
    tutum/apache-php

The nginx container will create a new entry in its config, and the Let's Encrypt container will request a certificate (and handle renewals).
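If you want to check what was generated, you can inspect the proxy container (the name matches the `--name` flag above; the companion was started without a name, so find it with `docker ps`):

```shell
# Show the vhost entries nginx-proxy generated from the running containers
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf

# Follow the companion's logs to confirm the certificate request succeeded
# (<companion> is the container ID or name from `docker ps`)
docker logs -f <companion>
```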

More: Nginx+LetsEncrypt

opHASnoNAME
  • I think this solution is great when you want something up quickly but that's not how you learn – Paul Trehiou Dec 22 '16 at 10:13
  • Sure, if you want to learn to set this stuff up from the ground up then it is not the best solution :-) But it will take a lot of time to create such a setup (registering every new container, adjusting the nginx template, requesting the SSL cert...) – opHASnoNAME Dec 22 '16 at 10:31
  • Thanks for your answer, but I end up again with the same problem. Going to port 80 of the webserver, I can see the Tutum default page, so the redirection to the tutum container works fine. When going to the https page, I get the error 'unable to connect'. – Greggy Dec 22 '16 at 12:45

Here is my way to do that:

NGINX Config file (default.conf)

Using the docker image from https://github.com/KyleAMathews/docker-nginx, I wrote a custom default config file as follows:

server {
    root /var/www;
    index index.html index.htm;

    server_name localhost MYHOST.COM;

    # Add 1 week expires header for static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires 1w;
    }

    location / {
        # Note: "return" is processed in the rewrite phase, so it always
        # wins here and the try_files line below never takes effect.
        try_files $uri $uri/ @root;

        return 301 https://$host$request_uri;
    }

    # If nginx can't find a file, fallback to the homepage.
    location @root {
        rewrite .* / redirect;
    }

    include /etc/nginx/basic.conf;
}
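The Dockerfile below enables a `default-ssl` site that is not shown here; a minimal sketch of what it could contain, assuming the certificate paths used in the Dockerfile's ADD lines:

```nginx
# Hypothetical default-ssl site serving the same content over HTTPS;
# the cert/key paths match the ADD lines in the Dockerfile below.
server {
    listen 443 ssl;

    root /var/www;
    index index.html index.htm;
    server_name MYHOST.COM;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
}
```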

Dockerfile

Here is my Dockerfile, considering that my static content is under the html/ directory.

# Base image from the repository linked above (image name assumed from its README)
FROM kyma/docker-nginx

COPY conf/default.conf /etc/nginx/sites-enabled/default

ADD certs/myhost.com.crt /etc/nginx/ssl/server.crt
ADD certs/myhost.com.key /etc/nginx/ssl/server.key
RUN ln -s /etc/nginx/sites-available/default-ssl /etc/nginx/sites-enabled/default-ssl

COPY html/ /var/www

CMD 'nginx'

Testing

For a local test, edit /etc/hosts to map www.myhost.com to 127.0.0.1 and run the following command:

curl -I http://www.myhost.com/

Result

HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Sun, 04 Mar 2018 04:32:04 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://www.myhost.com/
X-UA-Compatible: IE=Edge

Good, I could finally get what I wanted by merging the answers of opHASnoNAME and Paul Trehiou. As an extra on top of opHASnoNAME's answer, I mounted a shared filesystem between the nginx and the letsencrypt containers. That makes it possible to point nginx's config files at the right certificates (see later).

This is what I did:

docker run --name nginx-prod --restart always -d \
    -p 80:80 -p 443:443 \
    -v /choose/your/dir/letsencrypt:/etc/nginx/certs:ro \
    -v /etc/nginx/vhost.d \
    -v /usr/share/nginx/html \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    -e DEFAULT_HOST=myserver.com \
    jwilder/nginx-proxy

docker run --name letsencrypt --restart always -d \
    -v /choose/your/dir/letsencrypt:/etc/nginx/certs:rw \
    --volumes-from nginx-prod \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    jrcs/letsencrypt-nginx-proxy-companion

Then run whatever webserver container you like; there is no need to set the LETSENCRYPT variables, my current containers can be reached without them.

The jwilder/nginx-proxy will list all the running containers in /etc/nginx/conf.d/default.conf. Don't add anything to this file, because it will be overwritten on the next restart. Instead, create a new .conf file for each webserver in the same directory. This file contains the HTTPS information as suggested by Paul Trehiou. For example, I created site2.conf:

server {
    listen 443 ssl http2;
    server_name  site2.myserver.com;
    ssl_certificate /etc/nginx/certs/live/myserver.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/live/myserver.com/privkey.pem;
    ssl_trusted_certificate /etc/nginx/certs/live/myserver.com/fullchain.pem;

    location / {
        proxy_pass http://192.168.0.10/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

The proxy_pass address is something you can take from the default.conf file; the IP addresses are listed there for each container. To be able to back up those .conf files, I will recreate my nginx container and mount a local filesystem on /etc/nginx/conf.d. That will also make life easier if the container doesn't start because of an error in one of the .conf files.
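Recreating the proxy with that extra mount could look like this (the host paths are examples):

```shell
# Same flags as before, plus a host directory mounted over /etc/nginx/conf.d
# so the per-site .conf files survive container re-creation and can be
# backed up and edited from the host.
docker run --name nginx-prod --restart always -d \
    -p 80:80 -p 443:443 \
    -v /choose/your/dir/conf.d:/etc/nginx/conf.d \
    -v /choose/your/dir/letsencrypt:/etc/nginx/certs:ro \
    -v /etc/nginx/vhost.d \
    -v /usr/share/nginx/html \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    -e DEFAULT_HOST=myserver.com \
    jwilder/nginx-proxy
```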

Thanks a lot everybody for your input, the puzzle is complete now ;-)

Greggy