
On a local server I started several instances of gunicorn, each running a different web app on a unique port as a service to the rest of the internal network. Now I want to put each gunicorn instance behind its own nginx reverse proxy for better load balancing.

I decided to try deploying each (nginx, gunicorn) pair in its own set of Docker containers, exposing only its unique port to the outside world. I know that part is easy enough with port forwarding, for example "5555:80". However, each app also accesses external services, such as databases, that run on the same host independently of Docker.

Through trial and error, I found that a Docker container can access an external service on the host (e.g. MySQL or MongoDB) only if I run it with "docker run --network=host ...". This has the effect of letting the container share the host's network stack, but it also exposes any ports gunicorn opens, which leaves a way for a client to circumvent the reverse proxy. While this is all running on a local server on a secure network, it doesn't seem like good security practice, as it leaves the back end open to denial-of-service attacks.
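
For illustration (the image name here is just a placeholder), the difference I mean is roughly:

# host networking: the container shares the host's interfaces, so every
# port gunicorn opens is reachable from the rest of the network
docker run --network=host my-app-image

# bridge networking with one published port: only 5555 is reachable from
# outside, but reaching services on the host is what I haven't solved yet
docker run -p 5555:80 my-app-image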

So I guess I want the best of both worlds - I want each (proxy, gunicorn) pair to talk with one another over a private network that only they use, while I expose one port (e.g. "5555") to the network, and the web app running under gunicorn can still access other services on the same host.

My nginx.conf would look something like this:

# nginx needs an events block, even an empty one
events {}

http {
    upstream app {
        # the gunicorn service, reached over the shared private network
        # (stack name prefix + service name; no port given, so nginx assumes 80)
        server network1_app;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
        }
    }
}

And my docker-compose.yml might look something like:

version: "3"
services:

  proxy:
    image: nginx:alpine
    networks:
      - host
      - network1
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

  app:
    build: ./app
    networks:
      - network1
    ports: "5555:80" # Do I have to associate this with the "host" network somehow?

networks:
  network1:

Then deploy this with

docker stack deploy -c docker-compose.yml network1
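
If it deploys cleanly, listing the stack's services should show both the proxy and the app running:

docker stack services network1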

Am I on the right path, or am I making this too complicated? Would it be more straightforward to skip Docker entirely and, say, have each proxy talk to its gunicorn instance over a named socket instead?

I like using docker-compose when I can, because it encapsulates some of the details and makes management easier (when it works). It also leaves less surface area for a security breach.

Lawrence I. Siden
    If you can make each `docker-compose.yml` file self-contained (and run a separate database per service) that's probably the best path. If you really have to access the host system, [From inside of a Docker container, how do I connect to the localhost of the machine?](https://stackoverflow.com/q/24319662/10008173) discusses this in some detail. I tend to think of host networking as more of a last-resort setup that disables most of the Docker network stack. – David Maze May 04 '20 at 22:55
  • Wow, that's exactly the type of question I was looking for, but it didn't pop up in my Google search results, or I must have missed it. Thank you! – Lawrence I. Siden May 05 '20 at 14:57

1 Answer

After reading this thread and a little trial and error, I figured out that the problem is that the host's iptables rules drop requests to almost everything except port 22 (ssh).

For example:

$ docker run -it --rm --add-host host.docker.internal:xxx.xxx.xxx.xxx busybox telnet host.docker.internal 27017

times out.

But

$ docker run -it --rm --add-host host.docker.internal:xxx.xxx.xxx.xxx busybox telnet host.docker.internal 22
Connected to host.docker.internal

I could probably overcome this with a simple rule in iptables:

-A INPUT -i docker0 -j ACCEPT
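
If I go that route, I'd expect something like this to apply it on the running host (the exact chain and rule ordering may need adjusting for my setup):

$ sudo iptables -I INPUT -i docker0 -j ACCEPT   # accept traffic arriving from containers via the docker0 bridge
$ sudo iptables -L INPUT -v --line-numbers      # verify the rule is now at the top of the chain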

Then in my docker-compose.yaml I would add a section:

extra_hosts:
  - "host.docker.internal:xxx.xxx.xxx.xxx"

I'm holding off for now because my project lead asked me to wait.

Lawrence I. Siden