204

We have several Rails apps under a common domain in Docker, and we use nginx to direct requests to the specific apps.

our_dev_server.com/foo # proxies to foo app
our_dev_server.com/bar # proxies to bar

Config looks like this:

upstream foo {
  server foo:3000;
}

upstream bar {
  server bar:3000;
}

# and about 10 more...

server {
  listen *:80 default_server;

  server_name our_dev_server.com;

  location /foo {
      # this is specific to asset management in rails dev
      rewrite ^/foo/assets(/.*)$ /assets/$1 break;
      rewrite ^/foo(/.*)$ /foo/$1 break;
      proxy_pass http://foo;
  }

  location /bar {
      rewrite ^/bar/assets(/.*)$ /assets/$1 break;
      rewrite ^/bar(/.*)$ /bar/$1 break;
      proxy_pass http://bar;
  }

  # and about 10 more...
}

If one of these apps is not started then nginx fails and stops:

host not found in upstream "bar:3000" in /etc/nginx/conf.d/nginx.conf:6

We don't need all of them to be up at once, but nginx fails to start otherwise. How can we make nginx ignore the failed upstreams?

Morozov
  • Are you linking the app containers with the Nginx containers, or running them separate from each other? If the host within the `upstream` block doesn't resolve at runtime, then Nginx will exit with the above error... – Justin Sep 29 '15 at 14:03
  • If you can use an IP then it'll start up fine. Would using `resolver` (http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) work in your case? – Justin Sep 29 '15 at 14:05
  • @Justin we have each app in separate container, nginx too. Link them with docker – Morozov Sep 29 '15 at 14:42
  • @Justin Startup order is fine, nginx starts after other apps. We just want to run only some of them :) – Morozov Sep 29 '15 at 14:46
  • I've got a similar setup *(Nginx container with app container(s))*. We created an Nginx image that includes a `proxy.sh` script that reads environment variables and dynamically adds `upstream` entries for each, then starts Nginx. This works great: when we run our proxy container we can pass in the needed upstreams at runtime. You could do something similar to enable/disable certain upstreams at launch *(or, like my setup, just add the ones needed at runtime)* – Justin Sep 29 '15 at 14:52
  • I just hate that nginx crashes. It's just stupid design: why would anybody crash one server just because another one can't be found? – Robokishan May 29 '21 at 21:00

10 Answers

144
  1. If you can use a static IP then just use that; nginx will start up and just return 503s if the app doesn't respond.

  2. Use the `resolver` directive to point to something that can resolve the host, regardless of whether it's currently up or not.

  3. If you can't do the above, resolve the host at the location level (this allows Nginx to start and run):

     location /foo {
       resolver 127.0.0.1 valid=30s;
       # or some other DNS (your company's internal DNS server)
       #resolver 8.8.8.8 valid=30s;
       set $upstream_foo foo;
       proxy_pass http://$upstream_foo:80;
     }
    
     location /bar {
       resolver 127.0.0.1 valid=30s;
       # or some other DNS (your company's internal DNS server)
       #resolver 8.8.8.8 valid=30s;
       set $upstream_bar bar;
       proxy_pass http://$upstream_bar:80;
     }
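For completeness, option 1 might look like this minimal sketch (the address is a placeholder, not from the original answer); nginx starts fine with a literal IP even if nothing is listening there yet:

```nginx
location /foo {
    # No hostname to resolve at startup, so nginx can boot without the app
    proxy_pass http://172.17.0.10:3000;
}
```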
    
Justin
  • your option 3 works great for me. If I don't specify a resolver, do you know how long nginx will cache the IP it resolves? – Riley Lark Apr 20 '16 at 20:51
  • Thanks! Just using a variable seems to keep nginx from being smart about it – Blanka May 10 '16 at 21:20
  • I found that a regex capture group allowed me to skip the variable: `location ~ ^/foo/(.*)$ { proxy_pass http://foo/$1; }` – Danny Kirchmeier Aug 30 '16 at 15:23
  • How does that work for a TCP proxy? There seems to be no way to try option 3 for a TCP proxy. – krish7919 Mar 23 '17 at 07:49
  • When I try this I get: nginx: [emerg] invalid number of arguments in "set" directive in /etc/nginx/conf.d/default.conf:40 – Charlie Oct 16 '18 at 20:56
  • @Charlie those kinds of errors in nginx are almost always related to a **missing ";"** at the end of a line :) – SteveB Jan 23 '19 at 22:25
  • I have the same issue here, but in my case one of the apps may not even be available in some environments. This is inside a Kubernetes cluster. How can I prevent nginx from failing if the app isn't running at all? I can't hard-code the resolver in the nginx config file because the same app may be installed in different clusters – xbmono Oct 14 '19 at 05:44
  • Doing this does not seem to work for me. NGINX starts up just fine without the 'bar' container running, but when it's up, trying to navigate to http://localhost/bar redirects me to http://foo:80, and outside of docker that host is not resolvable. – Rudy Dec 17 '19 at 19:55
  • It seems this isn't truly working: nginx will start fine, but after 30 seconds the docker will still crash. – paul23 Jan 08 '20 at 17:54
  • This does depend on the upstream server. For instance, Jira accepts it, but Confluence does not, despite both being Atlassian products. It's undocumented what `proxy_pass` actually does, so it's unclear how Confluence can determine that a variable was used. But the result is that Confluence responds with a 302 redirect that returns the same URL, so the browser detects the 302 loop. – MSalters Aug 31 '20 at 12:29
  • The `$upstream` workaround is great! Thanks. ... for `http` `site` servers. What is the equivalent for `tcp` `stream` servers? This syntax seems to not be allowed. – Jesse Chisholm Sep 18 '20 at 02:21
  • The variable trick does not work if the upstream is a load balancer (more than one server). nginx still signals host down – Stefan Anghel Nov 26 '21 at 19:48
  • this does not work with `stream{}` block, though. `nginx: [emerg] "set" directive is not allowed here in` :'( – Sang Apr 07 '22 at 07:04
  • If you try to connect to a docker container choose 127.0.0.11 for location. – niels Jan 28 '23 at 18:42
52

For me, option 3 of the answer from @Justin/@duskwuff solved the problem, but I had to change the resolver IP to 127.0.0.11 (Docker's DNS server):

location /foo {
  resolver 127.0.0.11 valid=30s;
  set $upstream_foo foo;
  proxy_pass http://$upstream_foo:80;
}

location /bar {
  resolver 127.0.0.11 valid=30s;
  set $upstream_bar bar;
  proxy_pass http://$upstream_bar:80;
}

But as @Justin/@duskwuff mentioned, you could use any other external DNS server.
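For context, a minimal docker-compose sketch (service and image names assumed, not from the original answer) of the kind of setup where this applies; nginx and the apps share a network, so names like `foo` resolve through Docker's embedded DNS at 127.0.0.11:

```yaml
version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - apps
  foo:
    image: foo-app        # hypothetical app image
    networks:
      - apps
networks:
  apps:
```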

neumann
  • Did you mean `set $upstream_bar bar;` under `location /bar`? I know it's an old answer, but it's relevant for anyone who's looking for a Docker-specific solution, and I find the example confusing; the only explanation I could think of was bar instead of foo. – kevinnls Jul 23 '21 at 20:48
  • @kevinnls Yes he did. I fixed the above code. – DJDaveMark Mar 18 '22 at 09:04
  • For any interested reader: setting `resolver` explicitly in the nginx config is required because otherwise DNS names can only be resolved during startup. See here for more detail: https://stackoverflow.com/a/40331256/1114532. Apparently nginx doesn't read /etc/resolv.conf and just fails all lookups without the `resolver` directive. – Seoester Mar 21 '22 at 21:32
24

The main advantage of using upstream is to define a group of servers that can listen on different ports, with load balancing and failover configured between them.

In your case you are only defining one primary server per upstream, so it must be up.

Instead, use variables for your proxy_pass directives, and remember to handle the possible errors (404s, 503s) that you might get when a target server is down.

Example of using a variable:

server {
  listen 80;
  set $target "http://target-host:3005";  # Here's the secret
  location / { proxy_pass $target; }
}
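As a hedged sketch of "handling the possible errors" (the resolver address and response text are assumptions, not part of the original answer): intercept upstream failures and return a custom response instead of nginx's default error page:

```nginx
server {
  listen 80;
  resolver 127.0.0.11 valid=30s;          # so the host in $target is resolved at request time
  set $target "http://target-host:3005";

  location / {
    proxy_pass $target;
    proxy_intercept_errors on;            # let error_page handle upstream error responses
    error_page 502 503 504 = @down;
  }

  location @down {
    add_header Content-Type text/plain always;
    return 503 "Service temporarily unavailable";
  }
}
```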
danielgpm
  • > Instead, use variables for your proxy_pass(es) and remember to handle the possible errors (404s, 503s) that you might get when a target server is down. Can you elaborate on how to do that? If I do `set $variable http://foo` and `proxy_pass $variable` and keep the foo "upstream" (to keep the advantages you mentioned) then I'm still hitting the issue mentioned by OP. – Tibor Vass Jun 01 '18 at 19:29
  • As you can see in other examples, it will be `set $variable foo` and `proxy_pass http://$variable` – danielgpm Jun 01 '18 at 23:01
  • @danielgpm As you stated, using the variable for proxy_pass works perfectly and solved my issue. It would help others if you can update your answer and mention that as an example – Nitb Oct 30 '18 at 13:20
  • What if I have more than one, and I want to ignore the ones that can't be resolved? – talabes Jan 19 '20 at 02:06
  • Have you found any solution for that? – Robokishan May 29 '21 at 21:01
  • I find the main advantage to be that I can list the addresses and ports in order without having to check the configs. – Zach Smith Jun 26 '23 at 13:45
7

We had a similar problem. We solved it by dynamically including conf files for the upstream containers, generated by a side-car container that reacts to events on docker.sock; the files are pulled in with a wildcard include in the upstream configuration:

 include /etc/upstream/container_*.conf;

In case the list is empty, we added a server entry that is permanently down, so the effective list of servers is never empty. This server entry never gets any requests:

 server 127.0.0.1:10082 down; 

And a final entry that points to an (internal) server in nginx that hosts error pages (e.g. 503):

 server 127.0.0.1:10082 backup;

So the final upstream configuration looks like this:

upstream my-service {
  include /etc/upstream/container_*.conf;
  server 127.0.0.1:10082 down; 
  server 127.0.0.1:10082 backup;

}

In the nginx configuration we added a server listening on the error port:

server {
    listen 10082;

    location / {
        return 503;
        add_header Content-Type text/plain;
    }

    error_page 503 @maintenance;
    location @maintenance {
       internal;
       rewrite ^(.*)$ /503.html break;
       root error_pages/;
    }
}

As said, the configuration file for each upstream container is generated by a script (bash, curl, jq) that talks to docker.sock via curl and Docker's REST API to get the required information (IP, port), and uses this template to generate each file:

server ${ip}:${port} fail_timeout=5s max_fails=3;
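As a hedged illustration (the paths, port, and API fields here are assumptions, not the author's actual script), the side-car logic could look roughly like this:

```shell
OUT_DIR=/etc/upstream

# Render one upstream "server" line; $1 = container IP, $2 = app port
render() {
  printf 'server %s:%s fail_timeout=5s max_fails=3;\n' "$1" "$2"
}

# Query the Docker REST API over the unix socket for running containers,
# write one conf file per container, then reload nginx.
regenerate() {
  rm -f "$OUT_DIR"/container_*.conf
  curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json |
    jq -r '.[] | "\(.Id) \(.NetworkSettings.Networks[].IPAddress)"' |
    while read -r id ip; do
      render "$ip" 3000 > "$OUT_DIR/container_${id}.conf"
    done
  nginx -s reload
}
```

In a real deployment, `regenerate` would be triggered from the docker.sock event stream whenever a container starts or stops.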
Gerald Mücke
  • While it's not the most desirable solution, this is the cleanest and simplest one of all; the resolvers in the other answers just don't make sense for this kind of silly issue. I wish nginx had a permanent solution for this. – Dave Doga Oz Aug 04 '23 at 22:58
2

Another quick and easy fix for this kind of scenario: containers can start and stop without my main server bombing out.

    extra_hosts:
      - "dockerhost:172.20.0.1" # static IPv4 gateway of the Docker network; run ifconfig inside a container on that network to find yours
    networks:
      - my_network
    networks:
      - my_network
server {
  listen 80;
  server_name servername;

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;

    proxy_pass https://dockerhost:12345;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}
Poyser1911
1

I had the same "host not found" issue because part of my hostname was being mapped using $uri instead of $request_uri:

proxy_pass http://one-api-service.$kubernetes:8091/auth;

When the request changed to the auth subrequest, $uri lost its initial value. Changing the mapping to use $request_uri instead of $uri solved my issue:

map $request_uri $kubernetes {
    # ...
}
Washington Guedes
1

Based on Justin's answer, the fastest way to do the trick is to replace the final host with an IP address. You need to assign a static IP address to each container with the --ip 172.18.0.XXX parameter. NGINX won't crash at startup and will simply respond with a 502 error if the host is not available.

Run container with static IP:

docker run --ip 172.18.0.XXX something

Nginx config:

location /foo {
    proxy_pass http://172.18.0.XXX:80;
}

Refer to this post on how to set up a subnet with Docker.
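As a hedged sketch (subnet, network, and image names are assumed, not from the original answer), the same static-IP setup can be declared in docker-compose:

```yaml
networks:
  apps:
    ipam:
      config:
        - subnet: 172.18.0.0/16

services:
  foo:
    image: foo-app                # hypothetical image
    networks:
      apps:
        ipv4_address: 172.18.0.10
```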

Ilya Shevyryaev
1

This extends https://stackoverflow.com/a/32846603/11780117. (I can't add a comment, so I'll add it here.)

If your original reverse proxy is written like this:

location ^~ /api {                                                
    proxy_set_header Host $http_host;                             
    proxy_set_header X-Real-IP $remote_addr;                      
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;    
    proxy_pass http://other-host:8000/api;                     
}

When a user accesses the https://you-domain/api/test?query=name URL, the PATH received by the backend server is /api/test?query=name, and it works.

location ^~ /api {                                                
    proxy_set_header Host $http_host;                             
    proxy_set_header X-Real-IP $remote_addr;                      
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    
    resolver 127.0.0.11 valid=30s;
    set $backend other-host;
    proxy_pass http://$backend:8000/api;
}

Note that here, when you request the https://you-domain/api/test?query=name URL, the PATH actually received by the backend server is just /api, so it loses the rest of the path and the query string.

So when you use variables, the correct configuration should be:

location ^~ /api {                                                
    proxy_set_header Host $http_host;                             
    proxy_set_header X-Real-IP $remote_addr;                      
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    
    resolver 127.0.0.11 valid=30s;
    set $backend other-host;
    proxy_pass http://$backend:8000;
}

If you want requests through the proxy to hit the root directory of the backend, then you need this:

rewrite /api/(.*) /$1 break;
proxy_pass http://$backend:8000;

Then you request https://you-domain/api/test?query=name, the PATH actually received by the backend server is /test?query=name.

  • http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass says: "variables are used in `proxy_pass`....URI... is passed to the server as is, replacing the original request URI". Hence, I append the predefined variables `$uri$is_args$args` to the end of the URI to form `proxy_pass $upstream-host-and-port$uri$is_args$args` – Marcel Stör May 26 '23 at 05:53
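Following the comment above, a hedged sketch (host and port assumed) that appends the original path and query string explicitly:

```nginx
location ^~ /api {
    resolver 127.0.0.11 valid=30s;
    set $backend other-host;
    # $uri keeps the normalized path (including /api); $is_args$args restores the query string
    proxy_pass http://$backend:8000$uri$is_args$args;
}
```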
0

For anyone using Nginx Proxy Manager: here is a workaround for NPM failing to start when it can't resolve one of the upstream servers (e.g. another docker container on your unRaid server that is not currently started).

This issue only occurs with Proxy Hosts in NPM that have custom locations defined. The workaround is to remove the custom location declaration in the GUI and instead declare it manually in the 'Advanced' tab of the host like so:

location / {
    set $custom_upstream example.com;
    proxy_pass http://$custom_upstream:80;
}

Just replace example.com with the host that is not always available.

Note: I found that I didn't have to set the resolver. I assume that Nginx just uses whatever its default values are.

Relevant thread in the NPM issue tracker here.

kabadisha
-8

Instead of using the --link option, you can use port mapping and bind nginx to the host address.

Example: run your first docker container with the -p 180:80 option, and the second container with -p 280:80.

Run nginx and set these addresses for proxy:

proxy_pass http://192.168.1.20:180/; # first container
proxy_pass http://192.168.1.20:280/; # second container
kvaps