23

I have a really weird issue with NGINX.

My upstream.conf file contains the following upstream:

upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;

    server mymachine:6006 ;
}

In locations.conf:

location ~ "^/files(?<command>.+)/[0123]" {
        rewrite ^ $command break;
        proxy_pass https://files_1 ;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

In /etc/hosts:

127.0.0.1               localhost               mymachine

When I run wget https://mymachine:6006/alive --no-check-certificate, I get HTTP request sent, awaiting response... 200 OK. I also verified with netstat that port 6006 is listening, and it's fine.

But when I send a request to the NGINX file server, I get the following error:

no live upstreams while connecting to upstream, client: .., request: "POST /files/save/2 HTTP/1.1, upstream: "https://files_1/save"

But the upstream is OK. What is the problem?

MIDE11
  • Is that the only error message? – Richard Smith Jan 31 '16 at 12:11
  • @RichardSmith: Yes. Before that, in `error.log` I get the following line: `a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000025...` – MIDE11 Jan 31 '16 at 12:13
  • Did you manage to fix this issue? I am having the same problem trying to use Nginx as a proxy server to a java server. – user3621841 Nov 21 '16 at 18:49
  • Actually got a similar problem today. Without even changing anything. And the setup has worked with no problem for a few months. Until today. – majidarif Dec 09 '17 at 16:41

3 Answers

14

When you define an upstream, Nginx treats the destination server as something that can be up or down. Nginx decides whether the server is down based on fail_timeout (default 10s) and max_fails (default 1).

So if you have a few slow requests that time out, Nginx can decide that the server in your upstream is down; and because you only have one, the whole upstream is effectively down and Nginx reports no live upstreams. This is explained in more detail here:

https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/

I had a similar problem, and you can prevent this by overriding those settings.

For example:

upstream files_1 {
    least_conn;
    check interval=5000 rise=3 fall=3 timeout=120 type=ssl_hello;

    # max_fails is a parameter of the server directive; setting it to 0
    # disables the failure accounting, so a few timeouts no longer mark
    # the only server as down
    server mymachine:6006 max_fails=0;
}
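
If disabling the failure accounting entirely is too blunt, a hedged alternative is to raise the passive-check thresholds instead (the values below are illustrative, not from the original configuration):

upstream files_1 {
    least_conn;

    # Illustrative values: tolerate up to 5 failed attempts within a
    # 30-second window before the server is considered unavailable.
    server mymachine:6006 max_fails=5 fail_timeout=30s;
}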
Angel Abad Cerdeira
  • This appears to be `health_check`, not `check`, and is an Nginx Plus (premium) feature according to the docs. – mahemoff Dec 27 '18 at 13:34
  • Bizarrely, if there's only one upstream it's supposed to never disable it, but sometimes it treats it as two if it has ipv6 and ipv4 (https://stackoverflow.com/a/58924751/32453), so yeah this is right... somehow the request before this "timed out" or marked the upstream as failed, see also https://stackoverflow.com/a/52550758/32453 – rogerdpack Nov 19 '19 at 00:06
7

I had the same error, no live upstreams while connecting to upstream.

Mine was SSL related: adding proxy_ssl_server_name on solved it.

location / {
    proxy_ssl_server_name on;
    proxy_pass https://my_upstream;
}
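
For context, nginx does not send the TLS Server Name Indication (SNI) to a proxied HTTPS backend by default, so a backend that requires SNI can fail the handshake and end up counted as a failed upstream. A minimal sketch of the relevant directives (the upstream name, backend host, and port are assumptions, not taken from the answer above):

upstream my_upstream {
    server mymachine:6006;    # assumed backend, speaking HTTPS on 6006
}

server {
    listen 80;

    location / {
        proxy_ssl_server_name on;    # send SNI in the TLS handshake to the backend
        proxy_ssl_name $host;        # name used for SNI; defaults to the host in proxy_pass
        proxy_pass https://my_upstream;
    }
}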
jobwat
0

If you're using a docker-compose setup, you have to use the service name in the upstream instead of an IP address, e.g.:

server {
    location / {
        proxy_pass http://com_api;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}

upstream com_api {
    server api:6060;    # "api" is the docker-compose service name
}
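
As a related sketch (assumptions: nginx runs as another service on the same compose network, and the backend service is still named api), Docker's embedded DNS at 127.0.0.11 can also be used together with a variable so nginx re-resolves the service name at request time rather than only at startup, which avoids stale addresses when the api container is recreated:

server {
    listen 80;

    resolver 127.0.0.11 valid=10s;    # Docker's embedded DNS server

    location / {
        # Using a variable in proxy_pass forces per-request resolution
        # via the resolver above.
        set $api_upstream http://api:6060;
        proxy_pass $api_upstream;
        proxy_set_header Host $host;
    }
}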