
I am using Nginx as a reverse proxy that takes requests then does a proxy_pass to get the actual web application from the upstream server running on port 8001.

If I go to mywebsite.example or do a wget, I get a 504 Gateway Timeout after 60 seconds... However, if I load mywebsite.example:8001, the application loads as expected!

So something is preventing Nginx from communicating with the upstream server.

All this started after my hosting company reset the machine my stuff was running on, prior to that no issues whatsoever.

Here's my vhosts server block:

server {
    listen   80;
    server_name mywebsite.example;

    root /home/user/public_html/mywebsite.example/public;

    access_log /home/user/public_html/mywebsite.example/log/access.log upstreamlog;
    error_log /home/user/public_html/mywebsite.example/log/error.log;

    location / {
        proxy_pass http://xxx.xxx.xxx.xxx:8001;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

And the output from my Nginx error log:

2014/06/27 13:10:58 [error] 31406#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: xxx.xx.xxx.xxx, server: mywebsite.example, request: "GET / HTTP/1.1", upstream: "http://xxx.xxx.xxx.xxx:8001/", host: "mywebsite.example"
– Dave Roma

11 Answers


You can probably add a few more lines to increase the timeout period to the upstream. The examples below set the timeout to 300 seconds:

proxy_connect_timeout       300;
proxy_send_timeout          300;
proxy_read_timeout          300;
send_timeout                300;
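
These directives go in the http, server, or location context. A minimal sketch, applied to the server block from the question (the address is the asker's placeholder):

```nginx
server {
    listen 80;
    server_name mywebsite.example;

    location / {
        # raise the 60-second defaults so slow upstream responses
        # no longer trip a 504 before the backend answers
        proxy_connect_timeout 300;
        proxy_send_timeout    300;
        proxy_read_timeout    300;
        send_timeout          300;

        proxy_pass http://xxx.xxx.xxx.xxx:8001;
    }
}
```

After editing, validate and reload with `nginx -t && nginx -s reload`.
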
– user2540984
  • I think that increasing the timeout is seldom the answer unless you know your network/service will always, or in some cases, respond very slowly. Few web requests nowadays should take more than a few seconds unless you are downloading content (files/images). – Almund Apr 13 '16 at 05:15
  • @Almund I thought the same thing (almost didn't bother trying this), but for whatever reason this just worked for me. (Previously it timed out after 60 sec; now I get a response immediately.) – Dax Fohl Apr 18 '16 at 11:38
  • @DaxFohl: That's curious. I pulled down the source and had a quick look, and from what I can see, setting any proxy_ setting aside from proxy_pass will initialize a bunch of settings, which I presume will run the proxy in a different way, so maybe setting anything will give this same behavior. – Almund Apr 18 '16 at 11:56
  • Did not solve the problem for me using it with a Node.js server. – vpx Oct 08 '16 at 23:15
  • I find that I only need `proxy_read_timeout` when debugging on the backend. Thanks! – Jeff Puckett Mar 26 '18 at 21:37
  • Where specifically should we add these lines? – Micheal J. Roberts Jan 10 '20 at 10:15
  • @MichealJ.Roberts You have to add this in the nginx config. – Sumit Sharma Mar 31 '20 at 06:39
  • I had to change the value of `client_max_body_size` from the default `1m` to `5m` to solve this error. I was proxying to a Flask application which would take the upload of an image and do some processing on it. For some larger files I was getting this error. To fix it, I did not change any of the timeout values from the defaults. – b.sodhi Jun 01 '20 at 10:24
  • My case was solved by adding the keep-alive header as @Almund suggested. – pavsaund Aug 20 '20 at 07:52
  • Is it actually necessary to set `proxy_connect_timeout` to something over 75s? Isn't this config variable for raw TCP SYN/stall? Related: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout – Artfaith Sep 09 '22 at 02:39

Increasing the timeout will not likely solve your issue since, as you say, the actual target web server is responding just fine.

I had this same issue and found it had to do with not using keep-alive on the connection. I can't say exactly why, but clearing the Connection header solved the issue and the request was proxied just fine:

server {
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://localhost:5000;
    }
}
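
If the backend is declared in an upstream block, the same two directives let nginx actually reuse connections via the keepalive directive. A sketch, assuming an illustrative upstream named `backend`:

```nginx
upstream backend {
    server localhost:5000;
    keepalive 16;                       # pool of idle connections per worker
}

server {
    location / {
        proxy_http_version 1.1;         # keepalive requires HTTP/1.1
        proxy_set_header Connection ""; # clear the default "Connection: close"
        proxy_pass http://backend;
    }
}
```
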

Have a look at this post, which explains it in more detail:

– Almund
  • MONTHS of problems solved by a single line, `proxy_set_header Connection "";` lol, don't use RunCloud. – nodws Dec 22 '17 at 17:57
  • We had a proxy that was timing out if the source took more than 5 seconds to respond. This did the trick. Thank you! – MattS Nov 05 '21 at 21:19
  • Thank you for this. This is the official explanation for why HTTP/1.1 is necessary: "By default NGINX uses HTTP/1.0 for connections to upstream servers and accordingly adds the Connection: close header to the requests that it forwards to the servers. The result is that each connection gets closed when the request completes, despite the presence of the keepalive directive in the upstream{} block." Source: https://www.nginx.com/blog/avoiding-top-10-nginx-configuration-mistakes/#no-keepalives – Mentakatz Jun 07 '23 at 14:43

user2540984, as well as many others, has pointed out that you can try increasing your timeout settings. I faced a similar issue and tried to change my timeout settings in the /etc/nginx/nginx.conf file, as almost everyone in these threads suggests. This, however, did not help me a single bit; there was no apparent change in NGINX's timeout settings. After many hours of searching, I finally managed to solve my issue.

The solution lies in this forum thread, and what it says is that you should put your timeout settings in /etc/nginx/conf.d/timeout.conf (and if this file doesn't exist, you should create it). I used the same settings as suggested in the thread:

proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;

This might not be the solution to your particular problem, but if anyone else notices that the timeout changes in /etc/nginx/nginx.conf don't do anything, I hope this answer helps!
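
For this to work, nginx.conf must actually include the conf.d directory; stock packages usually do so with a line like the following inside the http block (a sketch of the relevant fragment, not a complete config):

```nginx
http {
    # anything dropped into conf.d is read at http-block level,
    # so timeout directives placed there apply to all server blocks
    include /etc/nginx/conf.d/*.conf;
}
```

If that include line is absent, a timeout.conf in conf.d is silently ignored.
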

– Andreas Forslöw
  • Hi, there is no timeout.conf in my conf.d directory. You said to create it; I want to confirm: do I just add the above settings in timeout.conf? – tktktk0711 Jul 10 '19 at 02:20
  • Yes, just add them. You can modify them for your own needs, but these worked for me! – Andreas Forslöw Jul 23 '19 at 08:01
  • Unfortunately, in Laravel Homestead with Ubuntu and Nginx, this does not work. :( Do you mean just to add those lines, without `server{}` or anything else? The error comes out right after 5 minutes. I reload, reboot, and it never makes it past those 5 minutes or 300 seconds. Are there more ideas to fix it? – Pathros May 01 '20 at 22:53
  • In your nginx.conf main configuration file you have not mentioned where this timeout.conf file is included. In the end, Nginx has only one configuration file which includes all .conf files. I think it worked at your end because you increased the timeout to 600. – Rohit Gaikwad Mar 01 '21 at 04:35

If you want to increase or add a time limit for all sites, you can add the lines below to the nginx.conf file.

Add the lines below to the http section of /usr/local/etc/nginx/nginx.conf or /etc/nginx/nginx.conf:

fastcgi_read_timeout 600;
proxy_read_timeout 600;

If these lines don't already exist in the conf file, add them; otherwise, increase fastcgi_read_timeout and proxy_read_timeout to make sure that nginx and php-fpm do not time out.

To increase the time limit for only one site, edit /etc/nginx/sites-available/example.com:

location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 300;
}

After adding these lines, don't forget to reload the services:

service php7-fpm reload
service nginx reload

Or, if you're using Valet, simply type valet restart.

– Adeel

You can also face this situation if your upstream server uses a domain name, and its IP address changes (e.g.: your upstream points to an AWS Elastic Load Balancer)

The problem is that nginx will resolve the IP address once, and keep it cached for subsequent requests until the configuration is reloaded.

You can tell nginx to use a name server to re-resolve the domain once the cached entry expires:

location /mylocation {
    # use google dns to resolve host after IP cached expires
    resolver 8.8.8.8;
    set $upstream_endpoint http://your.backend.server/;
    proxy_pass $upstream_endpoint;
}

The docs on proxy_pass explain why this trick works:

"Parameter value can contain variables. In this case, if an address is specified as a domain name, the name is searched among the described server groups, and, if not found, is determined using a resolver."

Kudos to "Nginx with dynamic upstreams" (tenzer.dk) for the detailed explanation, which also contains some relevant information on a caveat of this approach regarding forwarded URIs.
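
The resolver directive also takes a `valid=` parameter that overrides the record's TTL, which helps when an ELB rotates IPs quickly. A sketch based on the block above (the DNS server and interval are illustrative):

```nginx
location /mylocation {
    # re-resolve the name at most every 30 seconds
    resolver 8.8.8.8 valid=30s;
    set $upstream_endpoint http://your.backend.server/;
    proxy_pass $upstream_endpoint;
}
```
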

– el.atomo
  • This answer is gold, exactly what happened to me. Upstream points to an AWS ELB and all of a sudden: Gateway Timeout. – Nathan Do Apr 23 '20 at 06:04
  • Great answer! Managed to solve it. – Eyal Solomon Oct 13 '22 at 09:20
  • I had the same issue with AWS upstream endpoints. Using an external resolver fixed it. I was able to trace the upstream defects by logging the upstream IP in access.log. – Matt Lo Apr 28 '23 at 15:45
In nginx:

proxy_read_timeout          300;

In my case with AWS, I also edited the load balancer settings: Attributes => Idle timeout.

– Jeff Gu Kang

Had the same problem. It turned out to be caused by iptables connection tracking on the upstream server. After removing --state NEW,ESTABLISHED,RELATED from the firewall script and flushing the connection-tracking table with conntrack -F, the problem was gone.
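
For illustration, the kind of rule being described looks roughly like this in a firewall script (a hypothetical fragment; the port and chain are assumptions, not from the original answer):

```shell
# conntrack-based accept rule: packets that don't match a tracked state
# can be silently dropped, e.g. after a reset leaves the state table stale
iptables -A INPUT -p tcp --dport 8001 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

# after changing the rules, flush the connection-tracking table
conntrack -F
```
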

– mindlab

If you're using a cloud provider and experiencing issues with NGINX, NGINX itself may not be the root cause.

Check the value of the "minimum ports per VM instance" setting on the NAT gateway that sits between your NGINX instance(s) and the proxy_pass destination. If the value is too small for the number of concurrent requests, increase it to resolve the problem.

For example, on Google Cloud, consider a case where a reverse-proxy NGINX is placed inside a subnet with a NAT gateway: requests are proxied to an API URL associated with the backend (upstream) through that NAT gateway, and an exhausted port allocation surfaces as a 504 in NGINX.

Refer to GCP's documentation on how the NAT gateway relates to the NGINX 504 timeout.
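
On Google Cloud specifically, the setting can be raised on an existing Cloud NAT gateway with gcloud (a sketch; the gateway, router, and region names are placeholders):

```shell
# raise the number of source ports reserved per VM so more
# concurrent outbound connections can pass through the NAT
gcloud compute routers nats update my-nat \
    --router=my-router --region=us-central1 \
    --min-ports-per-vm=4096
```
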

– snowpeak

In my case, I restarted PHP (php-fpm) and it was OK again.

– Mahdi Aslami Khavari

Adding the following values in /etc/nginx/nginx.conf fixed the issue for me.

proxy_connect_timeout 600;
proxy_send_timeout   600;
proxy_read_timeout   600;
send_timeout         600;
– Mukesh

If nginx_ajp_module is used, try adding `ajp_read_timeout 10m;` in the nginx.conf file.