
This issue happens on pure PHP files served by Nginx & PHP-FPM. I stumbled upon it while developing my website with Symfony, where the problematic content-length range is 3702-15965 instead (I wonder why it differs from vanilla PHP).

What I've tried so far:

  • The timeout duration is 15 seconds, but I've tried increasing it to 300 seconds and it still times out, so I'm guessing it's stuck in some kind of infinite loop.
  • It doesn't look resource-related, because the page works even when the content length is 5 million characters.
  • I created various tests with different characters to see whether I could shift the problematic content-length range. The answer is no; the range stayed the same in all my tests.
  • I tried disabling gzip. It didn't change the length range, but the response did: with gzip enabled the response is "upstream request timeout"; with gzip disabled it is completely blank.

Notes:

  • This issue doesn't exist on my localhost.
  • It occasionally loads the page normally; I can't reproduce the failure consistently.
  • There are no errors in the Nginx, PHP, or Cloud Run logs besides the "request timed out" lines.

Any help is appreciated. Thanks.

Taylan

2 Answers


Funnily enough, I solved the issue while writing the question: adding fastcgi_buffering off; to the Nginx config fixes it.

But I still don't understand what the problem was or why disabling buffering fixed it, so if anyone can explain it, I don't mind marking that answer as the solution.
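For reference, this is the kind of change involved; a minimal sketch, assuming a typical PHP-FPM setup (the location block and the fastcgi_pass socket path are illustrative, adapt them to your own config):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # illustrative socket path

    # Stream the FPM response to the client as it arrives,
    # instead of buffering it in Nginx first.
    fastcgi_buffering off;
}
```

Note that with buffering off, Nginx can only send data to the client as fast as the client reads it, so a slow client ties up the PHP-FPM worker for the duration of the request.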

    Funnily enough, I encountered something similar quite recently. It turned out that I was sending more data than could fit in the fastcgi_buffer_size buffer (my response was 50KB). Bumping fastcgi_buffer_size (along with fastcgi_buffers and fastcgi_busy_buffers_size) allowed me to keep buffering on and get everything working. – garbetjie Mar 16 '21 at 12:18

This is specific to Nginx and not Cloud Run.

When Nginx starts receiving a response from a FastCGI backend, it buffers the response in memory. If the response is too large to fit in the memory buffers, part of it can be saved to a temporary file on disk, which is controlled by other directives as explained here.

By disabling fastcgi_buffering, the response is passed to the client synchronously, as it is received from the backend. You can find more information in these articles: [1][2][3]
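If you would rather keep buffering enabled, the alternative mentioned in the comment above is to enlarge the buffers so the whole response fits. A hedged sketch (the sizes below are example values, not recommendations; tune them to your actual response sizes):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # illustrative socket path

    # Buffer for the first part of the response (response headers).
    fastcgi_buffer_size 32k;

    # Buffers for the rest of the response body: 16 buffers of 32k each.
    fastcgi_buffers 16 32k;

    # Upper limit on buffers that may be busy sending to the client;
    # must be at least as large as fastcgi_buffer_size.
    fastcgi_busy_buffers_size 64k;
}
```

Run nginx -t after changing these values; Nginx enforces consistency constraints between the three directives and will refuse to start on an invalid combination.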

[1] upstream sent too big header while reading response header from upstream

[2] Nginx upstream sent too big header while reading response header from upstream

[3] https://gist.github.com/magnetikonline/11312172#determine-fastcgi-response-sizes

MaryM
    Thanks for the links but I don't think this explains why timeout happens **only** if content is between 4013-8092 characters (or 3702-15965 when using Symfony). – Taylan Aug 08 '20 at 06:10
  • PHP internal buffer handling (4096/8092) combined with Nginx internal buffer handling (4096/8092). Consult your Symfony handbook for the PHP buffering options they make use of (or look into the source code if they hide it from the documentation). – hakre Aug 14 '23 at 13:08