40

I have a REST API that returns JSON responses. Sometimes (and seemingly at complete random), the JSON response gets cut off halfway through. So the returned JSON string looks like:

...route_short_name":"135","route_long_name":"Secte // end of response

I'm pretty sure it's not an encoding issue, because the cut-off point keeps changing position depending on the JSON string that's returned. I haven't found a particular response size at which the cut-off happens either (I've seen a 65 kB response come through intact, whereas a 40 kB one would get truncated).

Looking at the response headers when the cut-off does happen:

{
    "Cache-Control" = "must-revalidate, private, max-age=0";
    Connection = "keep-alive";
    "Content-Type" = "application/json; charset=utf-8";
    Date = "Fri, 11 May 2012 19:58:36 GMT";
    Etag = "\"f36e55529c131f9c043b01e965e5f291\"";
    Server = "nginx/1.0.14";
    "Transfer-Encoding" = Identity;
    "X-Rack-Cache" = miss;
    "X-Runtime" = "0.739158";
    "X-UA-Compatible" = "IE=Edge,chrome=1";
}

Doesn't ring a bell either. Anyone?

samvermette

7 Answers

37

I had the same problem:

Nginx was cutting off some responses from the FastCGI backend. For example, I couldn't generate a proper SQL backup from phpMyAdmin. I checked the logs and found this:

2012/10/15 02:28:14 [crit] 16443#0: *14534527 open() "/usr/local/nginx/fastcgi_temp/4/81/0000004814" failed (13: Permission denied) while reading upstream, client: *, server: , request: "POST / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "", referrer: "http://*/server_export.php?token=**"

All I had to do to fix it was to give proper permissions to the /usr/local/nginx/fastcgi_temp folder, as well as client_body_temp.
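For reference, the fix was roughly the following (a sketch; www-data as the worker user and the /usr/local/nginx prefix are assumptions, so check the user directive in your nginx.conf and your actual paths):

# Give the nginx worker user ownership of nginx's temp directories.
# www-data is an assumption; check the "user" directive in nginx.conf.
chown -R www-data:www-data /usr/local/nginx/fastcgi_temp /usr/local/nginx/client_body_temp
chmod -R 700 /usr/local/nginx/fastcgi_temp /usr/local/nginx/client_body_temp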

Fixed!

Thanks a lot samvermette, your Question & Answer put me on the right track.

t0mm13b
Clement Nedelcu
  • Thank you so much! I've been pulling my hair out for so long trying to solve this; who knew it would be so simple. – Eugene Kuzmenko Jul 17 '13 at 19:45
  • For CentOS, my /var/cache/nginx had root:root ownership, so my "www-data" user didn't have access :-( Also, you might want to delete your fastcgi_temp subdirs, because nginx will supposedly regenerate them with the correct permissions. – PJ Brunet Feb 05 '14 at 07:22
  • This answer saved me! But for 1.8 it was the `/var/cache/nginx/fastcgi_temp` folder, so I ran `chmod -R 777 /var/cache/nginx/`. – Oleg Abrazhaev Jun 11 '15 at 07:08
  • Thanks a lot, you saved me a lot of time! For my setup I tried giving full access to a different folder and somehow it did not work, so I solved it by overriding the path where `proxy_buffering` saves the files: `proxy_temp_path new/location/;` – Vlad Jan 27 '16 at 13:43
  • What do you mean by "proper permissions"? Did you change the owner of the folder, or did you `chmod 777`? Is it safe to `chmod 777`? – shamaseen Feb 10 '21 at 13:54
32

Looked up my nginx error.log file and found the following:

13870 open() "/var/lib/nginx/tmp/proxy/9/00/0000000009" failed (13: Permission denied) while reading upstream...

Looks like nginx's proxy was trying to save the response content (passed in by Thin) to a file. It only does so when the response size exceeds the total size of proxy_buffers (64 kB by default on 64-bit platforms). So in the end the bug was connected to my response size.

I ended up fixing my issue by setting proxy_buffering to off in my nginx config file, instead of upping proxy_buffers or fixing the file-permission issue.
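For anyone who wants to try the same thing, this is roughly what it looks like in the config (a sketch; the upstream address is a placeholder for wherever Thin is listening, and the commented-out lines show the buffer-raising alternative instead):

location / {
    proxy_pass http://127.0.0.1:3000;  # placeholder; point at your Thin upstream
    proxy_buffering off;               # stream the response straight through
    # Alternatively, keep buffering on and raise the in-memory buffers:
    # proxy_buffers 16 64k;
    # proxy_buffer_size 64k;
}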

I'm still not sure about the purpose of nginx's buffering. I'd appreciate it if anyone could add to that. Is disabling the buffering completely a bad idea?

samvermette
  • I also used "proxy_buffering off;", which fixed my problems. I don't know of any better way to do it. – SDwarfs Oct 31 '14 at 14:40
  • Same problem. Thanks for saving me. I was at my wits’ end here. – Daniel Nov 14 '15 at 22:09
  • Thanks a lot, I tried it and it worked for me. I read more on the topic though, and it seems not to be the recommended way to do it. – Vlad Jan 27 '16 at 13:46
15

I had a similar problem with responses from the server being cut off.

It happened only when I added a JSON header before returning the response: header('Content-Type: application/json');

In my case gzip caused the issue.

I solved it by specifying gzip_types in nginx.conf and adding application/json to the list before turning on gzip:

gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json;
gzip on;
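To verify the fix took, something like this should show the response coming back compressed (the URL is a placeholder; substitute your own endpoint):

# GET the resource, discard the body, and print the response headers.
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://example.com/api/routes | grep -i 'content-encoding'
# Expect: Content-Encoding: gzip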
Yawar
Dralac
  • We had the same issue, although the server configuration was exactly as in the answer. Adding the request header 'Accept-Encoding: gzip' on the client side solved it. – Igor Grinfeld Jun 17 '15 at 12:44
  • This fixed it for me as well, for a Zend Expressive install on Apache and Ubuntu 14 & 16. – Erik Pöhler Sep 18 '16 at 23:20
2

It's possible you ran out of inodes, which prevents NginX from using the fastcgi_temp directory properly.

Try df -i, and if you have 0% of your inodes free, that's a problem.

Try find /tmp -mtime +10 (older than 10 days) to see what might be filling up your disk.

Or maybe it's another directory with too many files. For example, go to /home/www-data/example.com and count the files:

find . -print | wc -l
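If df -i does show you're out of inodes, a rough way to find the offending directory (a sketch; adjust the starting paths to suit your box):

# Count files under each candidate directory; the biggest count
# is the likeliest inode hog.
for d in /tmp /var/* /home/*; do
    printf '%8d %s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -n | tail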

PJ Brunet
2

Thanks for the question and the great answers, they saved me a lot of time. In the end, the answers from Clement and Sam helped me solve my issue, so the credit goes to them.

Just wanted to point out that, after reading a bit about the topic, it seems it is not recommended to disable proxy_buffering, since it could make your server stall if the clients (users of your system) have a bad internet connection, for example.

I found this discussion very useful for understanding more. The example from Francis Daly made it very clear for me:

Perhaps it is easier to think of the full process as a chain of processes.

web browser talks to nginx, over a 1 MB/s link.
nginx talks to upstream server, over a 100 MB/s link.
upstream server returns 100 MB of content to nginx.
nginx returns 100 MB of content to web browser.

With proxy_buffering on, nginx can hold the whole 100 MB, so the nginx-upstream connection can be closed after 1 s, and then nginx can spend 100 s sending the content to the web browser.

With proxy_buffering off, nginx can only take the content from upstream at the same rate that nginx can send it to the web browser.

The web browser doesn't care about the difference -- it still takes 100 s for it to get the whole content.

nginx doesn't care much about the difference -- it still takes 100 s to feed the content to the browser, but it does have to hold the connection to upstream open for an extra 99 s.

Upstream does care about the difference -- what could have taken it 1 s actually takes 100 s; and for the extra 99 s, that upstream server is not serving any other requests.

Usually: the nginx-upstream link is faster than the browser-nginx link; and upstream is more "heavyweight" than nginx; so it is prudent to let upstream finish processing as quickly as possible.
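So rather than switching buffering off entirely, a middle ground is to keep it on and size the in-memory buffers so typical responses fit without spilling to disk. A sketch of that (the sizes are illustrative, not recommendations from the thread):

proxy_buffering on;             # the default; lets upstream finish as fast as possible
proxy_buffer_size 8k;           # buffer for the first part of the response (headers)
proxy_buffers 16 64k;           # up to 1 MB of body held in memory
proxy_max_temp_file_size 0;     # never buffer to disk; sidesteps the temp-dir permission trap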

Dr1Ku
Vlad
2

We had a similar problem. It was caused by our REST server (DropWizard) having SO_LINGER enabled. Under load, DropWizard was disconnecting from NGINX before it had a chance to flush its buffers. The JSON was >8 kB and the front end would receive it truncated.

Chris Kannon
0

I've also had this issue – client-side JSON parsing was failing, the response was being cut off or, worse still, the response was stale and was read from some random memory buffer.

I went through some guides – Serving Static Content Via POST From Nginx, as well as Nginx: Fix to “405 Not Allowed” when using POST serving static – while trying to configure nginx to serve a simple JSON file.

In my case, I had to use:

max_ranges 0;

(so that the browser doesn't get any funny ideas when nginx adds Accept-Ranges: bytes in the response header), as well as

sendfile off;

in my server block for the proxy that serves the static files. Adding it to the location block that would finally serve the found JSON file didn't help.

Another pro tip for serving static JSON would be not to forget the response type:

charset_types application/json;
default_type application/json;
charset utf-8;
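Putting those pieces together, the relevant part of the config looked roughly like this (a sketch; the server name and paths are placeholders, not my actual setup):

server {
    server_name example.com;            # placeholder
    sendfile off;                       # avoid the stale-response behaviour
    max_ranges 0;                       # don't advertise Accept-Ranges: bytes

    location /data/ {
        root /var/www/example;          # placeholder path to the static JSON files
        charset_types application/json;
        default_type application/json;
        charset utf-8;
    }
}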

Other searches yielded folder-permission issues (nginx is cutting the end of dynamic pages and cache it) or proxy-buffering issues (Getting a chunked request through nginx), but that was not my case.

Dr1Ku