
I've been trying to figure out how the two parameters of this setting:

proxy_buffers [number] [size];

may affect (improve or degrade) proxy server performance, and whether I should change the buffer size, the number of buffers, or both.

In my particular case, we're talking about a system serving dynamically generated binary files that vary in size (~60 - 200kB). Nginx serves as a load balancer in front of 2 Tomcats that act as generators. I saw in Nginx's error.log that with the default buffer settings all proxied responses are buffered to a temporary file, so what seemed logical was to change the setting to something like this:

proxy_buffers 4 32k;

and the warning message disappeared.
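For illustration, here's roughly where that directive sits in my config (the upstream name and addresses are placeholders, not my real setup):

```nginx
# Hypothetical load-balancer sketch; upstream name/ports are illustrative.
upstream tomcat_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://tomcat_backend;

        # 4 buffers of 32k each = up to 128k of a response
        # held in memory per connection before spilling to disk.
        proxy_buffers 4 32k;
    }
}
```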

What's not clear to me is whether I should prefer one larger buffer or several smaller ones, e.g.:

`proxy_buffers 1 128k;` vs `proxy_buffers 4 32k;` vs `proxy_buffers 8 16k;`, etc.

What could be the difference, and how it may affect performance (if at all)?




First, it's a good idea to see what the documentation says about the directives:

Syntax: proxy_buffers number size;
Default: proxy_buffers 8 4k|8k;
Context: http, server, location

Sets the number and size of the buffers used for reading a response from the proxied server, for a single connection. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on a platform.

The documentation for the proxy_buffering directive provides a bit more explanation:

When buffering is enabled, nginx receives a response from the proxied server as soon as possible, saving it into the buffers set by the proxy_buffer_size and proxy_buffers directives. If the whole response does not fit into memory, a part of it can be saved to a temporary file on the disk. …

When buffering is disabled, the response is passed to a client synchronously, immediately as it is received. …
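Putting those directives together, a buffering setup might look something like this (a sketch with nginx's documented defaults spelled out explicitly; the upstream name is a placeholder):

```nginx
location / {
    proxy_pass http://backend;

    proxy_buffering on;              # default: on
    proxy_buffer_size 4k;            # buffer for the first part of the response (headers)
    proxy_buffers 8 4k;              # 8 buffers of one memory page (platform-dependent)
    proxy_busy_buffers_size 8k;      # portion that may be busy sending to the client
    proxy_max_temp_file_size 1024m;  # cap on the on-disk spill file
}
```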


So, what does all of that mean?

  1. Any increase in buffer size applies per connection, so even an extra 4K per connection can be quite an increase under load.

  2. You may notice that the default buffer size equals the platform's memory page size. Long story short, choosing the "best" number may well go beyond the scope of this question, and may depend on the operating system and CPU architecture.

  3. Realistically, the difference between a bigger number of smaller buffers, or a smaller number of bigger buffers, may depend on the memory allocator provided by the operating system, as well as how much memory you have and how much memory you want to be wasted by being allocated without being used for a good purpose.

    E.g., I would not use `proxy_buffers 1 1024k`, because then you'd be allocating a 1MB buffer for every buffered connection, even when the content easily fits in 4KB; that would be wasteful (although, of course, there's also the little-known fact that memory which is allocated but never touched has been virtually free since the 1980s, thanks to virtual memory). There's likely a good reason that the default number of buffers was chosen to be 8 as well.

  4. Increasing the buffers may be somewhat pointless if you cache the responses for these binary files with the proxy_cache directive, because Nginx will write them to disk for caching anyway, and you might as well not spend extra memory buffering the same responses.
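For instance, if you do cache these responses, a setup along these lines (the cache path, zone name, and location are hypothetical) makes large in-memory proxy buffers largely redundant:

```nginx
# Hypothetical cache configuration; path, zone name and times are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=binfiles:10m
                 max_size=1g inactive=60m;

server {
    location /generated/ {
        proxy_pass http://tomcat_backend;
        proxy_cache binfiles;
        proxy_cache_valid 200 10m;

        # Responses get written to the on-disk cache anyway,
        # so the default buffers are usually sufficient here.
        proxy_buffers 8 4k;
    }
}
```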

A good operating system should already be capable of caching what gets written to disk, through its filesystem buffer-cache functionality. There is also the somewhat strangely named page cache article at Wikipedia, as the "disk buffer" name was already taken by the article on HDD hardware.

All in all, there's likely little need to duplicate buffering directly within Nginx. You might also take a look at varnish-cache for some additional ideas and inspiration about the subject of multi-level caching. The fact is, "good" operating systems are supposed to take care of many things that some folks mistakenly attempt to optimise through application-specific functionality.

  5. If you don't cache the responses, then you might as well ask yourself whether buffering is appropriate in the first place.

    Realistically, buffering can be useful to better shield your upstream servers from the Slowloris attack vector; however, if you let your Nginx use megabyte-sized buffers, then essentially you're exposing Nginx itself to consuming an unreasonable amount of resources to serve clients with malicious intent.

  6. If the responses are too large, you might want to optimise things at the response level: e.g., splitting some content into individual files, compressing at the file level, or compressing with gzip via HTTP Content-Encoding, etc.
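As a sketch of that last option, gzip with HTTP Content-Encoding might be enabled like this (directive values are illustrative; note that gzip helps little for binary formats that are already compressed):

```nginx
location /generated/ {
    proxy_pass http://tomcat_backend;

    gzip on;
    gzip_comp_level 5;
    # Only worthwhile for compressible types; adjust to your content.
    gzip_types application/octet-stream application/json text/plain;
    gzip_min_length 1024;   # skip tiny responses where gzip overhead dominates
}
```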


TL;DR: this is really a pretty broad question, and there are too many variables that require non-trivial investigation to come up with the "best" answer for any given situation.

Armen Michaeli (asked)

cnst (answered)
  • @aland BTW, another thing I forgot to mention — if you simply use the upstream for authorisation purposes, and files are already generated and/or available on disc, then you might also want to use `X-Accel-Redirect` functionality as per http://nginx.org/r/proxy_ignore_headers and http://nginx.org/r/internal – cnst Jun 17 '18 at 20:49
  • 1
    This was very helpful. Also got help from: How many nginx buffers is too many? "https://stackoverflow.com/questions/16627358/how-many-nginx-buffers-is-too-many" – DDS Feb 27 '19 at 15:31
  • TL; DR — Increase the numbers, keep the size. Don’t mind static files because they’ll be cached in a disk anyway. – Константин Ван Jun 07 '21 at 04:56