Any idea how many server blocks I can add to an Nginx configuration? I need to use it as a reverse proxy with multiple subdomains (one subdomain for each client). Can it successfully support 10,000 server blocks? Are there any benchmark studies on this?
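For context, a minimal sketch of the kind of per-client block this implies; the subdomain and the upstream port here are hypothetical, one pair per client:

```nginx
# One of potentially 10,000 near-identical blocks, one per client.
server {
    listen 80;
    server_name client1.example.com;  # hypothetical client subdomain

    location / {
        # hypothetical per-client Docker container published on a local port
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```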
-
As many as you want. But you probably do want one server block that would match all these hostnames. – Alexey Ten Jun 18 '15 at 09:03
-
Is there any impact on performance? Also, with 10k server blocks, how often can I reload nginx without impacting performance too much? – Himanshu Jun 18 '15 at 09:04
-
Of course there is. Nginx will have to look up the exact domain among thousands of server names. – Alexey Ten Jun 18 '15 at 09:06
-
Do you really need that many server blocks? I'm pretty sure that if you do something like `username.example.com`, one server block for `*.example.com` will fit all your needs. – Alexey Ten Jun 18 '15 at 09:07
-
Yes, as each `username.example.com` is being reverse proxied to a user-specific Docker container. – Himanshu Jun 18 '15 at 09:10
-
Well, OK. You could use a `map` for this, but it's fine to have many server blocks. You'll probably have to adjust `server_names_hash_bucket_size` and `server_names_hash_max_size`. – Alexey Ten Jun 18 '15 at 09:18
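Since all the blocks would share the same shape, a single wildcard server block plus a `map` could replace them, as the comment suggests. A minimal sketch, assuming each user's container is published on a known local port (all names and ports here are hypothetical, and the whole thing lives inside the `http` block):

```nginx
# Map the requested hostname to a per-user backend address.
# Entries are hypothetical; in practice this list could be generated
# and pulled in with an include directive.
map $host $backend {
    alice.example.com 127.0.0.1:8001;
    bob.example.com   127.0.0.1:8002;
    default           127.0.0.1:8000;  # fallback backend
}

# One wildcard server block handles every subdomain.
server {
    listen 80;
    server_name *.example.com;

    location / {
        proxy_pass http://$backend;
        proxy_set_header Host $host;
    }
}
```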
1 Answer
This is not really a question of how many you *can* have, but of how many you could handle decently.
How many you can handle efficiently will depend greatly on your hardware (not the hardware powering the containers, but the actual box nginx runs on), since nginx will try to keep its server-name hash tables in CPU cache (preferably L1, which is fastest albeit small, or failing that, L2). That's the basic theory.
According to the nginx documentation, each server name takes 32, 64, or 128 bytes in the hash, depending on your configuration, so by the 1,000-server-block mark you're probably no longer in L1, which means you're moving from roughly 1-2 nanosecond lookups to perhaps 10-15 nanoseconds or more (ballpark figures). Keep growing and you may run out of L2 as well (again, this depends on your actual hardware), landing you in L3 or even RAM, which is slower still. Even if every cache level hits 99% or better, lookup latency becomes an issue as traffic grows, because more CPU time goes to just determining where each visitor intends to go. And this assumes all server blocks use exact domain names; wildcards or regexes would hurt performance even further.
Can it be done? Of course... Just get a sturdy CPU with the biggest L1 cache you can find and a big L2 cache as well. And if you absolutely must do it, stay away from wildcards and regex.
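To make that distinction concrete, a sketch of the three `server_name` forms (names hypothetical): exact names are resolved through the hash tables, while regexes are tested sequentially on every request.

```nginx
# Exact name: resolved via the server-name hash (fastest).
server {
    listen 80;
    server_name client1.example.com;
}

# Wildcard name: checked via separate wildcard hash tables.
server {
    listen 80;
    server_name *.example.com;
}

# Regex name: tested sequentially, in order, per request (slowest).
server {
    listen 80;
    server_name ~^client\d+\.example\.com$;
}
```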
Be ready to tune the `server_names_hash_max_size` and `server_names_hash_bucket_size` directives. You'll know you need to after adding the server blocks: nginx may take unusually or unacceptably long to restart, or may not restart at all. That's your cue to change those directives as outlined at http://nginx.org/en/docs/hash.html and http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size
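As a rough sketch, that tuning lives in the `http` block; the starting values below are guesswork to be adjusted until nginx starts cleanly:

```nginx
http {
    # Hypothetical starting values for ~10,000 server names. Raise
    # server_names_hash_max_size first; only increase the bucket size
    # (in multiples of the CPU cache line size) if nginx still
    # complains at startup.
    server_names_hash_max_size    32768;
    server_names_hash_bucket_size 128;

    # ... server blocks ...
}
```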
Even if nginx does restart, you need to monitor your hardware closely as traffic ramps up, to determine how serious the bottleneck gets under load. Best case, you'll add a fraction of a second to each request... Worst case, you could bring the whole box to its knees (but that's really pushing it to the extreme).
Having said all that... have you explored other options? Doing it via DNS, perhaps, or moving to enterprise-grade gear like an F5 device, or some other lower-level solution?
