
We have a server that is serving one HTML file.

Right now the server has 2 CPUs and 2 GB of RAM. From blitz.io, we are getting about 12k connections per minute, with around 200 timeouts over that 60 seconds, at 250 concurrent connections per second.
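For scale, 12k connections per minute works out to roughly 12,000 / 60 = 200 new connections per second.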

worker_processes  2;

events {
    worker_connections  1024;
}

If I increase the timeout, the response time starts creeping up beyond a second.

What else can I do to squeeze more juice out of this?


1 Answer


Config file:

worker_processes  4;  # 2 * Number of CPUs

events {
    worker_connections  19000;  # It's the key to high performance - have a lot of connections available
}

worker_rlimit_nofile    20000;  # Each connection needs a file handle (or 2 if you are proxying)


# Total number of users you can serve = worker_processes * worker_connections
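
With those values, the formula gives roughly 4 * 19000 = 76,000 connections that can be held open at once, versus 2 * 1024 = 2,048 with the original two-worker config, which is far more headroom than the ~250 concurrent clients in the blitz.io test.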

More info: Optimizing nginx for high traffic loads

Bulat
  • I think the equation provided for the total number of users per second is wrong. Instead, the average number of users served per second should be: worker_processes * worker_connections / (keepalive_timeout * 2). Therefore, the above conf file can serve ~7.6K connections per second, which is way above what @ablemike needs. However, worker_rlimit_nofile is a good directive to use if ulimit is restrictive and you don't want to modify it. – Ethan May 24 '12 at 08:15
  • @Ethan, why should it be divided by 2? If every second we get 100 new connections and the timeout is 5, then starting with the sixth second we will constantly have 5*100 connections that are still not terminated on the server side. We may have fewer if some users abort their connections themselves. – Bulat Jun 14 '12 at 16:03
  • That formula does not work if keepalive is set to 0s (disabled). – Tilo Mar 21 '13 at 05:12
  • Thanks, Tilo. The better formula is: total number of users you can serve in 1 second = worker_processes * worker_connections / (keepalive_timeout + time_required_to_serve_one_request). – Bulat May 12 '13 at 15:15
  • Each connection needs 2 file handles even for static files like images/JS/CSS: one for the client's connection and a second for opening the static file. Therefore, it's safer to set worker_rlimit_nofile = 2 * worker_connections. – Ethan May 19 '13 at 20:52
  • Use worker_rlimit_nofile, but one should also call 'ulimit -n' to set the open file count per process. This is better done in the init script. – Ethan May 19 '13 at 20:53
  • Thank you, Ethan. Through our collective work, this topic is becoming an encyclopedia of high-throughput nginxing :) – Bulat Jun 23 '13 at 18:19
  • You should remove `keepalive_timeout` from your formula. Nginx closes keepalive connections when the `worker_connections` limit is reached. – VBart Aug 20 '13 at 16:58
  • Rather than removing it, you should decide on a minimum keepalive timeout you want to grant everyone to connect and use that instead in the formula. – blubberdiblub Feb 10 '14 at 08:14
  • How do we determine worker_connections? We can't keep increasing it to meet our needs, can we? It should be based on CPU, memory, etc., correct? Then how? – Adam C. Feb 19 '14 at 14:42
  • Adam, every worker_connection (in the sleeping state) needs 256 bytes of memory, so you can increase it easily. – Bulat Mar 17 '14 at 14:57
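
Putting the refinements from the comment thread together, here is a rough sizing sketch; the 5-second keepalive and the ~0.1 s per-request service time below are illustrative assumptions, not measured values:

worker_processes        4;
worker_rlimit_nofile    38000;   # 2 * worker_connections: one fd for the client, one for the static file being served

events {
    worker_connections  19000;
}

http {
    keepalive_timeout   5;       # a shorter keepalive frees worker_connections sooner
}

# users served per second ~ worker_processes * worker_connections / (keepalive_timeout + time_required_to_serve_one_request)
#                         ~ 4 * 19000 / (5 + 0.1) ~ 14,900

As noted in the comments, worker_rlimit_nofile covers nginx's side, but the system limit (ulimit -n in the init script, or /etc/security/limits.conf) may also need raising if it is lower.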