httperf ... --rate=20 --send-buffer=4096 --recv-buffer=16384 --num-conns=100 --num-calls=10

This gives 1000 requests against nginx, as expected.

Total: connections 100 requests 1000 replies 1000 test-duration 5.719 s

Connection rate: 17.5 conn/s (57.2 ms/conn, <=24 concurrent connections)
Connection time [ms]: min 699.0 avg 861.3 max 1157.5 median 840.5 stddev 119.5
Connection time [ms]: connect 56.9
Connection length [replies/conn]: 10.000

Request rate: 174.8 req/s (5.7 ms/req)
Request size [B]: 67.0

Reply rate [replies/s]: min 182.0 avg 182.0 max 182.0 stddev 0.0 (1 samples)
Reply time [ms]: response 80.4 transfer 0.0
Reply size [B]: header 284.0 content 177.0 footer 0.0 (total 461.0)
Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0

CPU time [s]: user 1.42 system 4.30 (user 24.9% system 75.1% total 100.0%)
Net I/O: 90.2 KB/s (0.7*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

On the same hardware, querying uWSGI on port 8000 results in 200 requests, 100 replies, and 100 reset connections. What's wrong? The server is extremely powerful.

Total: connections 100 requests 200 replies 100 test-duration 5.111 s

Connection rate: 19.6 conn/s (51.1 ms/conn, <=5 concurrent connections)
Connection time [ms]: min 69.5 avg 128.4 max 226.8 median 126.5 stddev 27.9
Connection time [ms]: connect 51.4
Connection length [replies/conn]: 1.000

Request rate: 39.1 req/s (25.6 ms/req)
Request size [B]: 67.0

Reply rate [replies/s]: min 19.8 avg 19.8 max 19.8 stddev 0.0 (1 samples)
Reply time [ms]: response 68.8 transfer 8.2
Reply size [B]: header 44.0 content 2053.0 footer 0.0 (total 2097.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0

CPU time [s]: user 1.87 system 3.24 (user 36.6% system 63.4% total 100.0%)
Net I/O: 42.6 KB/s (0.3*10^6 bps)

Errors: total 100 client-timo 0 socket-timo 0 connrefused 0 connreset 100
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
Dmitry

2 Answers


This is the most likely answer:

http://projects.unbit.it/uwsgi/wiki#Wherearethebenchmarks

The listen queue size is reported in the uWSGI startup logs.

But as you have not posted your uWSGI config, it is impossible to give you the right hint.

roberto
  • There is no configuration per se; I just installed it and ran it from the command line. It does report a 100-connection limit on sockets, but when I set -l 500, for example, I get the same result. The link you provided mentions tuning the OS queue limit; is this the way to go? I don't think 100 is a lot for a 16-core Nehalem. – Dmitry Dec 15 '11 at 08:16
  • 3
    i suppose your "no configuration" means that you are running with only one process. Then the socket can only manage upto 101/102 concurrent connections. Yes, you have to kernel tune to increase socket backlog, but you have to work on number of processes/threads too and eventually timeouts. Regarding the ratio 100/ 16 cores, you may want to read this http://www.manpagez.com/man/2/listen/ to understand what socket backlog is and how it works. – roberto Dec 15 '11 at 09:05
  • Thanks for the link. I did figure out later on that running it without nginx is not going to provide as much performance. After setting this up properly, it handles 2000 req/s with Django (well, the most I could measure). – Dmitry Dec 17 '11 at 00:57
  • @Dmitry I'd love to see your configuration, to see what could be improved in mine, if you don't mind sharing? – Cyril N. Jul 22 '18 at 21:00
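The backlog behaviour roberto describes can be seen with a minimal Python sketch (not uWSGI-specific; the exact queue depth and failure mode are OS-dependent, this assumes Linux semantics):

```python
import socket

# Listening socket with a deliberately tiny accept backlog; we never
# call accept(), so completed connections pile up in the kernel queue.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)  # on Linux the kernel actually queues backlog+1 connections
port = srv.getsockname()[1]

results = []
clients = []
for _ in range(6):
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.settimeout(0.5)
    try:
        c.connect(("127.0.0.1", port))
        results.append("ok")       # connection sat in the accept queue
    except OSError:
        results.append("dropped")  # queue full: connect attempt fails
    clients.append(c)

print(results)  # e.g. ['ok', 'ok', 'dropped', ...] on Linux

for c in clients:
    c.close()
srv.close()
```

Once the queue is full, further clients fail; this is comparable to the starvation httperf reports as errors when a single busy uWSGI worker cannot drain its listen queue. Raising the backlog (and the kernel limits it depends on) just makes the queue deeper.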

Just use the <listen>1024</listen> directive in your uWSGI config.
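For example, in uWSGI's XML configuration format that directive would sit alongside the other options (the socket address and process count here are placeholders, a minimal sketch):

```xml
<uwsgi>
    <socket>127.0.0.1:8000</socket>
    <processes>4</processes>
    <listen>1024</listen>
</uwsgi>
```

Note that the kernel may cap the effective value at its own limit (net.core.somaxconn on Linux), so that sysctl may need to be raised as well.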