So, I have a simple Flask API application running on gunicorn with tornado workers. The gunicorn command line is:

gunicorn -w 64 --backlog 2048 --keep-alive 5 -k tornado -b 0.0.0.0:5005 --pid /tmp/gunicorn_api.pid api:APP

When I run Apache Benchmark from another server directly against gunicorn, here are the relevant results:

ab -n 1000 -c 1000 'http://****:5005/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second:    2823.71 [#/sec] (mean)
Time per request:       354.144 [ms] (mean)
Time per request:       0.354 [ms] (mean, across all concurrent requests)
Transfer rate:          2669.29 [Kbytes/sec] received

So we're getting close to 3k requests/sec.

Now, I need SSL. So I'm running nginx as a reverse proxy. Here is what the same benchmark looks like against nginx on the same server:

ab -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'
Requests per second:    355.16 [#/sec] (mean)
Time per request:       2815.621 [ms] (mean)
Time per request:       2.816 [ms] (mean, across all concurrent requests)
Transfer rate:          352.73 [Kbytes/sec] received

That's a performance drop of 87.4%. But for the life of me, I cannot figure out what is wrong with my nginx setup, which is this:

upstream sdn_api {
    server 127.0.0.1:5005;

    keepalive 100;
}

server {
    listen [::]:443;

    ssl on;
    ssl_certificate /etc/ssl/certs/api.sdninja.com.crt;
    ssl_certificate_key /etc/ssl/private/api.sdninja.com.key;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!kEDH:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
    ssl_session_cache shared:SSL:10m;

    server_name api.*****.com;
    access_log  /var/log/nginx/sdn_api.log;

    location / {
        proxy_pass http://sdn_api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        client_max_body_size 100M;
        client_body_buffer_size 1m;
        proxy_intercept_errors on;
        proxy_buffering on;
        proxy_buffer_size 128k;
        proxy_buffers 256 16k;
        proxy_busy_buffers_size 256k;
        proxy_temp_file_write_size 256k;
        proxy_max_temp_file_size 0;
        proxy_read_timeout 300;
    }

}

And my nginx.conf:

user www-data;
worker_processes 8;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip off;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##

    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##

    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

So does anyone have any idea why it's running so slow with this config? Thanks!

George Sibble
  • Check this out: http://stackoverflow.com/questions/149274/http-vs-https-performance – wawawawa Jan 28 '13 at 14:17
  • Thanks wa! However, I changed from nginx to a hardware load balancer with ssl (it does all of the ssl work) and with the same cipher package as nginx, I'm getting over 1500 requests per second. It would probably be higher but the LB has a connection limit. Suffice it to say, the problem is not SSL, it's nginx. – George Sibble Jan 28 '13 at 21:26
  • 1
    Can you post a benchmark for nginx with SSL disabled? Can you also confirm that you're running 2 worker processes per cpu-core? What OS/arch are you running on? – Phillip B Oldham Jan 29 '13 at 16:21
  • Hi unpluggd. I went with another solution already, but the benchmark with SSL disabled was about the same as straight gunicorn. And I did have two workers per cpu-core. Ubuntu/12.04. – George Sibble Feb 04 '13 at 05:17
  • What happens if you bind gunicorn to a socket rather than an IP? `-b unix:/tmp/gunicorn.sock` and simplify your `nginx.conf` to the bare minimal: http://pastebin.com/THaZxUR7? – sjdaws Feb 24 '13 at 09:15
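For what it's worth, a bare-minimal setup along the lines of that last comment might look roughly like this. This is only a sketch, not the contents of the linked pastebin; it reuses the upstream name, certificate paths, and server_name from the question, and the socket path from the comment:

gunicorn -w 64 --backlog 2048 --keep-alive 5 -k tornado -b unix:/tmp/gunicorn.sock --pid /tmp/gunicorn_api.pid api:APP

upstream sdn_api {
    # gunicorn listening on a unix socket instead of 127.0.0.1:5005
    server unix:/tmp/gunicorn.sock;
}

server {
    listen [::]:443 ssl;
    server_name api.*****.com;

    ssl_certificate     /etc/ssl/certs/api.sdninja.com.crt;
    ssl_certificate_key /etc/ssl/private/api.sdninja.com.key;

    location / {
        proxy_pass http://sdn_api;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}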

1 Answer

A large part of HTTPS overhead is in the handshake. With -n 1000 -c 1000 and no keep-alive, every request opens a fresh connection and pays the full TLS handshake cost. Pass -k to ab to enable persistent connections, and you will see that the benchmark is significantly faster.
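For example, re-running the same HTTPS benchmark with keep-alive enabled (same redacted URL as above; -k is ab's flag for HTTP KeepAlive):

ab -k -n 1000 -c 1000 'https://****/v1/location/info?location=448&ticket=55384&details=true&format=json&key=****&use_cached=true'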

Hongli