
I have a Rails application running on Nginx and Puma in a production environment.

There is a problem with web page loading (a TTFB delay), and I am trying to figure out the reason.

On the backend side, production.log shows that the page is rendered quickly enough, in 134ms:

Completed 200 OK in 134ms (Views: 49.9ms | ActiveRecord: 29.3ms)

But in the browser I see that the TTFB is 311.49ms:


I understand that there may be a problem in the settings, or the process count may not be optimal, but I cannot find the reason for the ~177ms delay. I would be grateful for any advice.

My VPS properties and configurations are listed below.

Environment

  • Nginx 1.10.3
  • Puma 3.12.0 (rails 5.2)
  • PostgreSQL
  • Sidekiq
  • ElasticSearch

VPS properties

  • Ubuntu 16.04 (64-bit)
  • 8 cores (2.4 GHz)
  • 16 GB of RAM
  • Network Bandwidth: 1000 Mbps

nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
  worker_connections 8096;
  multi_accept on;
  use epoll;
}

http {

  # Basic Settings
  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  types_hash_max_size 2048;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  # Logging Settings
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  # Gzip Settings
  gzip on;
  gzip_disable "msie6";
  gzip_vary on;
  gzip_proxied any;
  gzip_comp_level 6;
  gzip_buffers 16 8k;
  gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}

web_app.conf

upstream puma {
  server unix:///home/deploy/apps/web_app/shared/tmp/sockets/web_app-puma.sock fail_timeout=0;
}

log_format timings '$remote_addr - $time_local '
                   '"$request" $status '
                   '$request_time $upstream_response_time';

server {
  server_name web_app.com;

  # SSL configuration
  ssl on;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  ssl_protocols TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
  ssl_prefer_server_ciphers on;
  ssl_buffer_size 4k;

  ssl_certificate  /etc/ssl/certs/cert.pem;
  ssl_certificate_key /etc/ssl/private/key.pem;

  root /home/deploy/apps/web_app/shared/public;

  access_log /home/deploy/apps/web_app/current/log/nginx.access.log;
  error_log /home/deploy/apps/web_app/current/log/nginx.error.log info;
  access_log /home/deploy/apps/web_app/current/log/timings.log timings;

  location ^~ /assets/ {
    #gzip_static on;
    expires max;
    add_header Cache-Control public;
    add_header Vary Accept-Encoding;
    access_log off;
  }

  try_files $uri/index.html $uri @puma;
  location @puma {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_request_buffering off;
    proxy_pass http://puma;
  }

  error_page 500 502 503 504 /500.html;

  client_body_buffer_size 8K;
  client_max_body_size 10M;
  client_header_buffer_size 1k;
  large_client_header_buffers 2 16k;
  client_body_timeout 10s;
  keepalive_timeout 10;

  add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
}

puma.rb

threads 1, 6

port 3000

environment 'production'

workers 8

preload_app!

before_fork    { ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord) }
on_worker_boot { ActiveRecord::Base.establish_connection        if defined?(ActiveRecord) }

plugin :tmp_restart

1 Answer


Check the true response time of the backend

The backend may claim it is answering/rendering in ~130ms, but that doesn't mean the full response actually leaves the server that quickly. You can define a log format like this:

log_format timings '$remote_addr - $time_local '
    '"$request" $status '
    '$request_time $upstream_response_time';

and apply it with:

access_log /var/log/nginx/timings.log timings;

This will tell you how long the backend actually takes to respond, as measured by nginx rather than by Rails.
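
For illustration only (the values below are made up), an entry in that log looks like:

203.0.113.7 - 04/Dec/2018:19:15:02 +0000 "GET / HTTP/2.0" 200 0.180 0.178

The last two fields are $request_time and $upstream_response_time, both in seconds. $upstream_response_time only covers the time nginx spends waiting for Puma, while $request_time also includes sending the response to the client. If the two are roughly equal, the extra time is spent in the backend; if $request_time is noticeably larger, look at buffering in nginx or at the network between nginx and the browser.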

Other possible ways to debug

  • Check the raw latency between you and the server (e.g. with ping, or by requesting the page from the server itself)
  • Check how fast static content is served, to get a baseline (see the curl sketch below)
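
A rough way to do both checks is to run curl on the VPS itself, so the latency between you and the server is taken out of the picture (just a sketch; the asset path is an example, use one that exists in your app):

# Dynamic page rendered by Rails: time to first byte and total time, in seconds.
curl -o /dev/null -s -w "ttfb: %{time_starttransfer}s  total: %{time_total}s\n" https://web_app.com/

# Static asset served directly by nginx: should be much faster and gives you a baseline.
curl -o /dev/null -s -w "ttfb: %{time_starttransfer}s  total: %{time_total}s\n" https://web_app.com/assets/application.css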

Use caching

proxy_cache_path goes in the http context; the cache is then enabled with proxy_cache in the server or location block that proxies to the backend:

# http context
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

# server / location block
proxy_cache my_cache;

If your backend sends a Last-Modified header and supports conditional (If-Modified-Since) requests:

proxy_cache_revalidate on;

Disable buffering

You can instruct nginx to forward the responses from the backend without buffering them. This might reduce response time:

proxy_buffering off;

Since version 1.7.11 there is also a directive that tells nginx to forward the request body to the backend without buffering it:

proxy_request_buffering off;
  • Added the timing log. Now Rails completes rendering in 174ms, and the nginx timings log shows request_time 180ms and the same upstream_response_time 180ms – bmalets Dec 04 '18 at 19:19
  • Buffering is disabled too – bmalets Dec 04 '18 at 19:31
  • I am not sure about `proxy_cache` because I have dynamic content on pages and CSRF tokens inside HTML forms. – bmalets Dec 04 '18 at 19:37
  • Is this ~170ms delay OK for my Nginx configuration and VPS properties? – bmalets Dec 04 '18 at 19:40
  • How high is the raw latency between you and your server? (ping) Have you checked by doing wget on localhost? – Sheppy Dec 04 '18 at 21:14
  • maximum ping time is `icmp_seq=20 ttl=58 time=20.066 ms` – bmalets Dec 05 '18 at 09:30
  • 170ms with only 20ms raw latency seems like a lot. I updated my answer: how fast is static content served? Have you disabled both request and normal buffering (see edit)? If none of this helps, you won't get around finding a way to cache some of the content – Sheppy Dec 05 '18 at 10:46
  • btw 300ms isn't the end of the world in terms of user experience, just saying – Sheppy Dec 05 '18 at 10:51
  • Just FYI, according to https://www.nginx.com/blog/nginx-caching-guide/, "NGINX does not cache responses if proxy_buffering is set to off. It is on by default." – M. Faraz Feb 22 '21 at 08:31
  • Beware that `proxy_buffering off;` puts more load on NGINX. – Константин Ван Jun 07 '21 at 04:36