
I'm running a VPS (nginx 1.18.0 on Ubuntu) in Charlotte, NC (QEMU Virtual CPU version 2.5+, 2399.998 MHz, up to 3.2 GHz turbo; 2 GB RAM), and the TTFB of my WordPress sites isn't great. It's around 200-300 ms in America, a bit worse in Canada and Europe, and way off in Asia/Australia.

I talked to VPS support; they moved the site to FastCGI (Nginx + PHP-FPM) and added some code to nginx.conf, but it only improved TTFB slightly. I also enabled HTTP/2 via ISPmanager.
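
The per-site vhost configs live under /etc/nginx/vhosts/ and aren't included below; as far as I understand, the PHP handoff they set up is roughly this shape (the domain, docroot, certificate paths and PHP-FPM socket here are placeholders, not my exact values):

server {
    listen 443 ssl http2;        # HTTP/2 enabled via the control panel
    server_name example.com;                              # placeholder
    root /var/www/example.com;                            # placeholder
    index index.php;

    ssl_certificate     /etc/ssl/example.com/fullchain.pem;   # placeholder
    ssl_certificate_key /etc/ssl/example.com/privkey.pem;     # placeholder

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # PHP-FPM socket; the real path depends on PHP version/pool
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    }
}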

Anyway, here's the full nginx.conf:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    # gzip on;

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # Optimization
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    fastcgi_max_temp_file_size 0;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    fastcgi_connect_timeout 500;
    fastcgi_send_timeout 500;
    fastcgi_read_timeout 500;
    client_header_timeout 1m;
    client_header_buffer_size 2k;
    client_body_buffer_size 256k;
    ssl_buffer_size 4k;
    large_client_header_buffers 4 8k;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    include /etc/nginx/vhosts/*/*.conf;
    client_max_body_size 1024m;
    server {
        server_name localhost;
        disable_symlinks if_not_owner;
        listen 80;
        listen [::]:80;
        include /etc/nginx/vhosts-includes/*.conf;
        location @fallback {
            error_log /dev/null crit;
            proxy_pass http://127.0.0.1:8080;
            proxy_redirect http://127.0.0.1:8080 /;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            access_log off;
        }
    }
}

The # Optimization block is what was added by VPS support.

I played with the numbers a bit, but to no avail. What can you suggest?
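
One thing I keep reading about but haven't tried yet is FastCGI page caching in front of PHP-FPM, roughly like this (the cache path, zone name and skip rules are taken from tutorials, not from my server). Would that be the right direction, or is it mostly a server/location problem?

# http-level (nginx.conf or a conf.d include): define the cache
fastcgi_cache_path /var/cache/nginx/wordpress levels=1:2 keys_zone=WPCACHE:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# server-level (per-site vhost): skip the cache for POSTs, query strings and logged-in users
set $skip_cache 0;
if ($request_method = POST) { set $skip_cache 1; }
if ($query_string != "") { set $skip_cache 1; }
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") { set $skip_cache 1; }

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;   # same placeholder socket as above
    fastcgi_cache WPCACHE;
    fastcgi_cache_valid 200 301 302 60m;
    fastcgi_cache_bypass $skip_cache;
    fastcgi_no_cache $skip_cache;
}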

Here's the detailed TTFB breakdown (measured with tools.keycdn.com): click

nginxnoob
  • You have mentioned nothing about what your websites *are*. NGINX can serve both static and dynamic resources; for the latter, the bulk of the TTFB is virtually always in the "app", not NGINX at all. Furthermore, it's natural for TTFB to get worse with geographical distance from the origin server. If this bothers you, you have to use a CDN. – Danila Vershinin Sep 27 '21 at 19:21
  • Sorry, my websites are on WordPress. I know about the geographical stuff, but A) it's really bad, as you can see from the image, B) sites can have low TTFB even for remote connections without using a CDN, and C) a CDN doesn't help much from what I've tested. – nginxnoob Sep 28 '21 at 07:06
  • I just want to know whether I can do anything to improve TTFB, or whether it's a server/location problem. – nginxnoob Sep 28 '21 at 07:26
  • Check [this analysis](https://stackoverflow.com/a/68402917/2834978), maybe it gives a different point of view on the problem. – LMC Oct 10 '21 at 23:55

0 Answers