
I am trying to access a Kibana application deployed behind nginx, but I am getting the error below.

URL: http://127.0.0.1/kibana-3.1.2

2015/02/01 23:05:05 [alert] 3919#0: *766 768 worker_connections are not enough while connecting to upstream, client: 127.0.0.1, server: , request: "GET /kibana-3.1.2 HTTP/1.0", upstream: "http://127.0.0.1:80/kibana-3.1.2", host: "127.0.0.1"

Kibana is deployed at /var/www/kibana-3.1.2

I have tried increasing worker_connections, but still no luck; in that case I get the following instead.

2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)

nginx.conf:

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
        worker_connections 768;
        # multi_accept on;
}

And the following in the location directive:

location /kibana-3.1.2 {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1;
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Headers *;
}
dReAmEr

4 Answers


Old question, but I had the same issue and the accepted answer didn't work for me.

I had to increase the number of worker_connections, as stated here.

/etc/nginx/nginx.conf

events {
    worker_connections 20000;
}
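The OP's accept4() failed (24: Too many open files) alerts point at the related file-descriptor limit, which caps how many connections each worker can actually open. A minimal sketch that raises both together (the numbers are illustrative, not tuned recommendations):

# main context: raise the per-worker open-file limit
worker_rlimit_nofile 40000;

events {
    # must stay below the open-file limit above
    worker_connections 20000;
}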
RASG
    It's definitely not a solution to the problem. Temporary workaround, but not a solution. I came across the same problem, I have 4 environments with exactly the same config and on one of them this problem... – Krystian Oct 30 '20 at 16:22
  • This helped me. This may not be a DIRECT solution, but may be used with other solutions as well. – enjoi4life411 Jun 29 '23 at 17:26

Not quite enough info to say definitively, but based on the config you've provided, it looks like you have a loop. You're proxying requests to localhost:80, but NGINX is most likely listening on port 80 itself. So NGINX connects to itself over and over, hence the errors about too many open files.

Also, Kibana doesn't have any server-side code, so proxy_pass isn't appropriate here. Something like the following should be enough:

root /var/www/
location /kibana-3.1.2 {
    try_files $uri $uri/ =404;
}

With that being said, if you intend for this to be accessible from the public internet, you should protect it with a password and you should use proxy_pass in front of elasticsearch to control what requests can be made to it. But that's a different story :)
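For what it's worth, a minimal sketch of that idea, assuming Elasticsearch listens on 127.0.0.1:9200 and that a password file exists at /etc/nginx/.htpasswd (both assumptions about your setup):

location /es/ {
    # HTTP basic auth; create the file with htpasswd or similar
    auth_basic "Restricted Elasticsearch";
    auth_basic_user_file /etc/nginx/.htpasswd;
    # expose Elasticsearch only through this authenticated path
    proxy_pass http://127.0.0.1:9200/;
}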

chrskly
  • Thanks for your comment; yes, you are right, there is no need for proxy_pass here. So now I just have root /var/www/; inside the location directive and it's working – dReAmEr Feb 02 '15 at 03:57
  • Hello, I know this is quite old, but I have the same problem. By doing this suggestion, I got a forbidden error: "*23053 directory index of "/opt/bench/erpnext/sites/" is forbidden". Any hint, please? – jstuardo Mar 28 '19 at 11:53
  • @jstuardo Directory permissions. – Jivan Pal Oct 18 '22 at 23:26
  • @JivanPal Good and obvious answer. – jstuardo Oct 23 '22 at 20:14

Setting worker_connections to a higher limit helped me. Just to add to @RASG's answer: I came here after using Apache's load-testing tool, ab, and started to see SSL handshake failures with batches of 500 concurrent requests (an example invocation is sketched below). Looking at the NGINX logs, I noticed errors similar to the OP's: ...not enough worker_connections.

Keep in mind that more worker connections mean more load on the server, so even though increasing the count stopped the error message, the site became EXTREMELY bogged down. Finding that sweet spot depends on your server's condition; I will definitely be adding a CPU (or a new server instance). Running htop (I'm on Debian/Ubuntu) is how I monitored how the server "adjusted" to the increase. As mentioned here: https://ubiq.co/tech-blog/how-to-fix-nginx-worker-connections-are-not-enough/

Please note, the number of worker connections is limited by the amount of memory available on your server. Also, as the number of worker connections increases, so will the memory consumption of the NGINX server.

In my case RAM barely moved, but CPU usage climbed dramatically (as observed in htop).
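For reference, the kind of ab run described above might look like this (the URL and counts are placeholders):

# 10000 requests total, 500 at a time
ab -n 10000 -c 500 https://example.com/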

enjoi4life411

If you are running this in Docker containers with a connection to a PHP container, change fastcgi_pass 127.0.0.1:9000; to fastcgi_pass php:9000; in your nginx config or site config. This is because 127.0.0.1 points at the nginx container itself, so nginx tries to reach PHP in its own container instead of routing to the other one.
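A minimal sketch of that change, assuming the PHP-FPM service is named php in your Compose file (hypothetical name):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # "php" resolves via Docker's embedded DNS to the PHP-FPM container;
    # 127.0.0.1 here would point back at the nginx container itself
    fastcgi_pass php:9000;
}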