
I have an Ubuntu server running nginx/PHP with a couple of virtual hosts.

The problem is that after some time all sites become horribly laggy, and then I get "404 Not Found" even though the files are there.

If I do "service nginx restart", everything goes back to normal.

I don't really know where the problem is (nginx or PHP, maybe?) or how to check. I'm guessing that my server runs out of the sockets that handle the connections, but I have a really strong machine and the traffic is fairly small, so it should handle all 3 sites with no problem.

RAM usage is around 20%, and there is ~65% storage left on the partition. Could someone help me analyze and fix my problem?
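As a starting point for the analysis, here is a minimal diagnostic sketch for checking whether socket/file-descriptor exhaustion is actually occurring. It assumes PHP runs via php-fpm (typical behind nginx); the process names are assumptions to adjust for the actual setup:

```bash
# Summary of socket usage by state (established, time-wait, orphaned, ...)
ss -s

# System-wide file-descriptor usage: allocated / unused / maximum
cat /proc/sys/fs/file-nr

# Per-process soft limit on open files for the current shell
ulimit -n

# Count open file descriptors held by each nginx and php-fpm process
# (run as root so /proc/<pid>/fd is readable)
for pid in $(pgrep -d ' ' 'nginx|php-fpm'); do
    printf 'pid %s: %s open FDs\n' "$pid" "$(ls /proc/$pid/fd | wc -l)"
done
```

Running these both while the sites are healthy and again once they start lagging, then comparing the counts, would show whether descriptors are leaking over time.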

  • One idea / thought: Are you using any websocket library in your php sites? Are websockets closed properly, releasing the underlying socket file descriptors? No matter how strong your machine, file descriptors (sockets) are a limited resource by design... – Myst Jul 13 '16 at 01:42
  • It seems to me that you should concentrate on the PHP side of things, as it's more likely things are funky on that side. An nginx issue like that would have been noticed by many. Also, nginx is still responsive; it's alerting you to the fact that the resource it required (the PHP application) couldn't be found - I think this supports the idea that the PHP applications are failing somehow. – Myst Jul 13 '16 at 01:45
  • Is there any way to check my socket resources? How many are used and how/where? – erexo Jul 13 '16 at 01:47
  • As I said, I have three different sites running on that nginx, so looking file by file will take ages – erexo Jul 13 '16 at 01:48
  • Yes, there is a way to look at the resources used by a process, but this is OS specific. A quick Google search will tell you how to do this for [unix](http://www.cyberciti.biz/faq/howto-linux-get-list-of-open-files/), [mac](http://stackoverflow.com/questions/20974438/get-list-of-open-files-descriptors-in-os-x), etc. But consider that nginx probably would have returned a 503 error or some other busy signal if the issue was the number of available file handles. – Myst Jul 13 '16 at 01:54
  • Thanks for your reply, but I don't really know how it could help me. I've already checked that, and I have about 9 processes opened by www-data (nginx), and each uses a couple of sockets + log files. When everything goes down, how do I determine which one is failing, and when it does, how do I check why? – erexo Jul 13 '16 at 02:05
  • Did you try disabling one site at a time to see if the issue persists (I assume the issue is the same on your testing machine and that it isn't limited to the production environment)? It could help you assess which one is causing the issue, so you only need to review that site's code. – Myst Jul 13 '16 at 02:07
  • Yes, it's not related to the production environment (I haven't messed with the settings and configuration), although I've read that you can extend socket limits and make Unix manage sockets smarter, but it could be a PHP problem. Also, disabling one of the sites is impossible; even if I did, I still wouldn't know where exactly it drains all the traffic. It's not a critical error, just an annoying bug that shows up from time to time. – erexo Jul 13 '16 at 02:12
  • I'm sorry I couldn't help. If nothing comes up in the error logs and you can't isolate the issue, I'm not sure I know what to recommend. [Having a larger open file limit (Linux)](https://easyengine.io/tutorials/linux/increase-open-files-limit/) wouldn't necessarily resolve this one, although it might help with how often the issue creeps up (a sketch of inspecting and raising the limit follows this thread). – Myst Jul 13 '16 at 02:18
  • Okay, thanks so much for your time. – erexo Jul 13 '16 at 02:32
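For reference, a sketch of how the open-file limits mentioned above could be inspected and raised on Ubuntu. The specific values and the www-data user are assumptions; check them against the actual configuration:

```bash
# Limits currently applied to the running nginx master process
# (the PID file path may differ; /run/nginx.pid is also common)
grep 'open files' /proc/$(cat /var/run/nginx.pid)/limits

# Kernel-wide maximum number of file handles
sysctl fs.file-max

# To raise the per-user limit for www-data, add to /etc/security/limits.conf
# and restart the services:
#   www-data  soft  nofile  16384
#   www-data  hard  nofile  32768

# nginx can also raise its own worker limit in nginx.conf:
#   worker_rlimit_nofile 16384;
```

As noted in the comments, raising the limits would not fix a descriptor leak; it would only lengthen the interval before the symptoms reappear.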

0 Answers