16

I'm working on a Debian server with Tomcat 7 and Java 1.7. The application receives several TCP connections, and each TCP connection is an open file for the Java process.

Looking at /proc/<pid of java>/fd I found that sometimes the number of open files exceeds 1024. When this happens, I find the stack trace _SocketException: Too many open files_ in the catalina.out log.

Everything I find about this error points to the ulimit. I have already changed it, but the error keeps happening. Here is the config:

at /etc/security/limits.conf

root    soft    nofile  8192
root    hard    nofile  8192

at /etc/sysctl.conf

fs.file-max = 300000

the ulimit -a command returns:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 8192
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

But when I check the limits of the Java process, the open files limit is only 1024.

at /proc/<pid of java>/limits

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             32339                32339                processes 
Max open files            1024                 1024                 files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       32339                32339                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        

How can I increase the Max open files limit for the Java process?

slacker

3 Answers

20

I just put the line `ulimit -n 8192` inside catalina.sh, so when I run catalina start, Java runs with the limit specified above.
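
For reference, a minimal sketch of that change (the exact placement is an assumption; the point is that the line runs before the script launches the JVM):

# near the top of catalina.sh
# raise the per-process open-file limit for the java process this script starts
ulimit -n 8192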

slacker

  • You may need to add a line in `/etc/security/limits.conf` before it will let you increase the ulimit this way. – Shadow Man Jul 12 '13 at 18:40
  • I think we should avoid modifying catalina.sh for that. When you upgrade Catalina, you can forget this change on the new version. – mcoolive Feb 20 '15 at 09:26
  • As far as I understand, the limits.conf file is applied by PAM upon login. So it might be possible that if you launch Tomcat via your init system, the PAM "limit" configuration is not applied and you fall back to the default of 1024. You need to check your init system's manual for how to set this limit, or use the trick in the answer here. – Huygens Nov 23 '15 at 12:41
  • As @mcoolive mentioned, catalina.sh isn't the best place for this. Wrap catalina.sh in your own script that sets the ulimit and then starts Tomcat. This way, when you upgrade Tomcat, you won't lose your ulimit setting. – threejeez Jul 31 '16 at 19:25
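
Building on the wrapper idea from the last comment above, a minimal sketch of such a script (the catalina.sh path and the script name are assumptions; adjust them to your installation):

#!/bin/sh
# hypothetical wrapper, e.g. /usr/local/bin/start-tomcat.sh
# raise the open-file limit for this shell; the JVM started by catalina.sh inherits it,
# and the change survives Tomcat upgrades because catalina.sh itself is untouched
ulimit -n 8192
exec /opt/tomcat/bin/catalina.sh start "$@"
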
11

The ulimit values are assigned at session start-up time, so changing /etc/security/limits.conf will not have any effect on processes that are already running. Non-login processes inherit the ulimit values from their parent, much like the inheritance of environment variables.

So after changing /etc/security/limits.conf, you'll need to log out and log back in (so that your session gets the new limits), and then restart the application. Only then will your application be able to use the new limits.
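
To verify that the restarted process actually picked up the new limit, something like this works (a sketch; the pgrep pattern assumes a single Tomcat JVM whose command line contains org.apache.catalina):

# limit of the current shell after logging back in
ulimit -n

# limit and current usage of the running Tomcat JVM
PID=$(pgrep -f org.apache.catalina)
grep 'open files' /proc/$PID/limits
ls /proc/$PID/fd | wc -l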

telcoM

1

Setting a higher ulimit may be completely unnecessary, depending on the workload/traffic that Tomcat/httpd handles. Linux creates a file descriptor per socket connection, so if Tomcat is configured to use the mod_jk/AJP protocol as a connector, you may want to check whether the maximum allowed connections is too high, or whether connectionTimeout or keepAliveTimeout is too high. These parameters play a huge role in the consumption of OS file descriptors. Sometimes it may also be feasible to limit the number of Apache httpd/nginx connections if Tomcat is fronted by a reverse proxy; I once reduced the ServerLimit value in httpd to throttle incoming requests during a gate-rush scenario. All in all, adjusting ulimit may not be a viable option, since your system may end up consuming however many descriptors you throw at it. You will have to come up with a holistic plan to solve this problem.
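
As a rough way to see how much of the descriptor budget is going to sockets, and to compare that against the connector settings mentioned above, something like this can help (the pgrep pattern and the server.xml path are assumptions; adjust to your installation):

# count the Tomcat JVM's open descriptors and how many of them are sockets
PID=$(pgrep -f org.apache.catalina)
ls /proc/$PID/fd | wc -l
ls -l /proc/$PID/fd | grep -c socket

# then compare against the connector limits configured in server.xml
grep -E 'maxConnections|connectionTimeout|keepAliveTimeout' /opt/tomcat/conf/server.xml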

ChaitanyaBhatt
  • Hello, sorry but I couldn't find any other useful information about this. What do you mean by `Linux creates a file descriptor per socket connection`? Does it mean that the "too many open files" error could be due to too many incoming connections at once? And where is the mod_jk/AJP protocol configured as a connector? – ocramot Apr 26 '16 at 09:07