
Please help me tune my servers: when the number of simultaneous connections goes above roughly 500, my httpd just freezes and stops responding.

I have been running the same configuration since 2008. I recently added some RAM, so I now have 32 GB on each of the two servers.

The first server runs RHEL 5 64-bit (kernel 2.6.18-53.1.4.el5xen) and delivers a JNLP file to clients, who then connect to the second server.

Apache 2.2.3 httpd.conf:

<IfModule prefork.c>
StartServers       8
MinSpareServers   10
MaxSpareServers   75
ServerLimit      1100
MaxClients       1100
MaxRequestsPerChild  4000
</IfModule>

<IfModule worker.c>
StartServers         2
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>
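
Only one of these MPM blocks should actually be in effect at a time; if it helps, I believe which one applies can be checked like this (assuming the stock /usr/sbin/httpd path on RHEL 5):

# Show which MPM the running binary was built with (Prefork vs. Worker)
/usr/sbin/httpd -V | grep -i mpm

# Under load, count the running httpd processes (with prefork, roughly one per active connection)
pgrep httpd | wc -l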

The Java 1.6.0_27 process runs with the following parameters:

java -server -Xmx1280M -XX:MaxPermSize=256M -Djava.awt.headless=true ...

Tomcat server.xml has the following Connector parameters:

 ...   connectionTimeout="12000" maxSpareThreads="250" protocol="AJP/1.3" 
 maxHttpHeaderSize="8192" disableUploadTimeout="true" minSpareThreads="25" 
 useBodyEncodingForURI="true" maxThreads="500" acceptCount="100" 
 enableLookups="false" ...
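
If I understand correctly, the AJP connector's maxThreads="500" caps how many requests Tomcat can serve at once, regardless of what httpd accepts. I suppose whether that limit is being hit could be checked like this (assuming the connector listens on the default AJP port 8009, and <tomcat_pid> stands for the Tomcat process id):

# Established connections to the AJP connector (8009 assumed as the default AJP port)
netstat -ant | grep ':8009' | grep -c ESTABLISHED

# Number of live threads in the Tomcat JVM, to compare against maxThreads="500"
grep Threads /proc/<tomcat_pid>/status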

The second server runs only a single JVM (Java process); the hardware is the same as above. It freezes when more than about 600 users are connected. Yesterday I changed -Xmx from 4000m to 26g, hoping this would let the process use more RAM, but I don't see it using more than 4 GB in top.

java -server -Xmx26g -Djava.awt.headless=true -Dfile.encoding=UTF-8 -jar

top - 01:34:10 up 252 days,  8:02,  1 user,  load average: 0.00, 0.02, 0.00
Tasks: 127 total,   1 running, 126 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.8%us,  0.2%sy,  0.0%ni, 97.7%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  37049860k total,  3225916k used, 33823944k free,   331836k buffers
Swap: 10223608k total,        0k used, 10223608k free,  2409808k cached

top - 03:57:04 up 252 days,  8:02,  1 user,  load average: 0.01, 0.02, 0.00
Tasks: 145 total,   1 running, 144 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.8%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  37108368k total, 36117600k used,   990768k free,   218364k buffers
Swap:  2031608k total,      120k used,  2031488k free, 33518948k cached
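
If I understand correctly, -Xmx only sets an upper limit and the heap is committed lazily as it grows, so top may simply be showing that the application has never needed more than about 4 GB (the 36 GB "used" in the second snapshot is mostly page cache). I guess the actual heap usage could be checked with the JDK 1.6 tools, where <pid> stands for the Java process id:

# Heap occupancy (Eden/Survivor/Old/Perm utilisation in %), sampled every 5 seconds
jstat -gcutil <pid> 5000

# One-shot summary of configured vs. used heap (MaxHeapSize should reflect -Xmx26g)
jmap -heap <pid>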

Please help me solve this. I am planning to upgrade all of this, but I'm not sure whether my Java application will work with newer versions of RHEL, Apache, Tomcat, and the JVM.

  • What does the JVM do when the system gets unresponsive? What does the underlying operating system do? – Thorbjørn Ravn Andersen Aug 11 '16 at 19:01
  • Is the JVM 64 bit? Also, 1.6 is kinda old, I'd be a bit worried about security issues... – ppeterka Aug 11 '16 at 19:07
  • The JVM keeps working, but the operating system becomes very slow; the second server works as a web conferencing server. Both servers' operating systems are OK. – Olzh Aug 11 '16 at 19:08
  • `my httpd just freezes and stop responding.` Ok I'm confused... The Tomcat, or the Apache in front of it is causing the issues? – ppeterka Aug 11 '16 at 19:09
  • Yes, everything is 64-bit. I know it's old and I'm planning to update it in the near future, but I need to tune Apache and the heap sizes to make use of the new RAM and give them a performance push. – Olzh Aug 11 '16 at 19:10
  • Yes, both Tomcat and Apache have problems when there are more than 500 connections at one time. – Olzh Aug 11 '16 at 19:11

1 Answer


Most likely this is your issue on the Tomcat side

maxThreads="500"

in the server.xml. Raise it; that will allow more concurrent connections.

maxThreads

The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.

From Apache Tomcat documentation

Also, this part explains why 600 clients can connect:

acceptCount="100"

This allows another 100 clients to wait in a queue until they are served.

acceptCount

The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.

Note: In order to be able to verify the memory situation, you can connect to Tomcat using JMX (jconsole, jvisualvm, etc...), and also, you can check a lot of the actual settings, as described in the Monitoring Tomcat FAQ.
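
If JMX is not enabled yet, one minimal (unauthenticated, so only for a trusted network) way is to add the standard com.sun.management.jmxremote properties to the Tomcat JVM's startup options, for example via CATALINA_OPTS; port 9010 below is just an example, any free port will do:

# Example only: unauthenticated JMX on port 9010 - restrict this to a trusted network
CATALINA_OPTS="$CATALINA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"

After restarting Tomcat, jconsole or jvisualvm can attach to host:9010 and show the live thread count (to confirm the raised maxThreads is in effect) as well as the real heap usage.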

  • How about the other Xmx options and the httpd.conf options? – Olzh Aug 11 '16 at 19:22
  • @Olzh: do you have proof that the heap is full? What does connecting through JMX show in that regard? If you have GC logging turned on, what does it show? – ppeterka Aug 11 '16 at 19:24
  • I have increased it to 1500; tomorrow there will be a large session. How can I check whether Tomcat is actually using this parameter or not? – Olzh Aug 11 '16 at 19:25
  • @Olzh just connect through JMX - you'll be able to see what is going on. – ppeterka Aug 11 '16 at 19:29
  • No, it's just my assumption; I had some logs asking to increase the heap size a few months ago. Tell me where to check, thank you for helping me @ppeterka – Olzh Aug 11 '16 at 19:29
  • @Olzh check this question: [Connecting remote tomcat JMX instance using jConsole](http://stackoverflow.com/questions/1263991/connecting-remote-tomcat-jmx-instance-using-jconsole) It has some details on how to start Tomcat to be able to connect using `jconsole`. That shows all the data you need. – ppeterka Aug 11 '16 at 19:32