
After upgrading to JBoss AS 5.1, running JRE 1.6.0_17 on CentOS 5 Linux, the JVM process runs out of memory after about 8 hours (it hits the 3G per-process limit on a 32-bit system). This happens on both servers in the cluster under moderate load. Java heap usage settles down, but the overall JVM footprint just keeps growing. The thread count is very stable and maxes out at 370 threads, with the thread stack size set at 128K.

The footprint of the JVM reaches 3G, then it dies with:

  java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?

  Internal Error (allocation.cpp:117), pid=8443, tid=1667668880
  Error: ChunkPool::allocate

Current JVM memory args are:

-Xms1024m -Xmx1024m -XX:MaxPermSize=256m -XX:ThreadStackSize=128

Given these settings, I would expect the process footprint to settle in at around 1.5G. Instead, it just keeps growing until it hits 3G.
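For reference, here is my rough back-of-the-envelope for that 1.5G estimate; the code cache and miscellaneous JVM overhead figure is an assumption, not a measured value:

    #!/bin/sh
    # Rough expected footprint from the flags above (all figures in MB).
    HEAP=1024                      # -Xms / -Xmx
    PERM=256                       # -XX:MaxPermSize
    STACKS=$((370 * 128 / 1024))   # 370 threads * 128K stacks, ~46 MB
    MISC=150                       # assumed code cache + malloc'd JVM overhead
    echo "expected footprint: ~$((HEAP + PERM + STACKS + MISC)) MB"   # ~1476 MB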

It seems none of the standard Java memory tools (Eclipse MAT, jmap, etc.) can tell me what on the native side of the JVM is eating all this memory. pmap on the PID just gives me a bunch of [ anon ] allocations, which don't really help much. As far as I can tell, this memory problem occurs with no JNI libraries and no java.nio classes loaded.
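One idea is to snapshot pmap periodically and diff consecutive snapshots to see which [ anon ] regions are actually growing; a rough sketch, where the PID and interval are just examples:

    # Capture extended pmap output for the JVM every 30 minutes.
    PID=8443                       # example PID taken from the crash above
    while true; do
        pmap -x "$PID" > "pmap.$(date +%Y%m%d-%H%M%S).txt"
        sleep 1800
    done
    # Later, diff the two most recent snapshots to see which regions grew:
    # diff $(ls -t pmap.*.txt | head -2 | tac)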

How can I troubleshoot the native/internal side of the JVM to find out where all the non-heap memory is going?

Thank you! I am rapidly running out of ideas, and restarting the app servers every 8 hours is not going to be a very good solution.

walton
  • What do you mean by "running out of memory"? What are the symptoms? Exceptions? If so, do you get stack traces? – skaffman Jan 12 '10 at 21:19
  • Yes, the JVM dies once it has grown to 3G, the per-process max on a 32-bit Linux system. I added the error to the question. Thanks. – walton Jan 12 '10 at 21:28
  • Wow, a hotspot crash.... that should *never* happen, not even when it's memory starved. Sadly, the non-heap memory pools are not exposed via the profiling interface, so the contents remain a mystery. – skaffman Jan 12 '10 at 22:36
  • This is interesting: http://www.codingthearchitecture.com/2008/01/14/jvm_lies_the_outofmemory_myth.html – skaffman Jan 12 '10 at 22:41
  • This question may help you: http://stackoverflow.com/questions/1888351 The OP in that case was running on HP, so I'm not marking your question as a dupe, but I suspect the answer is the same (upgrade the JVM). My response points you to *pmap*, which is a tool to examine the virtual memory space. – kdgregory Jan 13 '10 at 13:33
  • Thanks. pmap just gives me scattered [ anon ] allocations, which contribute to the 3G process limit. I think I am looking at one of two possible issues: 1. Java objects not releasing native resources (most probable); 2. a JVM native memory leak (not probable, as I am seeing this on both 1.6.0_16 and 1.6.0_17). Trying to track down #1 has been the challenging part. – walton Jan 14 '10 at 00:34
  • See http://stackoverflow.com/questions/26041117/growing-resident-memory-usage-rss-of-java-process – Lari Hotari Aug 25 '16 at 11:53

3 Answers


As @Thorbjørn suggested, profile your application.

If you need more memory, you could go for a 64-bit kernel and JVM.
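A quick way to check what you are currently running (the exact wording of the java -version output varies between JVM builds):

    # Check kernel and JVM bitness before switching.
    uname -m                        # "x86_64" means a 64-bit kernel
    java -version 2>&1 | tail -1    # a 64-bit JVM reports "64-Bit Server VM" here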

John Doe

Attach with jvisualvm, which ships with the JDK, to get an idea of what is going on; jvisualvm can attach to a running process.
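For a headless server, one option is to run jstatd on the server and connect jvisualvm from a workstation; a minimal sketch, where the policy file path is just an example and the policy must grant jstatd the permissions it needs:

    # On the server: start jstatd so jvisualvm can connect remotely.
    jstatd -J-Djava.security.policy=/tmp/jstatd.policy &

    # On a workstation with a display: start jvisualvm (bundled with JDK 6u7+)
    # and add the server as a remote host.
    jvisualvm &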

Thorbjørn Ravn Andersen
  • But Visual VM only looks at the heap; it tells you little or nothing about anything else except perm gen. – duffymo Jan 18 '13 at 00:27

Walton: I had a similar issue and posted my question/findings at https://community.jboss.org/thread/152698. Please try adding -Djboss.vfs.forceCopy=false to the Java startup parameters to see if it helps. Warning: even if it cuts down the process size, you need to test more to make sure everything is all right.
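One way to add it, assuming the standard JBoss AS 5 layout where bin/run.sh sources bin/run.conf:

    # Append to $JBOSS_HOME/bin/run.conf so run.sh picks up the flag.
    JAVA_OPTS="$JAVA_OPTS -Djboss.vfs.forceCopy=false"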