
We are facing an issue where the resident memory of our Java process grows gradually. The JVM is configured with -Xmx4096m and -XX:MaxPermSize=1536m, there are ~1500 active threads, and -Xss is set to 256k.

When the application server (JBoss 6.1) starts, the resident memory used is ~5.6 GB (we have been monitoring it with the top command). It gradually grows (around 0.3 to 0.5 GB per day) until it reaches ~7.4 GB, at which point the kernel's OOM killer kills the process due to a shortage of RAM (the server has 9 GB of RAM).

We have been regularly reviewing thread dumps, and there is no sign of a thread leak. We are still unable to figure out where this extra memory is coming from.

The pmap output shows a number of anon blocks (apart from the regular blocks for the stack and heap), mostly arenas of 64 MB, which are unaccounted for by the heap, perm gen and thread stacks.

In the heap dump we have also looked for DirectByteBuffer and sun.misc.Unsafe objects, which are commonly used for non-heap memory allocation, but both their number and their total capacity look nominal. Is it possible that there is still unfreed native memory even after these objects are GCed? Are there any other classes that can end up consuming non-heap memory?
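For reference, this is roughly how we cross-check the direct and mapped buffer pools at runtime (a minimal sketch using the standard BufferPoolMXBean, which assumes Java 7+; the class name and output format are just illustrative):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferPoolCheck {
    public static void main(String[] args) {
        // The "direct" pool covers DirectByteBuffers, "mapped" covers memory-mapped files
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}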

Our application does have some native calls of its own, and it is possible that some third-party libraries make them as well.

Any ideas on what could be causing this? Are there any other details or tools that could help debug such an increase, or any known issues we should look out for? Platform: JBoss 6.1 running on CentOS 5.6.

  • Assuming you've read Oracle's troubleshooting guide for memory leaks: http://www.oracle.com/technetwork/java/javase/tools-141261.html – gknicker Dec 12 '14 at 08:17
  • This question is related: http://stackoverflow.com/questions/26041117/growing-resident-memory-usage-rss-of-java-process – Lari Hotari Feb 25 '16 at 19:16

2 Answers


There is a known problem with Java and glibc >= 2.10 (includes Ubuntu >= 10.04, RHEL >= 6).

The cure is to set this environment variable: export MALLOC_ARENA_MAX=4

There is an IBM article about setting MALLOC_ARENA_MAX: https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en

This blog post says that "resident memory has been known to creep in a manner similar to a memory leak or memory fragmentation."

Search for MALLOC_ARENA_MAX on Google or Stack Overflow for more references.

You might also want to tune other malloc options to optimize for low fragmentation of allocated memory:

# tune glibc memory allocation, optimize for low fragmentation
# limit the number of arenas
export MALLOC_ARENA_MAX=2
# disable dynamic mmap threshold, see M_MMAP_THRESHOLD in "man mallopt"
export MALLOC_MMAP_THRESHOLD_=131072
# release free memory at the top of the heap back to the OS sooner (M_TRIM_THRESHOLD)
export MALLOC_TRIM_THRESHOLD_=131072
# extra padding kept when the heap is grown or trimmed (M_TOP_PAD)
export MALLOC_TOP_PAD_=131072
# maximum number of allocations served via mmap (M_MMAP_MAX)
export MALLOC_MMAP_MAX_=65536
– Lari Hotari

The increase in RSS usage might be caused by a native memory leak. A common culprit is a ZipInputStream/GZIPInputStream that is never closed.

A ZipInputStream is typically opened by calling Class.getResource/ClassLoader.getResource and then openConnection().getInputStream() on the resulting java.net.URL instance, or by calling Class.getResourceAsStream/ClassLoader.getResourceAsStream. One must ensure that these streams always get closed.
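As a minimal sketch of the pattern (assuming Java 7+ so try-with-resources is available; the class and helper names are just illustrative):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ResourceReader {
    // Illustrative helper: reads a classpath resource while guaranteeing that the
    // underlying stream (often backed by a native zlib Inflater) is closed.
    static byte[] readResource(Class<?> owner, String name) throws IOException {
        try (InputStream in = owner.getResourceAsStream(name)) {
            if (in == null) {
                throw new IOException("Resource not found: " + name);
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } // the stream is closed here even if reading throws
    }
}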

You can use jemalloc to debug native memory leaks by enabling its sampling profiler via settings in the MALLOC_CONF environment variable. Detailed instructions for using jemalloc to track down a native memory leak in a Java application are available in this blog post: http://www.evanjones.ca/java-native-leak-bug.html

The same blog also contains information about another native memory leak related to ByteBuffers.

– Lari Hotari