
I have 2 questions regarding the resident memory used by a Java application.

Some background details:

  • I have a Java application set up with -Xms2560M -Xmx2560M.
  • The Java application is running in a container; Kubernetes (k8s) allows the container to consume up to 4 GB.

The issue:

Sometimes the process is restarted by k8s with error 137 (OOMKilled); apparently the process has reached the 4 GB limit.

Application behaviour:

  • Heap: the application appears to work in cycles: almost all of the heap is used, then freed, then used again, and so on.

This snapshot illustrates it. The Y axis is the free heap memory as a percentage, computed inside the application as `((double) Runtime.getRuntime().freeMemory() / Runtime.getRuntime().totalMemory()) * 100`.

[screenshot: free heap percentage over time]
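For reference, the measurement described above can be reproduced with a minimal, self-contained snippet (the class name here is illustrative):

```java
public class FreeHeapSampler {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Percentage of the currently committed heap that is free.
        // Note: totalMemory() is the committed heap size, which can be below -Xmx.
        double freePct = ((double) rt.freeMemory() / rt.totalMemory()) * 100;
        System.out.printf("Free heap: %.1f%%%n", freePct);
    }
}
```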

I was also able to confirm it using HotSpotDiagnosticMXBean, which allows creating one dump with only reachable objects and another that also includes unreachable objects.

The dump that included unreachable objects was about the size of the -Xmx.
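For completeness, the two kinds of dumps can be produced programmatically like this (a sketch; the file names are arbitrary):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class DumpBoth {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // live = true: dump only objects reachable from GC roots
        diag.dumpHeap("reachable.hprof", true);
        // live = false: also include unreachable (not-yet-collected) objects
        diag.dumpHeap("everything.hprof", false);
    }
}
```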

In addition, this is also what I see when creating a dump on the machine itself: the resident memory can show 3 GB while the size of the dump (taken with jcmd) is 0.5 GB.

First question:

Is this behaviour reasonable, or does it indicate a memory usage issue? It doesn't seem like a typical leak.

Second question:

I have seen other questions trying to understand what the resident memory used by the application is comprised of.
Worth mentioning:

Java using much more memory than heap size (or size correctly Docker memory limit)

And

Native memory consumed by JVM vs java process total memory usage

Not sure if any of this can account for the 1-1.5 GB gap between the -Xmx and the 4 GB k8s limit.

If you were to provide some sort of checklist to close in on the problem, what would it be? (It feels like I can't see the forest for the trees.)

Any free tools that can help? (besides the ones for analysing a memory dump)

user12396421
  • In some cases, a process may consume 20GB memory, while the heap is only 2GB. Whether this is normal or not, depends solely on the application - we can't say without knowing *your* particular application. E.g. it's quite typical for Cassandra, Elasticsearch, or other Java processes dealing with lots of mapped files, to have RSS much more than the heap size. – apangin Dec 19 '20 at 13:26
  • The posts you've mentioned already have detailed answers. They also list a bunch of tools for analyzing native memory issues: Native Memory Tracking, pmap, jemalloc, async-profiler. Take a look at the related [video](https://vimeo.com/364039638) which describes popular native memory issues and demonstrates how to solve them. – apangin Dec 19 '20 at 13:34

2 Answers


You allocate 2.5 GB for the heap; the JVM itself and the OS components will also take some memory (the rule of thumb here is 1 GB, but the real figures may differ significantly, especially when running in a container), so we are already at 3.5 GB.

Since Java 8, the JVM no longer stores class metadata on the heap, but in an area called 'metaspace'; depending on what your program is doing, and on how many classes and ClassLoaders it uses, this area can easily grow beyond 0.5 GB. This needs to be considered in addition to the things mentioned in the linked posts.
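If you want to see how large metaspace actually is in your application, one option is the standard MemoryPoolMXBean API (a sketch; the pool names are the ones HotSpot typically reports):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceCheck {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // HotSpot typically exposes "Metaspace" and "Compressed Class Space" pools
            if (pool.getName().contains("Metaspace")
                    || pool.getName().contains("Compressed Class Space")) {
                System.out.println(pool.getName()
                        + ": used=" + pool.getUsage().getUsed()
                        + " committed=" + pool.getUsage().getCommitted());
            }
        }
    }
}
```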

tquadrat

As well as the answer posted by tquadrat, you also have to consider what happens when the application uses native memory via direct or mapped byte buffers, which is outside of the heap space but still taken up by the process.
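Such buffer memory counts against the process's RSS but never shows up in a heap dump. A small sketch that allocates off-heap memory and reports it via the standard BufferPoolMXBean (the 64 MB size is arbitrary):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // 64 MB allocated outside the Java heap; invisible to heap dumps
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            // the "direct" and "mapped" pools track off-heap buffer memory
            System.out.println(pool.getName() + ": memoryUsed=" + pool.getMemoryUsed());
        }
    }
}
```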

AlBlue
  • There are some great resources on this topic, like https://www.baeldung.com/native-memory-tracking-in-jvm and https://shipilev.net/jvm/anatomy-quarks/12-native-memory-tracking/ – AlBlue Dec 19 '20 at 12:53