
According to the `top` command, the Java/Tomcat process is using 77.9% of available memory (32 GB physical * 77.9% ≈ 24.9 GB):

`top` output for the process:

TYPE SIZE
VIRT: 32.5 GB
RES: 24.5 GB
SHR: 24.9 MB
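
For reference, the same resident-size figure can be cross-checked straight from the kernel; a minimal sketch, with `$PID` standing in for the Tomcat process ID:

```
# Resident set size (RSS) of the process as the kernel reports it, in kB
grep VmRSS /proc/$PID/status

# top's %MEM column is essentially VmRSS divided by MemTotal
grep MemTotal /proc/meminfo
```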

The JVM `-Xmx` option has been set for this application to allow the JVM 12 GB of heap, which appears to be respected.
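
A quick way to confirm which heap limit the running JVM actually applied is to dump its resolved flags; a sketch, again with `$PID` as a placeholder:

```
# MaxHeapSize (in bytes) should correspond to the -Xmx12g setting
jcmd $PID VM.flags | tr ' ' '\n' | grep MaxHeapSize
```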

However, our monitoring tools keep raising alerts as memory usage on the box slowly creeps towards the physical limit, and we then have to restart Tomcat to bring it back down.

The 12 GB JVM maximum is also reflected in the blue/green heap diagram provided by JProfiler. JProfiler's other diagram shows 280.7 MB of non-heap memory allocated.

Heap/JProfiler

Non-Heap/JProfiler

None of the instrumentation in JProfiler seems to explain the gap between the 12 GB JVM maximum and the roughly 25 GB reported by `top`.

JProfiler consistently shows heap memory usage hovering between 4 GB and 7 GB.
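
For reference, those heap figures could also be cross-checked with the JDK's own tooling; a minimal sketch:

```
# Sample GC/heap statistics every 5 seconds; the "used" columns
# (EU = eden used, OU = old gen used, in kB) should roughly track
# the 4-7 GB that JProfiler reports
jstat -gc $PID 5s
```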

I'm considering profiling with a Linux-native memory profiler (Valgrind) to see if that reveals more information, but are there any other features in JProfiler that could help explain the gap?

I've used JProfiler's Heap Walker, Live Memory views, and allocation recording, but the application's memory usage seems healthy and unremarkable.

It's definitely a powerful tool, which makes me wonder what I'm missing here.

Other things I've tried include `cat /proc/PID/maps | wc -l` for the same process: out of a `max_map_count` of 65,530, the process is using 845 mappings. This seems to indicate that even if memory-mapped files are involved, the number of mappings is well below the max. I'm just not sure how to determine how much total memory those mappings are taking up.
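
To get at the "how much space" part, the per-mapping sizes are exposed in `/proc/PID/smaps`; a sketch of what could be summed (the rollup file assumes a reasonably recent kernel):

```
# Total resident memory across all mappings, in kB; this should
# line up with the RES value reported by top
grep '^Rss:' /proc/$PID/smaps | awk '{sum += $2} END {print sum " kB"}'

# On newer kernels (4.14+) the same totals are pre-aggregated
cat /proc/$PID/smaps_rollup
```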

I also compared `top` with `atop`, and the two are consistent.

I also ran `pmap` against the process, but wasn't sure how to interpret the massive text dump it produces. For that reason, I'm hoping Valgrind is more human-friendly.
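
One way to make the `pmap` dump easier to scan is to sort it by resident size; a sketch:

```
# Extended output, largest resident mappings first; the Java heap shows
# up as one big anonymous mapping, while a native leak typically shows
# up as a growing collection of smaller anonymous blocks
pmap -x $PID | sort -k3 -n -r | head -n 20
```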

pio1dcaqr
  • I'm wondering if you're having any problems besides your monitoring tools. Do you see any performance degradation? Slow throughput or an increase in latency? Anything else? – markspace Feb 21 '23 at 16:01
  • The Java heap is where Java stores objects; it is not the only memory used by Java. There is also memory to store classes and metadata, native (a.k.a. direct) memory, memory for thread stacks, memory for the garbage collectors, etc. (a rough sketch for estimating one of these follows the comments). – Mark Rotteveel Feb 21 '23 at 16:13
  • @markspace Not that I know of. That said, I don't have very precise monitoring in place so it's possible. But anecdotally, I can say performance seems to be very good at all times even under heavy load. – pio1dcaqr Feb 21 '23 at 17:27
  • @MarkRotteveel Any suggestions for how to get more detailed info about allocation in those other areas of memory? Maybe I was naïve to think that JProfiler would magically point me to the answer, but are there any other tools or techniques I could employ here to get closer to an explanation of the high allocation? Also, to your point, my assumption would have been that those non-heap memory areas wouldn't consume more memory than the app itself. If the app always consumes less than 10 GB of heap memory, I don't understand how to get to an explanation for non-heap memory being twice that amount. – pio1dcaqr Feb 21 '23 at 17:36
  • @MarkRotteveel The other thing is, JProfiler says non-heap memory is around 300 MB while the `top` `RES` value implies that number is closer to 25 GB. That's a big discrepancy. – pio1dcaqr Feb 21 '23 at 17:38
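
As an illustration of the per-area accounting mentioned in the comments, a rough sketch for one of those areas (thread stacks), assuming the default stack size has not been changed with `-Xss`:

```
# Number of live threads in the process; multiplied by the stack size
# (commonly about 1 MB per thread on 64-bit Linux) this gives a rough
# upper bound on thread-stack memory
ls /proc/$PID/task | wc -l
```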

1 Answer


Process size and the heap size in the JVM are not the same; see "How can I measure the actual memory usage of an application or process?" for a discussion of process size on Linux.

If your heap size stays constant and the process size is increasing all the time, there must be a native memory leak. JProfiler cannot detect such a leak because it is only concerned with Java memory usage. The "Non-heap memory" that is reported by JProfiler is derived from bookkeeping by the JVM. It would not include a memory leak from a native library that is used via JNI.
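
For example, that JVM-side bookkeeping can be inspected with Native Memory Tracking; a sketch (NMT has to be enabled when the JVM starts and adds some overhead):

```
# Start the JVM (e.g. via CATALINA_OPTS) with:
#   -XX:NativeMemoryTracking=summary

# Then query the running process:
jcmd $PID VM.native_memory summary

# Take a baseline and diff against it later to see which category grows
jcmd $PID VM.native_memory baseline
jcmd $PID VM.native_memory summary.diff
```

If the NMT categories stay roughly flat while the RES value in `top` keeps growing, the growth is happening outside the JVM's bookkeeping, which is consistent with a leak in a native library used via JNI.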

Ingo Kegel