I am investigating out-of-memory issues with a Java application running in a Docker container orchestrated by Mesos/Marathon.
- The container memory limit is set to 2 GB.
- The JVM heap is explicitly set to a 1 GB minimum and a 1.5 GB maximum (a sketch of the equivalent launch flags follows this list).
- Under a constant test workload, the container eventually exits with code 137 (OOM kill).
- Comparing two javacores, one taken at the start of the test and one after 1 hour, the JIT "Other" category shows by far the largest delta.
- No issues are seen with JVM heap usage itself.
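For context, a minimal sketch of the launch settings described above; the exact command and Marathon app definition in our environment differ, and `app.jar` is just a placeholder:

```
# The 2 GB limit is enforced by Marathon/Docker, not by the JVM itself.
# Heap settings as described above; everything outside the Java heap
# (JIT caches, classes, threads, direct buffers) must fit in the
# remaining ~0.5 GB of the container at maximum heap.
java -Xms1g -Xmx1536m -jar app.jar
```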
Initial javacore:
2MEMUSER +--JIT: 318,789,520 bytes / 778 allocations
2MEMUSER | |
3MEMUSER | +--JIT Code Cache: 268,435,456 bytes / 1 allocation
2MEMUSER | |
3MEMUSER | +--JIT Data Cache: 16,777,728 bytes / 8 allocations
2MEMUSER | |
3MEMUSER | +--Other: 33,576,336 bytes / 769 allocations
Javacore after 1 hour:
2MEMUSER +--JIT: 525,843,728 bytes / 8046 allocations
2MEMUSER | |
3MEMUSER | +--JIT Code Cache: 268,435,456 bytes / 1 allocation
2MEMUSER | |
3MEMUSER | +--JIT Data Cache: 62,916,480 bytes / 30 allocations
2MEMUSER | |
3MEMUSER | +--Other: 194,491,792 bytes / 8015 allocations
I would like to know whether analyzing a core dump with the Eclipse Memory Analyzer Tool (MAT) might shed light on what is in this "Other" space.
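For reference, a minimal sketch of how we could trigger dumps on demand from inside the application with this IBM JRE, assuming the `com.ibm.jvm.Dump` API that ships with IBM SDK 8 (the class name and call site below are illustrative only); the resulting system dump is what MAT, with the IBM DTFJ adapter, or jdmpview would open:

```java
import com.ibm.jvm.Dump;

// Illustrative sketch only: trigger IBM-specific dumps programmatically so the
// "before" and "after" states can be captured at known points in the test run.
public class DumpTrigger {
    public static void trigger() {
        Dump.JavaDump();   // writes a javacore (the text snapshots compared above)
        Dump.SystemDump(); // writes a full system (core) dump for offline analysis
    }
}
```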
We have tried to limit JIT memory usage by following this discussion and adding `-Xjit:disableCodeCacheConsolidation` and `-Xcodecachetotal128m`, but we cannot seem to get these arguments to work; a sketch of how we are passing them is below.
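A minimal sketch of how the options are being passed (full command line trimmed, `app.jar` is a placeholder). One detail we are still checking: as far as we understand, if `-Xjit` appears more than once on a J9 command line only the last occurrence is honored, so multiple JIT suboptions normally need to be comma-separated inside a single `-Xjit` argument.

```
# Sketch only; whether disableCodeCacheConsolidation is accepted by this
# particular build is part of what we are trying to confirm.
java -Xcodecachetotal128m \
     -Xjit:disableCodeCacheConsolidation \
     -Xms1g -Xmx1536m -jar app.jar
```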
We are using IBM JRE 1.8.0 on Linux amd64-64 (build 8.0.5.17 - pxa6480sr5fp17-20180627_01 (SR5 FP17)).
Can anyone please share tools or experience for troubleshooting JIT native memory consumption?