I have a Spring application (OpenJDK 11.0.6) running on a Linux Docker image (`openjdk:11-jre-slim`) deployed to k8s. I started investigating the app's memory usage so that I can size the memory `request` and `limit` values for the pod, and noticed some strange memory usage. Highlights are as follows:
- 4GB node
- 1GB pod memory limit
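For reference, this is the kind of setting I'm ultimately trying to size (hypothetical placeholder values; the deployment name `app` is an assumption):

```shell
# Hypothetical sizing -- placeholder values, not a recommendation
kubectl set resources deployment app \
  --requests=memory=768Mi \
  --limits=memory=1Gi
```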
```
ps -o pid,user,%mem,command ax | sort -b -k3 -r
    1 root     20.4 java ...
 2565 root      0.0 sort -b -k3 -r
 2564 root      0.0 ps -o pid,user,%mem,command ax
 2388 root      0.0 bash
  102 root      0.0 bash
  PID USER     %MEM COMMAND
```

(The header row sorts to the bottom because of the `sort -r`.)
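As I understand it, `%MEM` in `ps` is the process's resident set size (RSS) as a share of the whole machine's memory, not of the cgroup limit. A quick sanity check of that formula (a sketch; it inspects the current shell's own PID via `/proc` -- inside the pod you'd use the java PID, 1):

```shell
# %MEM = VmRSS / MemTotal * 100 -- compute it by hand for a given PID
pid=$$
rss_kb=$(awk '/^VmRSS/ {print $2}' /proc/$pid/status)
total_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
awk -v r="$rss_kb" -v t="$total_kb" 'BEGIN { printf "%.1f%%\n", 100 * r / t }'
```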
```
jcmd 1 VM.native_memory

Native Memory Tracking:

Total: reserved=1933717KB, committed=565453KB
-                 Java Heap (reserved=245760KB, committed=223124KB)
                            (mmap: reserved=245760KB, committed=223124KB)

-                     Class (reserved=1169540KB, committed=136324KB)
                            (classes #25495)
                            (  instance classes #23876, array classes #1619)
                            (malloc=4228KB #78404)
                            (mmap: reserved=1165312KB, committed=132096KB)
                            (  Metadata:   )
                            (    reserved=116736KB, committed=115200KB)
                            (    used=112833KB)
                            (    free=2367KB)
                            (    waste=0KB =0.00%)
                            (  Class space:)
                            (    reserved=1048576KB, committed=16896KB)
                            (    used=15552KB)
                            (    free=1344KB)
                            (    waste=0KB =0.00%)

-                    Thread (reserved=166301KB, committed=25601KB)
                            (thread #161)
                            (stack: reserved=165500KB, committed=24800KB)
                            (malloc=579KB #809)
                            (arena=222KB #321)

-                      Code (reserved=254614KB, committed=82974KB)
                            (malloc=6926KB #21050)
                            (mmap: reserved=247688KB, committed=76048KB)

-                        GC (reserved=1830KB, committed=1758KB)
                            (malloc=1022KB #3092)
                            (mmap: reserved=808KB, committed=736KB)

-                  Compiler (reserved=2121KB, committed=2121KB)
                            (malloc=1991KB #2720)
                            (arena=131KB #5)

-                  Internal (reserved=33603KB, committed=33603KB)
                            (malloc=33563KB #122858)
                            (mmap: reserved=40KB, committed=40KB)

-                     Other (reserved=855KB, committed=855KB)
                            (malloc=855KB #109)

-                    Symbol (reserved=25799KB, committed=25799KB)
                            (malloc=23073KB #310022)
                            (arena=2725KB #1)

-    Native Memory Tracking (reserved=9062KB, committed=9062KB)
                            (malloc=447KB #6338)
                            (tracking overhead=8616KB)

-        Shared class space (reserved=17084KB, committed=17084KB)
                            (mmap: reserved=17084KB, committed=17084KB)

-               Arena Chunk (reserved=6353KB, committed=6353KB)
                            (malloc=6353KB)

-                   Logging (reserved=4KB, committed=4KB)
                            (malloc=4KB #184)

-                 Arguments (reserved=18KB, committed=18KB)
                            (malloc=18KB #481)

-                    Module (reserved=773KB, committed=773KB)
                            (malloc=773KB #4712)
```
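NMT reports everything in KB, so to compare it with the other tools I convert the committed total (the ~560MB figure is 565453KB read as decimal MB; in MiB it's ~552). A small sketch, using the sample line pasted from the output above -- in the pod you'd pipe `jcmd 1 VM.native_memory` through the same filter:

```shell
# Extract the committed total from the NMT "Total:" line, convert KB -> MiB
nmt_total='Total: reserved=1933717KB, committed=565453KB'
committed_kb=$(printf '%s\n' "$nmt_total" | sed 's/.*committed=\([0-9]*\)KB.*/\1/')
echo "committed: $((committed_kb / 1024)) MiB"
# -> committed: 552 MiB
```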
```
kubectl top pod app

NAME          CPU(cores)   MEMORY(bytes)
app-XXXXXXX   661m         827Mi
```
To sum up:
- k8s reports pod memory usage of 827Mi
- `jcmd` reports total committed memory of ~560MB
- `ps` reports the process at 20.4% of system memory; as I understand it, that's a percentage of the node's memory, so ~796MB

So `ps` reports over 200MB of memory usage that `jcmd` doesn't account for. What am I missing? Does the JVM allocate memory that even `jcmd` can't track? As mentioned, my goal is to determine what requests and limits to set on my pods (I'd rather not have them killed under normal usage).
Thanks for taking the time!