We have a pod running only a single JVM process, and the memory used by the JVM heap is much lower than the memory reported for the pod.

Below is the output of jmap -heap <PID>:

Attaching to process ID 8, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.252-b14

using thread-local object allocation.
Garbage-First (G1) GC with 1 thread(s)

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 536870912 (512.0MB)
   NewSize                  = 1363144 (1.2999954223632812MB)
   MaxNewSize               = 318767104 (304.0MB)
   OldSize                  = 5452592 (5.1999969482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 100663296 (96.0MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 16777216 (16.0MB)

Heap Usage:
G1 Heap:
   regions  = 32
   capacity = 536870912 (512.0MB)
   used     = 138141888 (131.74237060546875MB)
   free     = 398729024 (380.25762939453125MB)
   25.730931758880615% used
G1 Young Generation:
Eden Space:
   regions  = 7
   capacity = 318767104 (304.0MB)
   used     = 117440512 (112.0MB)
   free     = 201326592 (192.0MB)
   36.8421052631579% used
Survivor Space:
   regions  = 1
   capacity = 16777216 (16.0MB)
   used     = 16777216 (16.0MB)
   free     = 0 (0.0MB)
   100.0% used
G1 Old Generation:
   regions  = 2
   capacity = 201326592 (192.0MB)
   used     = 3924160 (3.74237060546875MB)
   free     = 197402432 (188.25762939453125MB)
   1.9491513570149739% used

8145 interned Strings occupying 771960 bytes.

As you can see, the JVM heap uses only about 131 MB, but the pod's memory consumption is around 450 MB.

Below are the JVM arguments:

java -Xmx512m -Xms128m -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -jar app.jar

The Java version we are using is container-aware. What could be the reason for such a gap in memory consumption?
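
To see what the JVM allocates beyond the heap, one option (a diagnostic sketch, not part of the original setup) is to restart it with Native Memory Tracking enabled and query the summary with jcmd; both the flag and the command are standard HotSpot tooling, and <PID> is the JVM process ID as above:

java -XX:NativeMemoryTracking=summary -Xmx512m -Xms128m -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -jar app.jar
jcmd <PID> VM.native_memory summary

Unlike jmap -heap, the NMT summary also reports Metaspace, the code cache, thread stacks, GC bookkeeping and other internal JVM allocations, all of which count toward the pod's memory but not toward the Java heap.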

Update 1: Added the pod YAML

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-04-30T18:58:03Z"
  labels:
    pod-template-hash: 5c85b85966
    release: app-365b18d80ba58451c00b554a3429c9af-local
  name: pod-name
  namespace: default
  resourceVersion: "26701471"
spec:
  containers:
  - env:
    - name: LOG_LEVEL
      value: INFO
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    image: repo/name:tag
    imagePullPolicy: Always
    name: container-name
    resources:
      limits:
        memory: 640Mi
      requests:
        cpu: 100m
        memory: 512Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-bwksb
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: pull-secret
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-bwksb
    secret:
      defaultMode: 420
      secretName: default-token-bwksb

The base image is azul/zulu-openjdk:8
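
As an aside (this part is my assumption, since the node setup is not in the post: it presumes a cgroup v1 node, which was typical at the time), it can help to check from inside the container what the pod's reported usage is actually made of, because memory.usage_in_bytes includes page cache while the rss line in memory.stat is closer to what the JVM process itself holds:

kubectl exec pod-name -c container-name -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes
kubectl exec pod-name -c container-name -- cat /sys/fs/cgroup/memory/memory.stat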

Update 2: Added the Dockerfile

FROM azul/zulu-openjdk:8

RUN apt update && apt dist-upgrade -y

WORKDIR /code
COPY ./build/libs/APP.jar ./
COPY ./docker/config config


CMD java -Xmx512m -Xms128m -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -jar APP.jar
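
One more quick check (again only a sketch; the pod and container names are taken from the YAML above) is to confirm from inside the running container that this Zulu 8 build really honours the cgroup memory limit, using the standard -XshowSettings:vm launcher option:

kubectl exec pod-name -c container-name -- java -XshowSettings:vm -version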