In a Kubernetes cluster with numerous microservices, one pod is dedicated to a Java Virtual Machine (JVM) running a Java 1.8 data processing application.
Until recently, jobs running in that JVM pod consumed less than 1 GB of RAM, so the pod was set up with a 4 GB memory limit and no explicit heap size settings for the JVM.
Some new data now require about 2.5 GB for the entire pod, including the JVM (as reported by the kubectl top command after launching with an increased memory limit of 8 GB), but with a 4 GB limit the pod crashes soon after starting.
Using a heap size range like -Xms256m -Xmx3072m with a 4 GB limit does not solve the problem. In fact, the pod then does not even start.
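For what it's worth, since no -Xmx was set originally, the JVM derived its default max heap from the memory it detects at startup. A minimal diagnostic sketch (class name HeapCheck is just for illustration) that could be run inside the pod to see what heap ceiling the JVM actually picked:

```java
// Prints the heap limits this JVM instance is actually running with,
// which may differ from the pod's memory limit.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory(): the ceiling the JVM will grow the heap to (-Xmx or the default)
        System.out.println("Max heap (MB):   " + rt.maxMemory() / (1024 * 1024));
        // totalMemory(): heap currently committed by the JVM
        System.out.println("Total heap (MB): " + rt.totalMemory() / (1024 * 1024));
    }
}
```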
Is there any way to parameterize the JVM so that it can accommodate the 2.5 GB needed, without raising the pod's 4 GB memory limit?