
I have a Play Framework based Java application deployed in Kubernetes. One of the pods died due to an out-of-memory error/memory leak. Locally, I can use some utilities to monitor JVM heap usage. I am new to Kubernetes.

I would appreciate it if you could tell me how to check the heap usage history of my application in a Kubernetes pod that got killed. kubectl get events on the killed pod will give the event history, but I want to check the object-wise heap usage history on that dead pod. Thanks much.

ApprenticeWST
  • I am also learning K8s starting this week, so I'm throwing out this idea. If the OOM issue is recurring, then how about saving the GC log of your app and a heap dump (when it crashes due to OOM) to a persistent volume? That way, even if the pod is gone, you'll still have them and can do forensic analysis on them. Those two artifacts are absolutely necessary when you are troubleshooting OOMs. – suv3ndu Nov 20 '20 at 15:57
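
As a rough sketch of that idea, the JVM flags that produce those two artifacts could look like the line below, assuming the persistent volume is mounted at /dumps (the mount path and jar name are placeholders; the GC logging syntax shown is for Java 9+):

$ java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps/heap.hprof -Xlog:gc*:file=/dumps/gc.log -jar your-app.jar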

1 Answer


You can install add-ons or external tools like Prometheus or metrics-server.

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.

You can define queries. For CPU percentage:

avg((sum (rate (container_cpu_usage_seconds_total {container_name!="" ,pod="<Pod name>" } [5m])) by (namespace , pod, container ) / on (container , pod , namespace) ((kube_pod_container_resource_limits_cpu_cores >0)*300))*100)

For memory percentage:

avg((avg (container_memory_working_set_bytes{pod="<pod name>"}) by (container_name , pod ))/ on (container_name , pod)(avg (container_spec_memory_limit_bytes>0 ) by (container_name, pod))*100)

Take a look: prometheus-pod-memory-usage.

You can visualize such metrics using Grafana - take a look how to set it up with Prometheus - grafana-prometheus-setup.

Metrics-server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.

Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines.

You can execute:

$ kubectl top pod <your-pod-name> --namespace=<your-namespace> --containers

The above command gives you both the CPU usage and the memory usage for a given pod and its containers.
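
Illustrative output (the pod/container names and values below are placeholders, not real measurements):

POD              NAME       CPU(cores)   MEMORY(bytes)
<your-pod-name>  your-app   15m          512Mi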

See how to install metrics-server first: metrics-server-installation.

Otherwise, if you want to check CPU/memory usage without installing any third-party tool, you can get a pod's memory and CPU usage from its cgroup (one-liner equivalents are shown after these steps):

  1. Open a shell in the running container: kubectl exec -it pod_name -- /bin/bash
  2. For CPU usage, cd /sys/fs/cgroup/cpu and run cat cpuacct.usage
  3. For memory usage, cd /sys/fs/cgroup/memory and run cat memory.usage_in_bytes

Remember that memory usage is in bytes.
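
If you only need the raw counters, you can also read the same files without opening an interactive shell (this assumes a cgroup v1 node, as the paths above imply; on cgroup v2 nodes the paths differ):

$ kubectl exec pod_name -- cat /sys/fs/cgroup/cpu/cpuacct.usage
$ kubectl exec pod_name -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes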

Take a look: memory-usage-kubernetes.

Malgorzata
  • Thanks. Which of the above can give object-wise heap usage, please? – ApprenticeWST Nov 20 '20 at 11:27
  • It depends; installing Prometheus with Grafana will give you a wider picture - with diagrams (with time dependency), which are really useful. Setting up just metrics-server and executing the command kubectl top will give you just simple output. However, if you are working on GCP, you can monitor memory/CPU usage in the dashboard. You can also find some useful solutions on how to reduce JVM memory usage here - https://addshore.com/2020/05/reducing-java-jvm-memory-usage-in-containers-and-on-kubernetes/ . Also check the links I have attached in my answer. – Malgorzata Nov 20 '20 at 15:07