
I have a newly created AWS EKS 1.18 cluster with applications deployed on it. Everything is fine: tests and load tests are successful, and my HPA and metrics-server are working correctly.

But when I deploy a new version of a service, metrics-server reports `unable to fetch pod metrics for pod xxx: no metrics known for pod` for the newly deployed pod. After a short while the problem resolves itself and everything is fine again.
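Roughly how I observe it (the namespace and deployment names below are placeholders for my own, and I assume metrics-server runs in `kube-system` as usual):

```
# Roll out a new version of the service
kubectl -n my-namespace rollout restart deployment/my-service

# Right after the rollout, metrics-server logs the error for the new pod
kubectl -n kube-system logs deploy/metrics-server | grep "no metrics known for pod"

# A short while later the new pod is reported normally again
kubectl -n my-namespace top pod
```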

My question is: is this expected behaviour for metrics-server, or should I check my configs again?

Thank you very much.

Oguzhan Aygun
  • There is a [comment](https://github.com/kubernetes-sigs/metrics-server/issues/299#issuecomment-550030920) about that on GitHub: `Metrics Server is expected to report "no metrics known for pod" until cache will be populated. Cache can be empty on freshly deployed metrics-server or can miss values for newly deployed pods`. So if I understand correctly it's working as expected. I assume the problem resolves itself after 60s? By default metrics are scraped every 60s. – Jakub Nov 26 '20 at 08:40
  • Thank you very much @Jakub, yes, it resolves itself after a short amount of time. After your comment I also think it's working as expected. – Oguzhan Aygun Nov 26 '20 at 10:22
  • Happy to help. I have posted an answer with this information. If this answer or any other one solved your issue, please mark it as accepted or upvote it as per [stackoverflow rules](https://stackoverflow.com/help/someone-answers). – Jakub Dec 01 '20 at 08:31

1 Answer


There is a [comment](https://github.com/kubernetes-sigs/metrics-server/issues/299#issuecomment-550030920) about that on GitHub:

> Metrics Server is expected to report "no metrics known for pod" until cache will be populated. Cache can be empty on freshly deployed metrics-server or can miss values for newly deployed pods.

So if I understand correctly, this is working as expected. The problem should resolve itself within about 60 seconds, since by default metrics-server scrapes metrics every 60s.
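If you want to double-check on your cluster, you can look at the resolution metrics-server was started with and watch the metrics appear for a fresh pod. A rough sketch, assuming the default `metrics-server` deployment in `kube-system` and a placeholder namespace:

```
# Verify the metrics API is registered and available
kubectl get apiservice v1beta1.metrics.k8s.io

# Show the flags metrics-server runs with; look for --metric-resolution (60s by default)
kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].args}'

# Once the next scrape has happened, the new pod shows up here
kubectl -n my-namespace top pod
```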

Jakub