
I am trying to figure out why metrics-server isn't collecting stats from the node it is deployed on (r2s13). There are 3 nodes in my cluster (1 master and 2 workers).

  • metrics-server version: 0.3.1

  • kubernetes version: 1.12 (installed with kubeadm)

  • CNI plugin: Weave Net

kubectl top node output:

    NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
    r2s12   344m         4%     3079Mi          12%
    r2s14   67m          0%     1695Mi          21%
    r2s13

In the metrics-server log, the line below is repeated (only for r2s13, the node where metrics-server is deployed):

    E1023 15:28:14.643011 1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:r2s13: unable to fetch metrics from Kubelet r2s13 (10.199.183.218): Get https://10.199.183.218:10250/stats/summary/: dial tcp 10.199.183.218:10250: i/o timeout

I also can't ping from the metrics-server pod to the IP of the node it is running on.
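For anyone trying to reproduce this, the check below is a minimal sketch: it pins a throwaway busybox pod to r2s13 and probes the host's kubelet from the pod network (the pod name nettest and the busybox image are arbitrary choices for the test, not part of my setup). A quick TLS/HTTP error means the port is reachable; hanging until the timeout matches the metrics-server error above.

    # Pin a throwaway pod to r2s13 and probe its host from the pod network.
    # Ping mirrors the failing check described above; wget probes kubelet port 10250
    # (any fast error is fine here -- only a timeout indicates blocked traffic).
    kubectl run nettest --rm -it --restart=Never --image=busybox \
      --overrides='{"spec":{"nodeName":"r2s13"}}' -- sh -c \
      'ping -c 3 10.199.183.218; wget -T 5 -qO- http://10.199.183.218:10250/ || true'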

I have added the following flags to the metrics-server container command:

    command:
    - /metrics-server
    - --kubelet-insecure-tls
    - --kubelet-preferred-address-types=InternalIP
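To double-check that these flags are actually in effect on the running pod, something like the following works (the k8s-app=metrics-server label is the one used by the upstream 0.3.x manifests; adjust if yours differs):

    # Print the container command of the running metrics-server pod,
    # then tail its log to see whether the scrape error persists.
    kubectl -n kube-system get pod -l k8s-app=metrics-server \
      -o jsonpath='{.items[0].spec.containers[0].command}'; echo
    kubectl -n kube-system logs -l k8s-app=metrics-server --tail=20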
  • Have you tried to fix /etc/resolv.conf as @Rico suggested in https://stackoverflow.com/a/52230952/9929015 ? – Vit Nov 19 '18 at 15:29
  • Yes, resolv.conf on each node contains all the nodes, and ping between the nodes works fine; however, I can't ping from inside the metrics-server pod to the IP of its hosting node. – Hanna Nov 20 '18 at 13:25
  • @Hanna did you solve the problem? I have a similar problem now and any news about this would help me. – Yonsy Solis Oct 21 '19 at 19:50
  • Exactly the same problem here (and also using the same extra arguments). Has anyone already found a solution? – Quintesse Apr 27 '20 at 22:01

1 Answer


In my case it was because the firewall (ufw) wouldn't allow incoming traffic on the weave interface.

Executing the following fixed the problem:

    ufw allow in on weave
    ufw reload
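To verify, check that the rule is active and wait for the next scrape; r2s13 should then show up in kubectl top node (this assumes the same k8s-app=metrics-server label as the upstream manifests):

    # Confirm the new rule on the weave interface is in place.
    ufw status verbose

    # After the next scrape (roughly a minute), r2s13 should report metrics again.
    kubectl top node
    kubectl -n kube-system logs -l k8s-app=metrics-server --tail=10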