
I have a 2-node kubeadm cluster set up on VirtualBox CentOS VMs with flannel. Following the DNS troubleshooting instructions, service names resolve in pods on the master node, but not in pods on the slave node.

from master:

kubectl exec -ti etcd-master -n kube-system -- nslookup kubernetes.default
Server:    192.168.1.1
Address 1: 192.168.1.1

Name:      kubernetes.default
Address 1: 92.242.140.21 unallocated.barefruit.co.uk
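
One thing I noticed: etcd-master is a kubeadm static pod, and as far as I know those run with hostNetwork, so the pod uses the node's /etc/resolv.conf rather than the cluster DNS service. That would explain why the server above is my router at 192.168.1.1. I believe this can be checked with:

kubectl get pod etcd-master -n kube-system -o jsonpath='{.spec.hostNetwork}'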

from slave:

kubectl exec -ti busybox -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10

nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
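
In case it helps, these are the checks I've been using to look at the DNS pods and the busybox pod's resolver config (assuming the default kubeadm setup where the DNS pods carry the k8s-app=kube-dns label; the container name kubedns assumes the pre-CoreDNS kube-dns deployment):

kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
kubectl exec -ti busybox -- cat /etc/resolv.conf
kubectl logs -n kube-system -l k8s-app=kube-dns -c kubedns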

This issue is mentioned in a comment by @P.J.Meisch, but there was no resolution since it wasn't the actual question.

The /etc/resolv.conf on each of the nodes (VMs) just has my host machine's IP as the nameserver. Is this wrong?

# Generated by NetworkManager
search fios-router.home
nameserver 192.168.1.1
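
For comparison, my understanding is that a pod wired up to cluster DNS on a default kubeadm install should see something like this in its own /etc/resolv.conf (10.96.0.10 is the kube-dns service IP, matching the busybox output above; the search domains assume the default cluster.local cluster domain and a pod in the default namespace):

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5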

Is flannel a bad choice for this setup?

anweb
  • Flannel is ok and has no influence on that issue. I suggest setting up a dedicated name server with debug enabled, and then provide here logs for further inspection. – d0bry May 11 '18 at 16:09
  • Thanks for the flannel info @d0bry. How do I set up a nameserver? Is this in kubernetes? – anweb May 11 '18 at 17:08
  • @d0bry can you advise on the way to make a nameserver for VMs and kubernetes? – anweb May 14 '18 at 20:51
  • do you have kubedns running? Please check https://stackoverflow.com/questions/41655458/kubernetes-default-name-does-not-resolve if this helps – P.J.Meisch May 15 '18 at 04:56

0 Answers