I am running a Kubernetes cluster on-premises (nothing in the cloud) with one master and two worker nodes:
- k8s-master : 192.168.100.100
- worker-node-1 : 192.168.100.101
- worker-node-2 : 192.168.100.102
I use kubernetes/ingress-nginx to route traffic to a simple app whose pods run on both worker nodes. These are the pods running in the cluster:
[root@k8s-master ingress]# kubectl get pods -A -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE   IP                NODE            NOMINATED NODE   READINESS GATES
default                hello-685445b9db-b7nql                       1/1     Running   0          44m   10.5.2.7          worker-node-2   <none>           <none>
default                hello-685445b9db-ckndn                       1/1     Running   0          44m   10.5.2.6          worker-node-2   <none>           <none>
default                hello-685445b9db-vd6h2                       1/1     Running   0          44m   10.5.1.18         worker-node-1   <none>           <none>
default                ingress-nginx-controller-56c75d774d-p7whv    1/1     Running   1          30h   10.5.1.14         worker-node-1   <none>           <none>
kube-system            coredns-74ff55c5b-s8zss                      1/1     Running   12         16d   10.5.0.27         k8s-master      <none>           <none>
kube-system            coredns-74ff55c5b-w6rsh                      1/1     Running   12         16d   10.5.0.26         k8s-master      <none>           <none>
kube-system            etcd-k8s-master                              1/1     Running   12         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-apiserver-k8s-master                    1/1     Running   12         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-controller-manager-k8s-master           1/1     Running   14         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-flannel-ds-76mt8                        1/1     Running   1          30h   192.168.100.102   worker-node-2   <none>           <none>
kube-system            kube-flannel-ds-bfnjw                        1/1     Running   10         16d   192.168.100.101   worker-node-1   <none>           <none>
kube-system            kube-flannel-ds-krgzg                        1/1     Running   13         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-proxy-6bq6n                             1/1     Running   1          30h   192.168.100.102   worker-node-2   <none>           <none>
kube-system            kube-proxy-df8fn                             1/1     Running   13         16d   192.168.100.100   k8s-master      <none>           <none>
kube-system            kube-proxy-z8q2z                             1/1     Running   10         16d   192.168.100.101   worker-node-1   <none>           <none>
kube-system            kube-scheduler-k8s-master                    1/1     Running   12         16d   192.168.100.100   k8s-master      <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-799cd98cf6-zh8xs   1/1     Running   9          16d   192.168.100.101   worker-node-1   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-74d688b6bc-hvxgm        1/1     Running   10         16d   10.5.1.17         worker-node-1   <none>           <none>
And these are the services running on my cluster:
[root@k8s-master ingress]# kubectl get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
hello                                NodePort    10.105.236.241   <none>        80:31999/TCP                 30h
ingress-nginx-controller             NodePort    10.110.141.41    <none>        80:30428/TCP,443:32682/TCP   30h
ingress-nginx-controller-admission   ClusterIP   10.109.15.31     <none>        443/TCP                      30h
kubernetes                           ClusterIP   10.96.0.1        <none>        443/TCP                      16d
And this is the ingress description:
[root@k8s-master ingress]# kubectl describe ingress ingress-hello
Name:             ingress-hello
Namespace:        default
Address:          10.110.141.41
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path    Backends
  ----        ----    --------
  *
              /hello   hello:80 (10.5.1.18:80,10.5.2.6:80,10.5.2.7:80)
Annotations:  kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/rewrite-target: /
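For reference, the Ingress above comes from a manifest along these lines (reconstructed from the describe output; the apiVersion and pathType are my assumptions, since they do not appear in the description):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello          # rewritten to / by the annotation above
        pathType: Prefix      # assumption; not shown by kubectl describe
        backend:
          service:
            name: hello
            port:
              number: 80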
The issue: accessing the first node at the ingress controller port 30428, i.e. http://192.168.100.101:30428, works fine with no problems. But accessing worker-node-2 on the same port, http://192.168.100.102:30428, does NOT respond, neither from outside the node nor from inside it. I also tried telnet from inside worker-node-2, with no luck either:
[root@worker-node-2 ~]# telnet 192.168.100.102 30428
Trying 192.168.100.102...
The most interesting thing is that the port does show up in netstat: running it inside worker-node-2 shows ingress port 30428 in the LISTEN state:
[root@worker-node-2 ~]# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1284/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      2578/kube-proxy
tcp        0      0 0.0.0.0:32682           0.0.0.0:*               LISTEN      2578/kube-proxy
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1856/dnsmasq
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1020/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1016/cupsd
tcp        0      0 127.0.0.1:41561         0.0.0.0:*               LISTEN      1284/kubelet
tcp        0      0 0.0.0.0:30428           0.0.0.0:*               LISTEN      2578/kube-proxy
tcp        0      0 0.0.0.0:31999           0.0.0.0:*               LISTEN      2578/kube-proxy
tcp6       0      0 :::10250                :::*                    LISTEN      1284/kubelet
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::10256                :::*                    LISTEN      2578/kube-proxy
tcp6       0      0 :::22                   :::*                    LISTEN      1020/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      1016/cupsd
udp        0      0 0.0.0.0:5353            0.0.0.0:*                           929/avahi-daemon: r
udp        0      0 0.0.0.0:44997           0.0.0.0:*                           929/avahi-daemon: r
udp        0      0 192.168.122.1:53        0.0.0.0:*                           1856/dnsmasq
udp        0      0 0.0.0.0:67              0.0.0.0:*                           1856/dnsmasq
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1/systemd
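I have not dug deeper yet, but assuming kube-proxy runs in iptables mode (my assumption; it does bind the NodePorts above), its NAT rule for the port should also be inspectable on worker-node-2 with something like:

[root@worker-node-2 ~]# iptables -t nat -L KUBE-NODEPORTS -n | grep 30428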
Based on my understanding, every worker node must expose the NodePort for the ingress controller service, which is 30428, right?
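One setting I suspect may be involved (an assumption on my part, not verified in the output above) is the service's externalTrafficPolicy: as far as I know, when it is set to Local, kube-proxy opens the NodePort on every node but only forwards traffic on nodes that actually run a ready controller pod, and drops it elsewhere. It can be checked and, if needed, switched like this:

# print the current policy for the controller service (default namespace here)
[root@k8s-master ingress]# kubectl get svc ingress-nginx-controller -o jsonpath='{.spec.externalTrafficPolicy}{"\n"}'

# if it prints "Local", switching to "Cluster" should make every node answer,
# at the cost of an extra hop and losing the client source IP
[root@k8s-master ingress]# kubectl patch svc ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'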
Edit: I found that the pod "ingress-nginx-controller-56c75d774d-p7whv" is deployed only on worker-node-1. Do I need to make sure an ingress-nginx controller pod is running on every node? If that is true, how do I achieve it? (See the sketch below for the options I am considering.)
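The two approaches I can think of (both untested sketches on my part) are scaling the controller Deployment so the scheduler can spread a replica onto each worker, or converting the controller to a DaemonSet so one pod runs on every schedulable node:

# option 1: one replica per worker (pod anti-affinity may be needed to force the spread)
[root@k8s-master ingress]# kubectl scale deployment ingress-nginx-controller --replicas=2

For the DaemonSet route, the controller manifest would need "kind: Deployment" changed to "kind: DaemonSet" and the "replicas"/"strategy" fields removed.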