
I just started building my own Kubernetes cluster on a few Raspberry Pi devices, following the guide from Alex Ellis. The problem: my NodePort only responds on the nodes that are actually running a pod of the Deployment. Requests to the NodePort on the other nodes are not forwarded to the pods and simply time out.

Service & Deployment

apiVersion: v1
kind: Service
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31118
  selector:
    app: markdownrender
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: markdownrender
  labels:
   app: markdownrender
spec:
  replicas: 2
  selector:
    matchLabels:
      app: markdownrender
  template:
    metadata:
      labels:
        app: markdownrender
    spec:
      containers:
      - name: markdownrender
        image: functions/markdownrender:latest-armhf
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP

kubectl get services

NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP          111m
markdownrender   NodePort    10.104.5.83   <none>        8080:31118/TCP   102m

kubectl get deployments

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
markdownrender   2/2     2            2           101m

kubectl get pods -o wide

NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
markdownrender-f9744b577-pcplc   1/1     Running   1          90m   10.244.1.2   kube-node233   <none>           <none>
markdownrender-f9744b577-x4j4k   1/1     Running   1          90m   10.244.3.2   kube-node232   <none>           <none>

Running `curl http://127.0.0.1:31118 -d "# test" --max-time 1` on any node other than kube-node233 and kube-node232 always fails with "Connection timed out".
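To sweep the NodePort from every machine without waiting on curl each time, here is a minimal helper using bash's built-in `/dev/tcp` redirection (a sketch; the `192.168.2.23x` addresses in the example are an assumption based on the 192.168.2.230 master that appears in the iptables output below):

```shell
#!/usr/bin/env bash
# check_nodeport HOST PORT: attempt a TCP connect with a 1-second timeout
# and report whether the port accepted the connection.
check_nodeport() {
  local host=$1 port=$2
  if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# Example sweep over the nodes (addresses are assumed, adjust to your network):
# for n in 230 231 232 233; do check_nodeport "192.168.2.$n" 31118; done
```

A healthy NodePort service should report "open" on every node, regardless of where the pods run.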

sudo iptables-save (on the 230 master node)

# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:19 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:05:19 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:19 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:05:20 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:05:20 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:05:20 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

sudo iptables-save (node 231, which has no container running)

# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:08:01 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:08:01 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

sudo iptables-save (node 232, whose pod runs the container)

# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SEP-TPAZEM2ZI6GIP4H4 - [0:0]
:KUBE-SVC-QXMBXH4RFEQTDMUZ - [0:0]
:KUBE-SEP-7S77XOJGOAF6ON4P - [0:0]
:KUBE-SEP-GE6BLW5CUF74UDN2 - [0:0]
:KUBE-SEP-IRMT6RY5EEEBXDAY - [0:0]
:KUBE-SEP-232DQYSHL5HNRYWJ - [0:0]
:KUBE-SEP-2Z3537XSN3RJRU3M - [0:0]
:KUBE-SEP-A4UL7OUXQPUR7Y7Q - [0:0]
:KUBE-SEP-275NWNNANOEIGYHG - [0:0]
:KUBE-SEP-CPH3WXMLRJ2BZFXW - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.104.5.83/32 -p tcp -m comment --comment "default/markdownrender: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/markdownrender:" -m tcp --dport 31118 -j KUBE-SVC-QXMBXH4RFEQTDMUZ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IRMT6RY5EEEBXDAY
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2Z3537XSN3RJRU3M
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-A4UL7OUXQPUR7Y7Q
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-275NWNNANOEIGYHG
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-CPH3WXMLRJ2BZFXW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-TPAZEM2ZI6GIP4H4
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -s 192.168.2.230/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TPAZEM2ZI6GIP4H4 -p tcp -m tcp -j DNAT --to-destination 192.168.2.230:6443
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-7S77XOJGOAF6ON4P
-A KUBE-SVC-QXMBXH4RFEQTDMUZ -j KUBE-SEP-GE6BLW5CUF74UDN2
-A KUBE-SEP-7S77XOJGOAF6ON4P -s 10.244.1.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-7S77XOJGOAF6ON4P -p tcp -m tcp -j DNAT --to-destination 10.244.1.3:8080
-A KUBE-SEP-GE6BLW5CUF74UDN2 -s 10.244.3.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-GE6BLW5CUF74UDN2 -p tcp -m tcp -j DNAT --to-destination 10.244.3.3:8080
-A KUBE-SEP-IRMT6RY5EEEBXDAY -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-IRMT6RY5EEEBXDAY -p udp -m udp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-2Z3537XSN3RJRU3M -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2Z3537XSN3RJRU3M -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:53
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-A4UL7OUXQPUR7Y7Q -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-275NWNNANOEIGYHG -s 10.244.0.6/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-275NWNNANOEIGYHG -p tcp -m tcp -j DNAT --to-destination 10.244.0.6:9153
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -s 10.244.0.7/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CPH3WXMLRJ2BZFXW -p tcp -m tcp -j DNAT --to-destination 10.244.0.7:9153
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-FORWARD - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m mark --mark 0x8000/0x8000 -m comment --comment "kubernetes firewall for dropping marked packets" -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.244.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Generated by xtables-save v1.8.2 on Sun Jan 19 16:11:44 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Sun Jan 19 16:11:44 2020
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them

I also checked "Nodeport only works on Pod Host" and "NodePort only responding on node where pod is running", but still without success.
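For what it's worth, the iptables dumps above show that kube-proxy programmed the same NodePort chain on every node (the `--dport 31118` rules and both DNAT endpoints appear on 230, 231 and 232 alike), which points at pod-to-pod networking rather than missing rules. A small sketch to run the same check against a saved dump (the chain prefix and port are taken from the output above):

```shell
#!/usr/bin/env bash
# check_nodeport_rules DUMPFILE PORT: verify that an iptables-save dump
# contains the kube-proxy NodePort jump for PORT, and count the DNAT
# endpoint rules behind it.
check_nodeport_rules() {
  local dump=$1 port=$2
  if ! grep -q -- "--dport ${port} -j KUBE-SVC" "$dump"; then
    echo "no NodePort rule for ${port}"
    return 1
  fi
  echo "endpoints: $(grep -c -- '-j DNAT --to-destination' "$dump")"
}

# Usage on each node:
#   sudo iptables-save -t nat > /tmp/nat.dump
#   check_nodeport_rules /tmp/nat.dump 31118
```

If this passes on a node where curl still times out, the DNAT is happening but the rewritten packet never reaches the pod subnet, i.e. the overlay (flannel VXLAN, UDP 8472) is the next thing to inspect.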

Kevin Etore
  • the iptables output is for which node? – Arghya Sadhu Jan 19 '20 at 15:51
  • @ArghyaSadhu I've updated my question with more details. 230 is my master node, 231 runs **no** pod, 232 runs a pod with the container. – Kevin Etore Jan 19 '20 at 16:16
  • Have you tried `iptables -A FORWARD -j ACCEPT` from this [answer](https://stackoverflow.com/questions/46667659) – EngSabry Jan 23 '20 at 07:31
  • @EngSabry I tried the command on all my nodes, the same result. – Kevin Etore Jan 23 '20 at 19:39
  • @KevinEtore Can you check that it's not a firewall issue? Just use netcat to check whether you can connect to TCP ports on node232 from node231. – Manish Dash Feb 03 '20 at 11:13
  • @KevinEtore any progress with this? I think I'm facing the same issue. See https://github.com/kubernetes/kubernetes/issues/89831 and https://github.com/kubernetes/kubernetes/issues/89632 – whites11 Apr 04 '20 at 09:20
  • @whites11 I did not resolve the issue. Haven't looked at the cluster for a while. But please do let me know if you found a solution! :) – Kevin Etore Apr 09 '20 at 13:33
  • @KevinEtore I solved my issue, but it was my error and definitely not your case, sorry I can't help – whites11 Apr 09 '20 at 14:33

2 Answers


If you're running on a cloud provider, you may need to open a firewall rule for the NodePort listed in your post (31118) on the nodes.

If the problem still remains, there could be an issue with the pod networking. It is difficult to identify the root cause without access to the cluster, but the posts below might be helpful.

https://github.com/kubernetes/kubernetes/issues/58908
https://github.com/kubernetes/kubernetes/issues/70222
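Another thing visible in the dumps in the question: the filter table's FORWARD chain policy is DROP (`:FORWARD DROP [0:0]`), which Docker sets by default since 17.06. Cross-node NodePort traffic then depends entirely on the KUBE-FORWARD/CNI ACCEPT rules. A sketch to extract that policy from an `iptables-save` dump:

```shell
#!/usr/bin/env bash
# forward_policy DUMPFILE: print the FORWARD chain's default policy
# (ACCEPT or DROP) from an iptables-save dump of the filter table.
forward_policy() {
  awk '/^:FORWARD / { print $2; exit }' "$1"
}

# Usage: sudo iptables-save -t filter > /tmp/filter.dump
#        forward_policy /tmp/filter.dump
# If it prints DROP, the blunt (non-persistent) workaround already
# mentioned in the comments is: sudo iptables -P FORWARD ACCEPT
```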

P Ekambaram

A better approach would be to use an Ingress rather than the iptables route, primarily because you will lose that configuration on node restarts/drains.
The easiest to maintain is the NGINX ingress controller. When you define the ingress, simply set the hostPort to the port you want it to listen on physically on the node, and map it to the containerPort, which is the port the container behind the service actually runs on (8080). Since it runs as a DaemonSet, it takes care of caching requests and acts as a load balancer between the nodes by default.

Anshul Verma
  • This does not answer the question. – Shashank V Jan 19 '20 at 13:50
  • I agree that the use of an `ingress` would be an overall better solution. But I would still like my current question fixed before I move to an `ingress` service. – Kevin Etore Jan 19 '20 at 14:24
  • @KevinEtore Try running `iptables -P FORWARD ACCEPT` (https://stackoverflow.com/questions/48442485/nodeport-services-not-available-on-all-nodes) on all the nodes and the service should then forward on each node. @ShashankV Let's be more open minded and try and help community adapt better methods from our experience, irrespective of whether it is the answer or not – Anshul Verma Jan 19 '20 at 18:14
  • @AnshulVerma Thanks for your response, I tried that a few hours ago, didn't help. – Kevin Etore Jan 19 '20 at 18:24
  • Can you verify the CNI and CoreDNS pod status? Are they repeatedly restarting, or fine? Can you provide the output of `kubectl get pods -n kube-system -o wide`? – Subramanian Manickam Feb 03 '20 at 02:52