
I am running into a very strange Kubernetes networking issue on a kubeadm installation with flannel. Could you please help?

I have 3 nodes: 1 master and 2 minion nodes, with 4 pods running.

List all nodes (a # column is added to simplify the description):

[root@snap460c04 ~]# kubectl get nodes
# NAME         STATUS         AGE
1 snap460c03   Ready          11h
2 snap460c04   Ready,master   11h
3 snap460c06   Ready          11h

List all pods (a # column is added to simplify the description):

[root@snap460c04 ~]# kubectl get pods -o wide -n eium1  
# NAME                    READY     STATUS    RESTARTS   AGE       IP               NODE        Node#
1 demo-1229769353-7gf70   1/1       Running   0          10h       192.168.2.4      snap460c03    1
2 demo-1229769353-93xwm   1/1       Running   0          10h       192.168.1.4      snap460c06    3
3 demo-1229769353-kxzs9   1/1       Running   0          10h       192.168.1.5      snap460c06    3
4 demo-1229769353-ljvtg   1/1       Running   0          10h       192.168.2.3      snap460c03    1

I ran 2 tests: one for node->pod connectivity and one for pod->pod connectivity.

In the node->pod test, the results were:

Test 1: Node => POD Test

From Node #1 (c03) => Why can it ping only the pods on the local node?

Ping POD #1: OK (ping 192.168.2.4)
Ping POD #2: NOK (ping 192.168.1.4)
Ping POD #3: NOK (ping 192.168.1.5)
Ping POD #4: OK (ping 192.168.2.3)

From Node #2 (c04) => All pods are remote; why can it not ping the pods on node #3?

Ping POD #1: OK (ping 192.168.2.4)
Ping POD #2: NOK (ping 192.168.1.4)
Ping POD #3: NOK (ping 192.168.1.5)
Ping POD #4: OK (ping 192.168.2.3)

From Node #3 (c06) => These results are as expected

Ping POD #1: OK (ping 192.168.2.4)
Ping POD #2: OK (ping 192.168.1.4)
Ping POD #3: OK (ping 192.168.1.5)
Ping POD #4: OK (ping 192.168.2.3)

Test 2: POD => POD => Why can a pod ping only the pods on its local node?

From POD #1 @ Node#1

Ping POD #1: OK  (kubectl -n eium1 exec demo-1229769353-7gf70   ping 192.168.2.4)
Ping POD #2: NOK (kubectl -n eium1 exec demo-1229769353-7gf70   ping 192.168.1.4)
Ping POD #3: NOK (kubectl -n eium1 exec demo-1229769353-7gf70   ping 192.168.1.5)
Ping POD #4: OK  (kubectl -n eium1 exec demo-1229769353-7gf70   ping 192.168.2.3)

From POD #2 @ Node#3

Ping POD #1: NOK (kubectl -n eium1 exec demo-1229769353-93xwm   ping 192.168.2.4)
Ping POD #2: OK  (kubectl -n eium1 exec demo-1229769353-93xwm   ping 192.168.1.4)
Ping POD #3: OK  (kubectl -n eium1 exec demo-1229769353-93xwm   ping 192.168.1.5)
Ping POD #4: NOK (kubectl -n eium1 exec demo-1229769353-93xwm   ping 192.168.2.3)

From POD #3 @ Node#3

Ping POD #1: NOK (kubectl -n eium1 exec demo-1229769353-kxzs9   ping 192.168.2.4)
Ping POD #2: OK  (kubectl -n eium1 exec demo-1229769353-kxzs9    ping 192.168.1.4)
Ping POD #3: OK  (kubectl -n eium1 exec demo-1229769353-kxzs9   ping 192.168.1.5)
Ping POD #4: NOK (kubectl -n eium1 exec demo-1229769353-kxzs9   ping 192.168.2.3)

From POD #4 @ Node#1

Ping POD #1: OK  (kubectl -n eium1 exec demo-1229769353-ljvtg   ping 192.168.2.4)
Ping POD #2: NOK (kubectl -n eium1 exec demo-1229769353-ljvtg   ping 192.168.1.4)
Ping POD #3: NOK (kubectl -n eium1 exec demo-1229769353-ljvtg   ping 192.168.1.5)
Ping POD #4: OK  (kubectl -n eium1 exec demo-1229769353-ljvtg   ping 192.168.2.3)
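
The full pod-to-pod matrix above can be reproduced with a small loop (a sketch using the pod names and IPs listed above, assuming the container image ships ping):

for pod in demo-1229769353-7gf70 demo-1229769353-93xwm demo-1229769353-kxzs9 demo-1229769353-ljvtg; do
  for ip in 192.168.2.4 192.168.1.4 192.168.1.5 192.168.2.3; do
    # -c 1: send a single probe; -W 2: give up after 2 seconds
    kubectl -n eium1 exec "$pod" -- ping -c 1 -W 2 "$ip" >/dev/null 2>&1 \
      && echo "$pod -> $ip: OK" || echo "$pod -> $ip: NOK"
  done
done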

Env information

K8s version

[root@snap460c04 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:57:05Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.0", GitCommit:"58b7c16a52c03e4a849874602be42ee71afdcab1", GitTreeState:"clean", BuildDate:"2016-12-12T23:31:15Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Flannel pods

[root@snap460c04 ~]# kubectl get pods -o wide 
NAME                    READY     STATUS    RESTARTS   AGE       IP               NODE
kube-flannel-ds-03w6l   2/2       Running   0          11h       15.114.116.126   snap460c04
kube-flannel-ds-fdgdh   2/2       Running   0          11h       15.114.116.125   snap460c03
kube-flannel-ds-xnzx3   2/2       Running   0          11h       15.114.116.128   snap460c06

System pods

[root@snap460c04 ~]# kubectl get pods -o wide -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE       IP               NODE
dummy-2088944543-kcj44                  1/1       Running   0          11h       15.114.116.126   snap460c04
etcd-snap460c04                         1/1       Running   19         11h       15.114.116.126   snap460c04
kube-apiserver-snap460c04               1/1       Running   0          11h       15.114.116.126   snap460c04
kube-controller-manager-snap460c04      1/1       Running   0          11h       15.114.116.126   snap460c04
kube-discovery-1769846148-5x4gr         1/1       Running   0          11h       15.114.116.126   snap460c04
kube-dns-2924299975-9tdl9               4/4       Running   0          11h       192.168.0.2      snap460c04
kube-proxy-7wtr4                        1/1       Running   0          11h       15.114.116.128   snap460c06
kube-proxy-j0h4g                        1/1       Running   0          11h       15.114.116.126   snap460c04
kube-proxy-knbrl                        1/1       Running   0          11h       15.114.116.125   snap460c03
kube-scheduler-snap460c04               1/1       Running   0          11h       15.114.116.126   snap460c04
kubernetes-dashboard-3203831700-1nw59   1/1       Running   0          10h       192.168.0.4      snap460c04

Flannel was installed following the guide:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
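
To double-check which subnet flannel leased to each node, the subnet.env file that flannel writes on every host can be inspected (standard flannel behaviour; the sample values below are inferred from the outputs in this post):

cat /run/flannel/subnet.env
# expected to show something like (on c03):
# FLANNEL_NETWORK=192.168.0.0/16
# FLANNEL_SUBNET=192.168.2.1/24
# FLANNEL_MTU=1450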

Network information for node 1 (c03)

[root@snap460c03 ~]# iptables-save
# Generated by iptables-save v1.4.21 on Wed Feb 22 18:01:12 2017
*nat
:PREROUTING ACCEPT [1:78]
:INPUT ACCEPT [1:78]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-2MEKZI7PJUEHR67T - [0:0]
:KUBE-SEP-3OT3I6HGM4K7SHGI - [0:0]
:KUBE-SEP-6TVSO4B75FMUOZPV - [0:0]
:KUBE-SEP-6YIOEPRBG6LZYDNQ - [0:0]
:KUBE-SEP-A6J4YW3AMR2ZVZMA - [0:0]
:KUBE-SEP-DBP5C3QJN36XNYPX - [0:0]
:KUBE-SEP-ES7Q53Y6P2YLIO4O - [0:0]
:KUBE-SEP-FWJIKOY3NRVP7HUX - [0:0]
:KUBE-SEP-JTN4UBVS7OG5RONX - [0:0]
:KUBE-SEP-PNOYUP2SIIHRG34N - [0:0]
:KUBE-SEP-UPZX2EM3TRFH2ASL - [0:0]
:KUBE-SEP-X7MGMJMV5H5T4NJN - [0:0]
:KUBE-SEP-ZZLC6ELJT43VDXYQ - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-5J5TVDDOSFKU7A7D - [0:0]
:KUBE-SVC-5RKFNKIUXDFI3AVK - [0:0]
:KUBE-SVC-EP4VGANCYXDST444 - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-KOBH2JYY2L2SF2XK - [0:0]
:KUBE-SVC-NGBEVGRJNPASKNGR - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-OL65KRZ5QEUS2RPN - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.87/32 -d 172.17.0.87/32 -p tcp -m tcp --dport 8158 -j MASQUERADE
-A POSTROUTING -s 172.17.0.87/32 -d 172.17.0.87/32 -p tcp -m tcp --dport 8159 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:db" -m tcp --dport 30156 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:db" -m tcp --dport 30156 -j KUBE-SVC-5RKFNKIUXDFI3AVK
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 32180 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 32180 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:repo" -m tcp --dport 30157 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:repo" -m tcp --dport 30157 -j KUBE-SVC-OL65KRZ5QEUS2RPN
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:ior" -m tcp --dport 30158 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:ior" -m tcp --dport 30158 -j KUBE-SVC-KOBH2JYY2L2SF2XK
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:web" -m tcp --dport 30159 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:web" -m tcp --dport 30159 -j KUBE-SVC-NGBEVGRJNPASKNGR
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:vnc" -m tcp --dport 30160 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "eium1/ems:vnc" -m tcp --dport 30160 -j KUBE-SVC-5J5TVDDOSFKU7A7D
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2MEKZI7PJUEHR67T -s 192.168.0.4/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-2MEKZI7PJUEHR67T -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 192.168.0.4:9090
-A KUBE-SEP-3OT3I6HGM4K7SHGI -s 192.168.1.5/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ
-A KUBE-SEP-3OT3I6HGM4K7SHGI -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.1.5:9901
-A KUBE-SEP-6TVSO4B75FMUOZPV -s 15.114.116.128/32 -m comment --comment "eium1/ems:db" -j KUBE-MARK-MASQ
-A KUBE-SEP-6TVSO4B75FMUOZPV -p tcp -m comment --comment "eium1/ems:db" -m tcp -j DNAT --to-destination 15.114.116.128:3306
-A KUBE-SEP-6YIOEPRBG6LZYDNQ -s 15.114.116.128/32 -m comment --comment "eium1/ems:vnc" -j KUBE-MARK-MASQ
-A KUBE-SEP-6YIOEPRBG6LZYDNQ -p tcp -m comment --comment "eium1/ems:vnc" -m tcp -j DNAT --to-destination 15.114.116.128:5911
-A KUBE-SEP-A6J4YW3AMR2ZVZMA -s 192.168.1.4/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ
-A KUBE-SEP-A6J4YW3AMR2ZVZMA -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.1.4:9901
-A KUBE-SEP-DBP5C3QJN36XNYPX -s 15.114.116.128/32 -m comment --comment "eium1/ems:ior" -j KUBE-MARK-MASQ
-A KUBE-SEP-DBP5C3QJN36XNYPX -p tcp -m comment --comment "eium1/ems:ior" -m tcp -j DNAT --to-destination 15.114.116.128:8158
-A KUBE-SEP-ES7Q53Y6P2YLIO4O -s 15.114.116.128/32 -m comment --comment "eium1/ems:web" -j KUBE-MARK-MASQ
-A KUBE-SEP-ES7Q53Y6P2YLIO4O -p tcp -m comment --comment "eium1/ems:web" -m tcp -j DNAT --to-destination 15.114.116.128:8159
-A KUBE-SEP-FWJIKOY3NRVP7HUX -s 15.114.116.126/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-FWJIKOY3NRVP7HUX -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-FWJIKOY3NRVP7HUX --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 15.114.116.126:6443
-A KUBE-SEP-JTN4UBVS7OG5RONX -s 192.168.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-JTN4UBVS7OG5RONX -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 192.168.0.2:53
-A KUBE-SEP-PNOYUP2SIIHRG34N -s 192.168.2.4/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ
-A KUBE-SEP-PNOYUP2SIIHRG34N -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.2.4:9901
-A KUBE-SEP-UPZX2EM3TRFH2ASL -s 192.168.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-UPZX2EM3TRFH2ASL -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 192.168.0.2:53
-A KUBE-SEP-X7MGMJMV5H5T4NJN -s 192.168.2.3/32 -m comment --comment "eium1/demo:ro" -j KUBE-MARK-MASQ
-A KUBE-SEP-X7MGMJMV5H5T4NJN -p tcp -m comment --comment "eium1/demo:ro" -m tcp -j DNAT --to-destination 192.168.2.3:9901
-A KUBE-SEP-ZZLC6ELJT43VDXYQ -s 15.114.116.128/32 -m comment --comment "eium1/ems:repo" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZZLC6ELJT43VDXYQ -p tcp -m comment --comment "eium1/ems:repo" -m tcp -j DNAT --to-destination 15.114.116.128:8300
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:db cluster IP" -m tcp --dport 3306 -j KUBE-SVC-5RKFNKIUXDFI3AVK
-A KUBE-SERVICES -d 10.102.162.2/32 -p tcp -m comment --comment "eium1/demo:ro cluster IP" -m tcp --dport 9901 -j KUBE-SVC-EP4VGANCYXDST444
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.108.36.183/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:repo cluster IP" -m tcp --dport 8300 -j KUBE-SVC-OL65KRZ5QEUS2RPN
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:ior cluster IP" -m tcp --dport 8158 -j KUBE-SVC-KOBH2JYY2L2SF2XK
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:web cluster IP" -m tcp --dport 8159 -j KUBE-SVC-NGBEVGRJNPASKNGR
-A KUBE-SERVICES -d 10.110.146.207/32 -p tcp -m comment --comment "eium1/ems:vnc cluster IP" -m tcp --dport 5911 -j KUBE-SVC-5J5TVDDOSFKU7A7D
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-5J5TVDDOSFKU7A7D -m comment --comment "eium1/ems:vnc" -j KUBE-SEP-6YIOEPRBG6LZYDNQ
-A KUBE-SVC-5RKFNKIUXDFI3AVK -m comment --comment "eium1/ems:db" -j KUBE-SEP-6TVSO4B75FMUOZPV
-A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-A6J4YW3AMR2ZVZMA
-A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-3OT3I6HGM4K7SHGI
-A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X7MGMJMV5H5T4NJN
-A KUBE-SVC-EP4VGANCYXDST444 -m comment --comment "eium1/demo:ro" -j KUBE-SEP-PNOYUP2SIIHRG34N
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-UPZX2EM3TRFH2ASL
-A KUBE-SVC-KOBH2JYY2L2SF2XK -m comment --comment "eium1/ems:ior" -j KUBE-SEP-DBP5C3QJN36XNYPX
-A KUBE-SVC-NGBEVGRJNPASKNGR -m comment --comment "eium1/ems:web" -j KUBE-SEP-ES7Q53Y6P2YLIO4O
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-FWJIKOY3NRVP7HUX --mask 255.255.255.255 --rsource -j KUBE-SEP-FWJIKOY3NRVP7HUX
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-FWJIKOY3NRVP7HUX
-A KUBE-SVC-OL65KRZ5QEUS2RPN -m comment --comment "eium1/ems:repo" -j KUBE-SEP-ZZLC6ELJT43VDXYQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-JTN4UBVS7OG5RONX
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-SEP-2MEKZI7PJUEHR67T
COMMIT
# Completed on Wed Feb 22 18:01:12 2017
# Generated by iptables-save v1.4.21 on Wed Feb 22 18:01:12 2017
*filter
:INPUT ACCEPT [147:180978]
:FORWARD ACCEPT [16:1344]
:OUTPUT ACCEPT [20:11774]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
COMMIT
# Completed on Wed Feb 22 18:01:12 2017

[root@snap460c03 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 98:4b:e1:62:14:00 brd ff:ff:ff:ff:ff:ff
    inet 15.114.116.125/22 brd 15.114.119.255 scope global enp2s0f0
       valid_lft forever preferred_lft forever
    inet6 2002:109d:45fd:b:9a4b:e1ff:fe62:1400/64 scope global dynamic 
       valid_lft 6703sec preferred_lft 1303sec
    inet6 fec0::b:9a4b:e1ff:fe62:1400/64 scope site dynamic 
       valid_lft 6703sec preferred_lft 1303sec
    inet6 fe80::9a4b:e1ff:fe62:1400/64 scope link 
       valid_lft forever preferred_lft forever
3: enp2s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 98:4b:e1:62:14:04 brd ff:ff:ff:ff:ff:ff
4: enp2s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 98:4b:e1:62:14:01 brd ff:ff:ff:ff:ff:ff
5: enp2s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 98:4b:e1:62:14:05 brd ff:ff:ff:ff:ff:ff
6: enp2s0f4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 98:4b:e1:62:14:02 brd ff:ff:ff:ff:ff:ff
7: enp2s0f5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 98:4b:e1:62:14:06 brd ff:ff:ff:ff:ff:ff
8: enp2s0f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 98:4b:e1:62:14:03 brd ff:ff:ff:ff:ff:ff
9: enp2s0f7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 98:4b:e1:62:14:07 brd ff:ff:ff:ff:ff:ff
10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::5484:7aff:fefe:9799/64 scope link 
       valid_lft forever preferred_lft forever
1822: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP 
    link/ether 0a:58:c0:a8:02:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:c0ff:fea8:201/64 scope link 
       valid_lft forever preferred_lft forever
1824: veth6c162dff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP 
    link/ether 36:2b:f9:cf:1d:aa brd ff:ff:ff:ff:ff:ff
    inet6 fe80::342b:f9ff:fecf:1daa/64 scope link 
       valid_lft forever preferred_lft forever
1825: veth34ca824a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP 
    link/ether ae:79:85:50:0b:da brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ac79:85ff:fe50:bda/64 scope link 
       valid_lft forever preferred_lft forever
916: vethab43ed7: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether da:5a:e9:f2:6b:0a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d85a:e9ff:fef2:6b0a/64 scope link 
       valid_lft forever preferred_lft forever
918: veth1bbb133: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP 
    link/ether 56:3c:47:e1:5a:c0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::543c:47ff:fee1:5ac0/64 scope link 
       valid_lft forever preferred_lft forever
921: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 8e:8a:81:67:a6:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::8c8a:81ff:fe67:a692/64 scope link 
       valid_lft forever preferred_lft forever
[root@snap460c03 ~]# 
[root@snap460c03 ~]# ip -s link 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    RX: bytes  packets  errors  dropped overrun mcast   
    1652461277938 1256303971 0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    1652461277938 1256303971 0       0       0       0      
2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 98:4b:e1:62:14:00 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    273943938058 464934981 0       4475811 0       4994783
    TX: bytes  packets  errors  dropped carrier collsns 
    112001303439 313492490 0       0       0       0      
3: enp2s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
    link/ether 98:4b:e1:62:14:04 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0      
4: enp2s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
    link/ether 98:4b:e1:62:14:01 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0      
5: enp2s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
    link/ether 98:4b:e1:62:14:05 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0      
6: enp2s0f4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
    link/ether 98:4b:e1:62:14:02 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0      
7: enp2s0f5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
    link/ether 98:4b:e1:62:14:06 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0      
8: enp2s0f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
    link/ether 98:4b:e1:62:14:03 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0      
9: enp2s0f7: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DEFAULT qlen 1000
    link/ether 98:4b:e1:62:14:07 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    0          0        0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    0          0        0       0       0       0      
10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    326660431  4780762  0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    3574619827 5529921  0       0       0       0      
1822: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT 
    link/ether 0a:58:c0:a8:02:01 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    12473828   150176   0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    116444     2577     0       0       0       0      
1824: veth6c162dff: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT 
    link/ether 36:2b:f9:cf:1d:aa brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    14089148   145722   0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    7131026    74713    0       0       0       0      
1825: veth34ca824a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT 
    link/ether ae:79:85:50:0b:da brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    14647882   151667   0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    7149198    75141    0       0       0       0      
916: vethab43ed7: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT 
    link/ether da:5a:e9:f2:6b:0a brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    66752218   734347   0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    43439443   733394   0       0       0       0      
918: veth1bbb133: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT 
    link/ether 56:3c:47:e1:5a:c0 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    66755200   734343   0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    43434663   733264   0       0       0       0      
921: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT 
    link/ether 8e:8a:81:67:a6:92 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast   
    82703829917 59818766 0       0       0       0      
    TX: bytes  packets  errors  dropped carrier collsns 
    940475554  9717823  0       898     0       0      
[root@snap460c03 ~]# 
[root@snap460c03 ~]# ip route
default via 15.114.116.1 dev enp2s0f0  proto static  metric 100 
15.114.116.0/22 dev enp2s0f0  proto kernel  scope link  src 15.114.116.125  metric 100 
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.42.1 
192.168.0.0/16 dev flannel.1 
192.168.2.0/24 dev cni0  proto kernel  scope link  src 192.168.2.1 
– Bo Wang

2 Answers


What parameters did you provide for kubeadm?

If you want to use flannel as the pod network, specify --pod-network-cidr 10.244.0.0/16 when running kubeadm init (assuming you are using the DaemonSet manifest linked above). Note that this flag is only required for flannel, not for other pod networks.
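
For example, a sketch of initialising the master with the CIDR the stock manifest expects (existing nodes would have to be reset and re-joined afterwards):

kubeadm init --pod-network-cidr=10.244.0.0/16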

Execute these commands on every node:

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1
sysctl -w net.bridge.bridge-nf-call-iptables=1
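
To make these settings persist across reboots, they can also be written to a sysctl drop-in (a sketch; the file name is arbitrary):

cat <<'EOF' > /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system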
– Janos Lenart
  • Thanks Janos. I tried it, but it still does not work. I used the following command to init the master node: kubeadm init --pod-network-cidr=192.168.0.0/16. Is that a problem? – Bo Wang Feb 22 '17 at 09:54
  • Yes, that can be a problem for 2 reasons: [kube-flannel.yml](https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml) references 10.244.0.0/16 in a ConfigMap, so you need to adjust that too (see the snippet after these comments). Also, 192.168.0.0/16 might intersect with your own LAN, but again, it might be okay. – Janos Lenart Feb 22 '17 at 10:04
  • I tried updating kube-flannel.yml with 192.168.0.0/16, recreated kube-flannel-ds, and then recreated all of my pods, but the result is the same. 192.168.0.0/16 does not intersect with my LAN, as you can see in the network information output above. Could you please check that output? – Bo Wang Feb 22 '17 at 10:46
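
For reference, this is the section of the upstream kube-flannel.yml ConfigMap that must match the --pod-network-cidr value passed to kubeadm init:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }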

I had a similar issue where the flanneld service was active but I couldn't ping pods located on another node. I found out that the firewall on the nodes was up (firewalld.service), so I turned it off, and I was then able to ping pods regardless of which node I was on. Hope this helps.
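
What I ran on each node (a sketch for RHEL/CentOS; alternatively, flannel's VXLAN traffic, which uses UDP port 8472 by default, can be allowed through instead of disabling the firewall entirely):

systemctl stop firewalld
systemctl disable firewalld

# or, to keep firewalld running and just allow flannel's VXLAN traffic:
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload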

– mit13