
What I have is

  • Kubernetes v1.1.2
  • iptables v1.4.21
  • kernel 3.10.0-327.3.1.el7.x86_64 (CentOS 7)
  • networking via flannel (udp backend)
  • no cloud provider

What I do

I have enabled iptables mode with the --proxy_mode=iptables argument on kube-proxy, and then checked the resulting iptables NAT rules:
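(A rough sketch of the setup; the kube-proxy flags other than --proxy_mode are illustrative assumptions, and the listing below was presumably produced with plain iptables:)

# kube-proxy in iptables mode (apiserver address taken from the DNAT rule below; other flags assumed)
kube-proxy --master=https://10.62.66.254:9443 --proxy_mode=iptables

# dump the NAT table that kube-proxy programs (name resolution left on, as in the listing below)
iptables -t nat -L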

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             /* kubernetes service portals */
DOCKER     all  --  anywhere            !loopback/8           ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  SIDR26KUBEAPMORANGE-005/26  anywhere
MASQUERADE  all  --  172.17.0.0/16        anywhere
MASQUERADE  all  --  anywhere             anywhere             /* kubernetes service traffic requiring SNAT */ mark match 0x4d415351

Chain DOCKER (2 references)
target     prot opt source               destination

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination

Chain KUBE-SEP-3SX6E5663KCZDTLC (1 references)
target     prot opt source               destination
MARK       all  --  172.20.10.130        anywhere             /* default/nc-service: */ MARK set 0x4d415351
DNAT       tcp  --  anywhere             anywhere             /* default/nc-service: */ tcp to:172.20.10.130:9000

Chain KUBE-SEP-Q4LJF4YJE6VUB3Y2 (1 references)
target     prot opt source               destination
MARK       all  --  SIDR26KUBEAPMORANGE-001.serviceengage.com  anywhere             /* default/kubernetes: */ MARK set 0x4d415351
DNAT       tcp  --  anywhere             anywhere             /* default/kubernetes: */ tcp to:10.62.66.254:9443

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
KUBE-SVC-6N4SJQIF3IX3FORG  tcp  --  anywhere             172.21.0.1           /* default/kubernetes: cluster IP */ tcp dpt:https
KUBE-SVC-362XK5X6TGXLXGID  tcp  --  anywhere             172.21.145.28        /* default/nc-service: cluster IP */ tcp dpt:commplex-main
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-362XK5X6TGXLXGID (1 references)
target     prot opt source               destination
KUBE-SEP-3SX6E5663KCZDTLC  all  --  anywhere             anywhere             /* default/nc-service: */

Chain KUBE-SVC-6N4SJQIF3IX3FORG (1 references)
target     prot opt source               destination
KUBE-SEP-Q4LJF4YJE6VUB3Y2  all  --  anywhere             anywhere             /* default/kubernetes: */

When I make an nc request to the service IP from another machine (10.116.0.2 in my case), I get a timeout:

nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
hello
Ncat: Connection timed out.

whereas when I connect directly to the pod endpoint 172.20.10.130:9000, it works fine:

nc -v 172.20.10.130 9000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.20.10.130:9000.
hello
yes

From the dmesg log, I can see

[10153.318195] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318282] DBG@OUTPUT: IN= OUT=eth0 SRC=10.62.66.223 DST=172.21.145.28 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=5000 WINDOW=29200 RES=0x00 SYN URGP=0
[10153.318374] DBG@POSTROUTING: IN= OUT=flannel0 SRC=10.62.66.223 DST=172.20.10.130 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=62466 DF PROTO=TCP SPT=59075 DPT=9000 WINDOW=29200 RES=0x00 SYN URGP=0
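(To check whether the DNATed SYN actually reaches the other node and whether any replies come back, something like the following can be run; the interface names are assumptions based on the setup above:)

# on the client node: does the rewritten SYN leave via the overlay?
tcpdump -ni flannel0 'tcp port 9000'

# on the node running the pod: does it arrive, and are SYN/ACKs sent back?
tcpdump -ni docker0 'tcp port 9000'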

And I found that if I'm on the machine where the Pod is running, I can successfully connect through the service IP:

nc -v 172.21.145.28 5000
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 172.21.145.28:5000.
hello
yes

I am wondering why this happens and how to fix it.

che yang
  • Do I also need to add --masquerade-all=true? – che yang Jan 15 '16 at 18:47
  • First please try these steps to debug your service: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/debugging-services.md. Then note that *something* needs to install masquerade rules. If you tell Kubernetes to control the container bridge, it will do so, but this is probably not going to happen if you're running flannel without --ip-masq. Please respond once you've done this, with what exactly is failing from the debugging-services doc. – Prashanth B Jan 16 '16 at 00:18

2 Answers

I met exactly the same issue on Kubernetes 1.1.7 and 1.2.0. I start flannel without --ip-masq and add the --masquerade-all=true parameter to kube-proxy; that helps.
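For illustration, the relevant flags look roughly like this (the flannel etcd endpoint is just a placeholder):

# flannel started without --ip-masq, so flannel itself installs no MASQUERADE rule
flanneld --etcd-endpoints=http://127.0.0.1:2379

# kube-proxy then masquerades all traffic going through service IPs
kube-proxy --proxy-mode=iptables --masquerade-all=true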

openxxs

According to "kube-proxy in iptables mode is not working", you might have to add a route directing your service IPs to the docker bridge.
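A minimal sketch of such a route, assuming the service cluster IP range is 172.21.0.0/16 (consistent with the cluster IPs in the question) and the container bridge is docker0:

# send traffic for the service network to the local docker bridge
ip route add 172.21.0.0/16 dev docker0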

Heinzi