
I've installed Kubernetes 1.2.0 with the following configuration:

export nodes="user@10.0.0.30 user@10.0.0.32"
export role="ai i"
export NUM_NODES=2
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
export KUBE_PROXY_EXTRA_OPTS="--proxy-mode=iptables"

I've created an nginx pod and exposed it with a load balancer and an external IP address:

kubectl expose pod my-nginx-3800858182-6qhap --external-ip=10.0.0.50 --port=80 --target-port=80

I'm using Kubernetes on bare metal, so I've assigned the 10.0.0.50 IP to the master node.

If I curl 10.0.0.50 (from outside Kubernetes) and run tcpdump on the nginx pod, I see traffic, but the source IP is always the Kubernetes master node:

17:30:55.470230 IP 172.16.60.1.43030 > 172.16.60.2.80: ...
17:30:55.470343 IP 172.16.60.2.80 > 172.16.60.1.43030: ...

I'm using --proxy-mode=iptables and need to get the actual source IP. What am I doing wrong?

Jose

2 Answers


You're not doing anything wrong, unfortunately. It's an artifact of how packets are proxied from the machine that receives them to the destination container.

There's been a bunch of discussion around the problem in a very long Github issue, but no solution has been found yet other than running your front-end load balancer outside of the Kubernetes cluster (like using a cloud load balancer, which attaches the X-FORWARDED-FOR header).
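If you want to see this artifact for yourself on a node, the source-IP rewrite comes from the NAT rules kube-proxy installs in iptables mode. A sketch of how to inspect them (chain names are kube-proxy's standard ones; output will vary by cluster):

```shell
# List the NAT postrouting rules kube-proxy installs. The MASQUERADE
# rule in KUBE-POSTROUTING is what replaces the client's source IP
# with the proxying node's own address.
sudo iptables -t nat -L KUBE-POSTROUTING -n -v

# Service-bound packets are tagged in KUBE-MARK-MASQ before being
# masqueraded on the way out to the pod network.
sudo iptables -t nat -L KUBE-MARK-MASQ -n -v
```

These commands need to run on a cluster node with root access; they only inspect state and change nothing.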

Alex Robinson
  • For example, in AWS with an Elastic Load Balancer? – Jose May 02 '16 at 19:14
  • If you use the Ingress resource for managing HTTP load balancers, which apparently isn't quite ready yet for AWS: https://github.com/kubernetes/contrib/issues/346 – Alex Robinson May 02 '16 at 21:59

This was added as an annotation in Kubernetes 1.5 (docs here).

In 1.7, it has graduated to GA, so you can specify the load balancing policy on a Service with the spec.externalTrafficPolicy field (docs here):

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "example-service"
  },
  "spec": {
    "ports": [{
      "port": 8765,
      "targetPort": 9376
    }],
    "selector": {
      "app": "example"
    },
    "type": "LoadBalancer",
    "externalTrafficPolicy": "Local"
  }
}
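If the Service already exists, you don't have to re-apply the whole manifest; the field can be set in place. A sketch, assuming the Service name example-service from the manifest above:

```shell
# Switch an existing Service to the Local policy so the client
# source IP is preserved end-to-end.
kubectl patch svc example-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Verify the field took effect.
kubectl get svc example-service -o jsonpath='{.spec.externalTrafficPolicy}'
```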
Symmetric
  • I'm using 1.7.3, but hit a timeout issue when externalTrafficPolicy is set to Local. Could you please kindly take a look at my question [here](https://stackoverflow.com/questions/47345327/why-unable-to-access-a-service-if-setting-externaltrafficpolicy-to-local-in-a-ku)? – Tan Jinfu Nov 17 '17 at 09:39
  • @TanJinfu When externalTrafficPolicy is set to Local, the request is forwarded to a pod only if one is available on the same worker node. So you have to make sure the pod runs on every worker node, using node affinity, to avoid timeout errors. – Arunagiriswaran Ezhilan Nov 24 '19 at 16:00
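One way to check for the situation described in the comment above: with externalTrafficPolicy: Local, only nodes that actually run a ready pod for the Service will answer, so it helps to verify where the pods landed. A sketch, assuming the label app=example from the example manifest:

```shell
# Show which nodes the Service's pods are scheduled on; under the
# Local policy, nodes without a ready pod will drop the traffic.
kubectl get pods -l app=example -o wide

# The endpoints list shows which pod IPs the Service can reach.
kubectl get endpoints example-service
```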