Configuring the istio-gateway with a service will create a kubernetes service with the given port configuration, which (as already mentioned in a different answer) isn't an istio concept but a kubernetes one. So we need to take a look at the underlying kubernetes mechanisms.

The service that will be created is of type LoadBalancer by default. In addition, your cloud provider will create an external LoadBalancer that forwards traffic arriving on one of its ports to the cluster.
```
$ kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
istio-ingressgateway   LoadBalancer   10.107.158.5   1.2.3.4       15021:32562/TCP,80:30980/TCP,443:32013/TCP
```
You can see the internal ip of the service as well as the external ip of the external loadbalancer, and in the `PORT(S)` column that, for example, your port `80` is mapped to node port `30980`. Behind the scenes kube-proxy takes your config and configures a bunch of iptables chains to set up routing of traffic to the ingress-gateway pods.
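To keep the three different port numbers per entry apart, here is a sketch of the relevant port section of the generated service (not a complete manifest; the values are taken from the output above, and `targetPort: 8080`/`8443` are the defaults the gateway pods listen on, as shown further down):

```yaml
# Sketch of the generated istio-ingressgateway service's port section
# (values from the kubectl output above; not a complete manifest).
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
    - name: status-port
      port: 15021        # service port (cluster ip / loadbalancer)
      nodePort: 32562    # port opened on every node
      targetPort: 15021  # port the gateway pod listens on
    - name: http2
      port: 80
      nodePort: 30980
      targetPort: 8080
    - name: https
      port: 443
      nodePort: 32013
      targetPort: 8443
```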
If you have access to a kubernetes host you can investigate those using the iptables command. First start with the `KUBE-SERVICES` chain.
```
$ iptables -t nat -nL KUBE-SERVICES | grep ingressgateway
target                     prot opt source     destination
KUBE-SVC-TFRZ6Y6WOLX5SOWZ  tcp  --  0.0.0.0/0  10.107.158.5  /* istio-system/istio-ingressgateway:status-port cluster IP */ tcp dpt:15021
KUBE-FW-TFRZ6Y6WOLX5SOWZ   tcp  --  0.0.0.0/0  1.2.3.4       /* istio-system/istio-ingressgateway:status-port loadbalancer IP */ tcp dpt:15021
KUBE-SVC-G6D3V5KS3PXPUEDS  tcp  --  0.0.0.0/0  10.107.158.5  /* istio-system/istio-ingressgateway:http2 cluster IP */ tcp dpt:80
KUBE-FW-G6D3V5KS3PXPUEDS   tcp  --  0.0.0.0/0  1.2.3.4       /* istio-system/istio-ingressgateway:http2 loadbalancer IP */ tcp dpt:80
KUBE-SVC-7N6LHPYFOVFT454K  tcp  --  0.0.0.0/0  10.107.158.5  /* istio-system/istio-ingressgateway:https cluster IP */ tcp dpt:443
KUBE-FW-7N6LHPYFOVFT454K   tcp  --  0.0.0.0/0  1.2.3.4       /* istio-system/istio-ingressgateway:https loadbalancer IP */ tcp dpt:443
```
You'll see that there are basically six chains, two for each port you defined: `80`, `443` and `15021` (on the far right). The `KUBE-SVC-*` chains are for cluster-internal traffic, the `KUBE-FW-*` chains for cluster-external traffic. If you take a closer look you can see that the destination is the (external|internal) ip and one of the ports. So the traffic arriving on the node's network interface is, for example, destined for 1.2.3.4:80. You can now follow down that chain, in my case `KUBE-FW-G6D3V5KS3PXPUEDS`:
```
$ iptables -t nat -nL KUBE-FW-G6D3V5KS3PXPUEDS | grep KUBE-SVC
target                     prot opt source     destination
KUBE-SVC-LBUWNFSUU3FNPZ7L  all  --  0.0.0.0/0  0.0.0.0/0  /* istio-system/istio-ingressgateway:http2 loadbalancer IP */
```
Follow that one as well:
```
$ iptables -t nat -nL KUBE-SVC-LBUWNFSUU3FNPZ7L | grep KUBE-SEP
target                     prot opt source     destination
KUBE-SEP-RZL3ZLWSG2M7ZJYD  all  --  0.0.0.0/0  0.0.0.0/0  /* istio-system/istio-ingressgateway:http2 */ statistic mode random probability 0.50000000000
KUBE-SEP-F7W3YTTYPP5NEPJ7  all  --  0.0.0.0/0  0.0.0.0/0  /* istio-system/istio-ingressgateway:http2 */
```
Here you see the service endpoints, which are load balanced 50:50 via the `statistic mode random` rule, and finally (choosing one of them):
```
$ iptables -t nat -nL KUBE-SEP-RZL3ZLWSG2M7ZJYD | grep DNAT
target  prot opt source     destination
DNAT    tcp  --  0.0.0.0/0  0.0.0.0/0  /* istio-system/istio-ingressgateway:http2 */ tcp to:172.17.0.4:8080
```
Here the packets end up being DNATed to 172.17.0.4:8080, i.e. the ip of one of the istio-ingressgateway pods and port 8080.
If you don't have access to a host or don't run in a public cloud environment, you won't have an external loadbalancer, so you won't find any `KUBE-FW-*` chains (and the `EXTERNAL-IP` of the service will stay `<pending>`). In that case you would use `<nodeip>:<nodeport>` to access the cluster from outside, for which iptables chains are also created. Run `iptables -t nat -nL KUBE-NODEPORTS | grep istio-ingressgateway`, which will also show you three `KUBE-SVC-*` chains that you can follow down to the DNAT as shown above.
So the targetport (like `8080`) is used to configure networking in kubernetes, and istio also uses it to define which ports the ingressgateway pods bind to. You can `kubectl describe pod <istio-ingressgateway-pod>`, where you'll find the defined ports (`8080` and `8443`) as container ports. Change them to whatever you like (above 1024) and they will change accordingly. Next you would apply a `Gateway` with `spec.servers` where you define those ports `8080,8443` to configure envoy (= istio-ingressgateway, the one you pick with `spec.selector`) to listen on those ports, and a `VirtualService` to define how to handle the received requests. See also my other answer on that topic.
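As a rough illustration of what that could look like (a sketch only: the app name `my-app`, the host `my-app.example.com` and the namespaces are made up for this example, and the port numbers follow the description above):

```yaml
# Minimal sketch following the description above; my-app and
# my-app.example.com are hypothetical names for illustration.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway   # selects the istio-ingressgateway pods (envoy)
  servers:
    - port:
        number: 8080        # port the ingressgateway pod binds to (see above)
        name: http2
        protocol: HTTP
      hosts:
        - "my-app.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
  namespace: istio-system
spec:
  hosts:
    - "my-app.example.com"
  gateways:
    - my-gateway
  http:
    - route:
        - destination:
            host: my-app.default.svc.cluster.local   # hypothetical backend service
            port:
              number: 80
```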
Why didn't your initial config work? If you omit the targetport, istio will bind to the port you define (`80`). That requires istio to run as root, because otherwise the ingressgateway is unable to bind to ports below 1024. You can change that by setting `values.gateways.istio-ingressgateway.runAsRoot=true` in the operator, also see the release note you mentioned. In that case the whole traffic flow from above would look exactly the same, except the ingressgateway pod would bind to `80,443` instead of `8080,8443` and the DNAT would go to `<pod-ip>:(80|443)` instead of `<pod-ip>:(8080|8443)`.
So you basically just misunderstood the release note: if you don't run the istio-ingressgateway pod as root, you have to define the targetPort or alternatively omit the whole `k8s.service` overlay (in that case istio will choose safe ports itself).
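For completeness, such an overlay with explicit targetPorts might look roughly like the following sketch (assuming an install via `istioctl`/`IstioOperator`; the port values are the defaults discussed above):

```yaml
# Sketch of an IstioOperator service overlay with explicit targetPorts,
# so the gateway pod can bind to unprivileged ports without running as root.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: http2
                port: 80
                targetPort: 8080
              - name: https
                port: 443
                targetPort: 8443
```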
Note that I grep'ed for `KUBE-SVC`, `KUBE-SEP` and `DNAT`. There will always be a bunch of `KUBE-MARK-MASQ` and `KUBE-MARK-DROP` chains that don't really matter for now. If you want to learn more about the topic, there are some great articles about this out there, like this one.