
Question

I am trying to learn Istio and I am setting up my Istio Ingress-Gateway. When I set that up, there are the following port options (as indicated here):

  • Port
  • NodePort
  • TargetPort

NodePort makes sense to me. That is the port that the Ingress-Gateway will listen on, on each worker node in the Kubernetes cluster. Requests that hit it are routed into the Kubernetes cluster using the Ingress Gateway CRDs.

In the examples, Port is usually set to the common port for its matching traffic (80 for HTTP, 443 for HTTPS, etc.). I don't understand what Istio needs this port for, as I don't see any traffic using anything but the NodePort.

TargetPort is a mystery to me. I have seen some documentation on it for normal Istio Gateways (that says it is only applicable when using ServiceEntries), but nothing that makes sense for an Ingress-Gateway.

My question is this: in relation to an Ingress-Gateway (not a normal Gateway), what is a TargetPort?

More Details

In the end, I am trying to debug why my ingress traffic is getting a "connection refused" response.

I set up my Istio Operator following this tutorial, with this configuration:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  components:    
    ingressGateways:
    - enabled: true
      k8s:
        service:
          ports:
          - name: http2
            port: 80
            nodePort: 30980            
        hpaSpec:
          minReplicas: 2
      name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
  profile: default

I omitted the TargetPort from my config because I found these release notes, which say that Istio will pick safe defaults.

With that I tried to follow the steps found in this tutorial.

I tried the curl command indicated in that tutorial:

curl -s -I -H Host:httpbin.example.com "http://10.20.30.40:30980/status/200"

I got the response: `Failed to connect to 10.20.30.40 port 30980: Connection refused`

But I can ping 10.20.30.40 fine, and the command to get the NodePort returns 30980.

So I got to thinking that maybe this is an issue with the TargetPort setting that I don't understand.

A check of the istiod logs hinted that I may be on the right track. I ran:

kubectl logs -n istio-system -l app=istiod

and among the logs I found:

warn    buildGatewayListeners: skipping privileged gateway port 80 for node istio-ingressgateway-dc748bc9-q44j7.istio-system as it is an unprivileged pod
warn    gateway has zero listeners for node istio-ingressgateway-dc748bc9-q44j7.istio-system

So, if you got this far, then WOW! I thank you for reading it all. If you have any suggestions on what I need to set TargetPort to, or if I am missing something else, I would love to hear it.

Vaccano
  • I tried to simplify the question, see https://stackoverflow.com/questions/74117580/in-istio-ingress-gateway-how-istio-proxy-figures-out-the-used-service-port – Aladdin Oct 18 '22 at 21:19

3 Answers

2

Port, NodePort and TargetPort are not Istio concepts but Kubernetes ones, more specifically concepts of Kubernetes Services, which is why there is no detailed description of them in the Istio Operator API.

The Istio Operator API exposes the options to configure the (Kubernetes) Service of the Ingress Gateway.

For a description of those concepts, see the documentation for Kubernetes Service.

See also Difference between targetPort and port in Kubernetes Service definition

So the target port is where the containers of the Pod of the Ingress Gateway receive their traffic.

Therefore I think that the configuration of ports and target ports is application specific, and the mapping 80->8080 is more or less arbitrary, i.e. a "decision" of the application.
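To make the relationship between the three fields concrete, here is a minimal sketch of a plain Kubernetes Service (the names and numbers are hypothetical, chosen only to mirror the question's configuration):

```yaml
# Illustrative sketch, not from the question's actual install.
apiVersion: v1
kind: Service
metadata:
  name: example-gateway        # hypothetical name
spec:
  type: NodePort
  selector:
    app: example-gateway       # hypothetical pod label
  ports:
  - name: http2
    port: 80          # the Service's own port: reachable at <ClusterIP>:80 inside the cluster
    nodePort: 30980   # exposed on every node's IP: reachable at <NodeIP>:30980 from outside
    targetPort: 8080  # the container port the traffic is ultimately delivered to
```

Whichever entry point the traffic uses (port or nodePort), it ends up on targetPort of one of the pods selected by the Service.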

Additional details:

The Istio Operator describes the Ingress Gateway, which itself consists of a Kubernetes Service and a Kubernetes Deployment. Usually it is deployed in istio-system. You can inspect the Kubernetes Service of istio-ingressgateway and it will match the specification of that YAML.

So the Istio Ingress Gateway's Service is actually talking to its own containers. However, this is mostly an implementation detail of the Istio Ingress Gateway and is not related to the Service and VirtualService which you define for your apps.

The Ingress Gateway is itself a Service: it receives traffic on the port you define (i.e. 80) and forwards it to 8080 on its containers. It then processes the traffic according to the rules configured by Gateways and VirtualServices and sends it on to the Service of the application.
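That last step can be sketched with the Gateway and VirtualService from the httpbin tutorial the question follows (the hostname, service name, and port 8000 come from that example and are illustrative, not required):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway    # selects the ingress gateway pods installed above
  servers:
  - port:
      number: 80             # the Service port of the ingress gateway
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpbin-gateway          # bind the routing rules to the Gateway above
  http:
  - route:
    - destination:
        host: httpbin        # the Kubernetes Service of the application
        port:
          number: 8000
```

The Gateway tells the ingress gateway's envoy which hosts and ports to accept, and the VirtualService routes matching requests on to the application's own Service.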

user140547
  • That is confusing to me, because the Istio Virtual Service is what connects to the Kubernetes service. Traffic passes from the Istio Ingress Gateway through to a normal Istio Gateway and then on to a Istio Virtual Service before it gets to a container. I thought it was the job of the Virtual Service to connect with the Kubernetes service (including port number in the container via the `destination` section of the yaml). If the Istio Ingress Gateway is not talking to a Kubernetes container, what is the TargetPort used for on the Istio Ingress Gateway? – Vaccano May 25 '21 at 14:57
  • @Vaccano added a section with additional details – user140547 May 25 '21 at 15:45
  • So the NodePort is for traffic to my services/pods, but the TargetPort is for traffic to the Ingress Gateway's containers? Seems really odd to configure both of those in the exact same section of the yaml. (I would think you would not force configuring the internal communication of the ingress controller when configuring the NodePort.) – Vaccano May 25 '21 at 15:52
  • No, the NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). For a LoadBalancer: NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created... So the NodePort is the port by which the Service can be reached from outside the cluster, and the load balancer uses that port for routing. So I guess you don't even need to explicitly configure the NodePort; Port and TargetPort should be enough. – user140547 May 25 '21 at 16:09
0

I still don't really understand what TargetPort is doing, but I got the tutorial working.

I went back and uninstalled Istio (by deleting the operator configuration and then the istio namespaces). I then re-installed it, but took out the part of my configuration that specified the NodePort.

I then ran kubectl get service istio-ingressgateway -o yaml -n istio-system. That showed me what the Istio ingress gateway was using as its default ports. I then went and updated my yaml for the operator to match (except for my desired custom NodePort). That worked.

In the end, the yaml looked like this:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  components:    
    ingressGateways:
    - enabled: true
      k8s:
        service:
          ports:
          - name: status-port
            nodePort: 32562
            port: 15021
            protocol: TCP
            targetPort: 15021
          - name: http2
            nodePort: 30980
            port: 80
            protocol: TCP
            targetPort: 8080
          - name: https
            nodePort: 32013
            port: 443
            protocol: TCP
            targetPort: 8443           
        hpaSpec:
          minReplicas: 2
      name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
  profile: default

I would still like to understand what the TargetPort is doing. So if anyone can answer with that (again, in context of the Istio Ingress Gateway service (not an istio gateway)), then I will accept that answer.

Vaccano
  • https://stackoverflow.com/a/55183861/229247 "Difference between targetPort and port in Kubernetes Service definition" – Paul Wheeler Jun 10 '22 at 03:49
0

Configuring the istio-ingressgateway with a service section will create a Kubernetes Service with the given port configuration, which (as already mentioned in another answer) isn't an Istio concept but a Kubernetes one. So we need to take a look at the underlying Kubernetes mechanisms.

The service that will be created is of type LoadBalancer by default. Your cloud provider will also create an external load balancer that forwards traffic arriving on a certain port to the cluster.

$ kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                  
istio-ingressgateway   LoadBalancer   10.107.158.5   1.2.3.4       15021:32562/TCP,80:30980/TCP,443:32013/TCP

You can see the internal IP of the service as well as the external IP of the external load balancer, and in the PORT(S) column that, for example, your port 80 is mapped to NodePort 30980. Behind the scenes kube-proxy takes your config and sets up a bunch of iptables chains to route traffic to the ingress gateway pods.

If you have access to a Kubernetes host you can investigate those using the iptables command. Start with the KUBE-SERVICES chain:

$ iptables -t nat -nL KUBE-SERVICES | grep ingressgateway
target                     prot opt source        destination
KUBE-SVC-TFRZ6Y6WOLX5SOWZ  tcp  --  0.0.0.0/0     10.107.158.5    /* istio-system/istio-ingressgateway:status-port cluster IP */ tcp dpt:15021
KUBE-FW-TFRZ6Y6WOLX5SOWZ   tcp  --  0.0.0.0/0     1.2.3.4         /* istio-system/istio-ingressgateway:status-port loadbalancer IP */ tcp dpt:15021
KUBE-SVC-G6D3V5KS3PXPUEDS  tcp  --  0.0.0.0/0     10.107.158.5    /* istio-system/istio-ingressgateway:http2 cluster IP */ tcp dpt:80
KUBE-FW-G6D3V5KS3PXPUEDS   tcp  --  0.0.0.0/0     1.2.3.4         /* istio-system/istio-ingressgateway:http2 loadbalancer IP */ tcp dpt:80
KUBE-SVC-7N6LHPYFOVFT454K  tcp  --  0.0.0.0/0     10.107.158.5    /* istio-system/istio-ingressgateway:https cluster IP */ tcp dpt:443
KUBE-FW-7N6LHPYFOVFT454K   tcp  --  0.0.0.0/0     1.2.3.4         /* istio-system/istio-ingressgateway:https loadbalancer IP */ tcp dpt:443

You'll see that there are basically six chains, two for each port you defined: 80, 443 and 15021 (on the far right).

The KUBE-SVC-* chains are for cluster-internal traffic, the KUBE-FW-* chains for cluster-external traffic. If you take a closer look you can see that the destination is the (external|internal) IP and one of the ports. So traffic arriving on the node's network interface for, say, destination 1.2.3.4:80 enters one of these chains. You can now follow down that chain, in my case KUBE-FW-G6D3V5KS3PXPUEDS:

$ iptables -t nat -nL KUBE-FW-G6D3V5KS3PXPUEDS | grep KUBE-SVC
target                     prot opt source       destination
KUBE-SVC-LBUWNFSUU3FNPZ7L  all  --  0.0.0.0/0    0.0.0.0/0    /* istio-system/istio-ingressgateway:http2 loadbalancer IP */

Follow that one as well:

$ iptables -t nat -nL KUBE-SVC-LBUWNFSUU3FNPZ7L | grep KUBE-SEP
target                     prot opt source       destination
KUBE-SEP-RZL3ZLWSG2M7ZJYD  all  --  0.0.0.0/0    0.0.0.0/0     /* istio-system/istio-ingressgateway:http2 */ statistic mode random probability 0.50000000000
KUBE-SEP-F7W3YTTYPP5NEPJ7  all  --  0.0.0.0/0    0.0.0.0/0     /* istio-system/istio-ingressgateway:http2 */

where you see the service endpoint chains (one per ingress gateway pod), randomly load balanced 50:50, and finally (choosing one of them):

$ iptables -t nat -nL KUBE-SEP-RZL3ZLWSG2M7ZJYD | grep DNAT
target     prot opt source       destination
DNAT       tcp  --  0.0.0.0/0    0.0.0.0/0    /* istio-system/istio-ingressgateway:http2 */ tcp to:172.17.0.4:8080

where the traffic ends up being DNATed to 172.17.0.4:8080, which is the IP of one of the istio-ingressgateway pods, on port 8080.

If you don't have access to a host or don't run in a public cloud environment, you won't have an external load balancer, so you won't find any KUBE-FW-* chains (and the EXTERNAL-IP of the service will stay <pending>). In that case you would use <nodeip>:<nodeport> to access the cluster from outside, for which iptables chains are also created. Run iptables -t nat -nL KUBE-NODEPORTS | grep istio-ingressgateway, which will also show you three KUBE-SVC-* chains that you can follow down to the DNAT as shown above.

So the targetPort (like 8080) is used to configure networking in Kubernetes, and Istio also uses it to define which ports the ingress gateway pods bind to. You can kubectl describe pod <istio-ingressgateway-pod>, where you'll find the defined ports (8080 and 8443) as container ports. Change them to anything (above 1024) and they will change accordingly. Next you would apply a Gateway with spec.servers where you define those ports 8080,8443 to configure envoy (= istio-ingressgateway, the one you select with spec.selector) to listen on those ports, and a VirtualService to define how to handle the received requests. See also my other answer on that topic.

Why didn't your initial config work? If you omit the targetPort, istio will bind to the port you define (80). That requires istio to run as root; otherwise the ingress gateway is unable to bind to ports below 1024. You can change that by setting values.gateways.istio-ingressgateway.runAsRoot=true in the operator; see also the release note you mentioned. In that case the whole traffic flow from above would look exactly the same, except that the ingress gateway pod would bind to 80,443 instead of 8080,8443 and the DNAT target would be <pod-ip>:(80|443) instead of <pod-ip>:(8080|8443).
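A sketch of such an operator overlay (the exact values path may differ between Istio versions, so treat this as an assumption to verify against your release notes rather than a definitive setting):

```yaml
# Illustrative overlay: let the ingress gateway run as root so it can
# bind directly to privileged ports (below 1024) such as 80 and 443.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  values:
    gateways:
      istio-ingressgateway:
        runAsRoot: true
```

With this in place you could keep port: 80 without a targetPort, at the cost of running the gateway container with elevated privileges; the non-root setup with targetPort: 8080 is the safer default.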

So you basically just misunderstood the release note: if you don't run the istio-ingressgateway pod as root, you have to define the targetPort, or alternatively omit the whole k8s.service overlay (in which case istio will choose safe ports itself).

Note that I grepped for KUBE-SVC, KUBE-SEP and DNAT. There will always be a bunch of KUBE-MARK-MASQ and KUBE-MARK-DROP chains that don't really matter for now. If you want to learn more about the topic, there are some great articles out there, like this one.

Chris