
I have the following manifests for deploying an Istio egress gateway route:

---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: REDACTED-egress-se
spec:
  hosts:
  - sahfpxa.REDACTED
  ports:
  - number: 8080
    name: http-port
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sahfpxa-REDACTED-egress-gw
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - sahfpxa.REDACTED
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-sahfpxa-REDACTED
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: sahfpxa
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-sahfpxa-REDACTED-through-egress-gateway
spec:
  hosts:
  - sahfpxa.REDACTED
  gateways:
  - REDACTED/REDACTED-egress-gw
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 8080
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: sahfpxa
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - REDACTED/sahfpxa-REDACTED-egress-gw
      port: 8080
    route:
    - destination:
        host: sahfpxa.REDACTED
        port:
          number: 8080
      weight: 100

But I get a connection refused from the istio-proxy sidecar container of the Pod in the affected namespace, and an HTTP 503 error from the workload container in that namespace.

Any ideas what could be wrong with the configuration or how I can debug it?

Thanks in advance.

Best regards, rforberger

Ronny Forberger
  • Hi, 1. Can You be more specific about how You get the connection refused errors? What specific command did You use, and from where? 2. Is the injected deployment pod accessible from within the cluster/namespace? 3. What Istio version do You have? – Piotr Malec Dec 04 '19 at 13:57
  • Hi @PiotrMalec 1. I get the connection refused from the envoy sidecar of my workload container, from which I want to reach the external service sahfpxa.REDACTED through the egressgateway. 2. You mean if I can reach the injected deployment pod from another pod? 3. Istio 1.4.0 (just upgraded, but the issue persists) – Ronny Forberger Dec 04 '19 at 15:01
  • Sorry, I got a little bit confused about the source and destination services for this issue. So checking if the service is accessible within the cluster makes no sense. Instead, check if the external service can be reached from a cluster node. Have You tried using `curl` with the `--verbose` option? It's the `HTTP` protocol, so there could be some useful information. – Piotr Malec Dec 04 '19 at 16:52
  • Hi @PiotrMalec The external service can be reached from the cluster node, also from the egressgateway pod. Curl --verbose shows the following: `* Trying 10.224.19.37:8080... * TCP_NODELAY set * Connected to sahfpxa.REDACTED (10.224.19.37) port 8080 (#0) > POST /REDACTED HTTP/1.1 > Host: sahfpxa.REDACTED:8080 > User-Agent: curl/7.66.0 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 503 Service Unavailable < content-length: 91 < content-type: text/plain < date: Wed, 04 Dec 2019 17:35:29 GMT < server: envoy < ` – Ronny Forberger Dec 04 '19 at 17:37
  • Try to add `location: MESH_EXTERNAL` under `spec` for `ServiceEntry` named `REDACTED-egress-se`. – Piotr Malec Dec 06 '19 at 14:10
  • Also, if You have `mTLS` enabled, the connection to the external resource will need a destination rule if it's not part of the Istio mesh. – Piotr Malec Dec 06 '19 at 14:34
  • What do You get from `kubectl get configmap istio -n istio-system -o yaml | grep -m1 mode:`? – Piotr Malec Dec 06 '19 at 15:29
  • Hi @PiotrMalec thanks for your answer. Adding `location: MESH_EXTERNAL` under `spec` we already tried, with no success. Our mTLS is permissive, so do we need the destination rules then as well? We have a DR though. The command `kubectl get configmap istio -n istio-system -o yaml | grep -m1 mode:` returns `mode: ALLOW_ANY` – Ronny Forberger Dec 09 '19 at 09:58
  • I have tried deploying Your manifest and replaced the redacted parts with other domain names, however I could not get it to work. Can You verify Your manifest by using `istioctl x analyze -v manifest.yaml`? As for mTLS permissive mode, it should be fine without a destination rule in most cases. – Piotr Malec Dec 09 '19 at 16:09
  • @PiotrMalec I executed your command `istioctl x analyze -v manifest.yaml` and I'm getting `✔ No validation issues found.`. – Ronny Forberger Dec 09 '19 at 16:14
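For reference, the ServiceEntry variant with `location: MESH_EXTERNAL` suggested in the comments above would look like the sketch below (host and resource names taken from the question; as noted in the thread, this alone did not resolve the 503 in this case):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: REDACTED-egress-se
spec:
  hosts:
  - sahfpxa.REDACTED
  # MESH_EXTERNAL marks the destination as being outside the service mesh,
  # which changes how sidecars treat traffic to it.
  location: MESH_EXTERNAL
  ports:
  - number: 8080
    name: http-port
    protocol: HTTP
  resolution: DNS
```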

1 Answer


There were a few errors in Your deployment manifest, e.g. the DestinationRule was not pointing at Your ServiceEntry.

You can try to match Yours with these manifest files:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: etth
spec:
  hosts:
  - etth.pl
  ports:
  - number: 8080
    name: http-port
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - etth.pl
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-cnn
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: etth
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-cnn-through-egress-gateway
spec:
  hosts:
  - etth.pl
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: etth
        port:
          number: 80
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: etth.pl
        port:
          number: 8080
      weight: 100

You can check if the routes are present with: `istioctl pc routes $(kubectl get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}' -n istio-system).istio-system -o json`

$ istioctl pc routes $(kubectl get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}' -n istio-system).istio-system -o json
[
    {
        "name": "http.80",
        "virtualHosts": [
            {
                "name": "etth.pl:80",
                "domains": [
                    "etth.pl",
                    "etth.pl:80"
                ],
                "routes": [
                    {
                        "match": {
                            "prefix": "/",
                            "caseSensitive": true
                        },
                        "route": {
                            "cluster": "outbound|8080||etth.pl",
                            "timeout": "0s",
                            "retryPolicy": {
                                "retryOn": "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted,retriable-status-codes",
                                "numRetries": 2,
                                "retryHostPredicate": [
                                    {
                                        "name": "envoy.retry_host_predicates.previous_hosts"
                                    }
                                ],
                                "hostSelectionRetryMaxAttempts": "5",
                                "retriableStatusCodes": [
                                    503
                                ]
                            },
                            "maxGrpcTimeout": "0s"
                        },
                        "metadata": {
                            "filterMetadata": {
                                "istio": {
                                    "config": "/apis/networking/v1alpha3/namespaces/default/virtual-service/direct-cnn-through-egress-gateway"
                                }
                            }
                        },
                        "decorator": {
                            "operation": "etth.pl:8080/*"
                        },
                        "typedPerFilterConfig": {
                            "mixer": {
                                "@type": "type.googleapis.com/istio.mixer.v1.config.client.ServiceConfig",
                                "disableCheckCalls": true,
                                "mixerAttributes": {
                                    "attributes": {
                                        "destination.service.host": {
                                            "stringValue": "etth.pl"
                                        },
                                        "destination.service.name": {
                                            "stringValue": "etth.pl"
                                        },
                                        "destination.service.namespace": {
                                            "stringValue": "default"
                                        }
                                    }
                                },
                                "forwardAttributes": {
                                    "attributes": {
                                        "destination.service.host": {
                                            "stringValue": "etth.pl"
                                        },
                                        "destination.service.name": {
                                            "stringValue": "etth.pl"
                                        },
                                        "destination.service.namespace": {
                                            "stringValue": "default"
                                        }
                                    }
                                }
                            }
                        }
                    }
                ]
            }
        ],
        "validateClusters": false
    },
    {
        "virtualHosts": [
            {
                "name": "backend",
                "domains": [
                    "*"
                ],
                "routes": [
                    {
                        "match": {
                            "prefix": "/stats/prometheus"
                        },
                        "route": {
                            "cluster": "prometheus_stats"
                        }
                    }
                ]
            }
        ]
    }
]
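Regarding the mTLS remark in the comments above: if mesh-wide mTLS is enforced, the sidecar-to-egress-gateway hop typically also needs TLS settings on the gateway subset. A hedged sketch following the usual Istio egress gateway pattern (subset and host names match the manifests above; this is not part of the verified fix):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-cnn
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: etth
    trafficPolicy:
      portLevelSettings:
      - port:
          number: 80
        tls:
          # Originate Istio mutual TLS towards the egress gateway on port 80;
          # the SNI should match the external host the gateway serves.
          mode: ISTIO_MUTUAL
          sni: etth.pl
```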
Piotr Malec
  • Hi @piotrmalec I tried out your suggested example exactly like you posted it with the fixed REDACTED parts, but still getting a HTTP 503 error. I see the routes using `istioctl pc routes $(kubectl get pods -l istio=egressgateway -o jsonpath='{.items[0].metadata.name}' -n istio-system).istio-system -o json` . Example: ` ... "route": { "cluster": "outbound|8080||sahfpxa.REDACTED", "timeout": "0s", ... ` – Ronny Forberger Dec 10 '19 at 13:38
  • If you remove all the objects created with this manifest, do you also get the 503 error? In my Istio cluster, if I don't have any service entries defined and the cluster policy is `mode: ALLOW_ANY`, I can reach all external services on any port. For example: from an app pod that is injected with envoy I can do `curl -v http://10.240.0.11:1337/`, which is a VM next to my cluster in the same VPC network, hosting helloworld on port 1337. If not, there might be something blocking Your connectivity from the cluster. – Piotr Malec Dec 10 '19 at 16:09
  • Well, if I remove all the manifests, I still get an HTTP 503 error. Though when I do the curl command directly from our Kubernetes masters, I get an HTTP 200 code back from the external service. So connectivity should actually not be the blocker... – Ronny Forberger Dec 10 '19 at 16:26
  • Seems like firewall issue. Are you using `firewalld`? – Piotr Malec Dec 11 '19 at 11:55
  • We are not using firewalld, it's disabled on the nodes. But there are some iptables rules deployed, those ones kubernetes/weave network layer deployed automatically. – Ronny Forberger Dec 11 '19 at 12:38
  • Is the istio-egressgateway pod running? Verify with `kubectl get pods -n istio-system`. Mine looks like this: `istio-egressgateway-7cb7fdff55-z5c5q 1/1 Running 0 3m30s` – Piotr Malec Dec 11 '19 at 13:28
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/204057/discussion-between-piotr-malec-and-ronny-forberger). – Piotr Malec Dec 11 '19 at 13:31
  • In case I'm using an ingress gateway with port 31400, how should I create the egress gateway and the ServiceEntry? – Tiago Medici Jul 27 '20 at 15:57