
Here are the steps to reproduce the error:

1). Install an AWS EKS cluster (1.11)

2). Install Cilium v1.4.1 following this guide

$ kubectl -n kube-system set env ds aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true

$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.4/examples/kubernetes/1.11/cilium.yaml
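
Before moving on to Istio, it is worth confirming the Cilium DaemonSet is fully rolled out (a minimal check, assuming the stock manifest's k8s-app=cilium label):

$ kubectl -n kube-system rollout status daemonset/cilium
$ kubectl -n kube-system get pods -l k8s-app=cilium -o wide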

3). Install Istio 1.0.6

$ kubectl apply -f install/kubernetes/helm/helm-service-account.yaml

$ helm init --service-account tiller

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system
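
Before step 4, a quick sanity check that the injector is up and its webhook is registered (a sketch; the webhook object name assumes the default Istio 1.0.x chart):

$ kubectl -n istio-system get pods
$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o jsonpath='{.webhooks[0].clientConfig.service}'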

4). Try sample nginx

$ kubectl create ns nginx

$ kubectl label namespace nginx istio-injection=enabled

$ kubectl create deployment --image nginx nginx -n nginx

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer -n nginx

Then I run into the problem:

$ kubectl get deploy -n nginx
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         0         0            0           27m

$ kubectl get deploy -n nginx -oyaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
      traffic.sidecar.istio.io/includeOutboundIPRanges: 172.20.0.0/16
    creationTimestamp: "2019-03-08T13:13:58Z"
    generation: 3
    labels:
      app: nginx
    name: nginx
    namespace: nginx
    resourceVersion: "36034"
    selfLink: /apis/extensions/v1beta1/namespaces/nginx/deployments/nginx
    uid: 0888b279-41a4-11e9-8f26-1274e185a192
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: nginx
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: nginx
      spec:
        containers:
        - image: nginx
          imagePullPolicy: Always
          name: nginx
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    conditions:
    - lastTransitionTime: "2019-03-08T13:13:58Z"
      lastUpdateTime: "2019-03-08T13:13:58Z"
      message: Deployment does not have minimum availability.
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    - lastTransitionTime: "2019-03-08T13:13:58Z"
      lastUpdateTime: "2019-03-08T13:13:58Z"
      message: 'Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io":
        Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s:
        Address is not allowed'
      reason: FailedCreate
      status: "True"
      type: ReplicaFailure
    - lastTransitionTime: "2019-03-08T13:23:59Z"
      lastUpdateTime: "2019-03-08T13:23:59Z"
      message: ReplicaSet "nginx-78f5d695bd" has timed out progressing.
      reason: ProgressDeadlineExceeded
      status: "False"
      type: Progressing
    observedGeneration: 3
    unavailableReplicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
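
The same failure also surfaces as FailedCreate events on the ReplicaSet, which is a quicker place to spot it (nothing beyond what the conditions above already show):

$ kubectl -n nginx describe rs -l app=nginx
$ kubectl -n nginx get events --sort-by=.lastTimestamp | grep -i webhook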

Investigation A: updated the includeOutboundIPRanges annotation as follows; it did not help.

$ kubectl edit deploy -n nginx
  annotations:
    traffic.sidecar.istio.io/includeOutboundIPRanges: 172.20.0.0/20
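
(For what it's worth, the injector reads the traffic.sidecar.istio.io annotations from the pod template, so if this route is retried the annotation presumably belongs under spec.template.metadata.annotations rather than the Deployment's own metadata; a sketch of that variant:)

$ kubectl -n nginx patch deployment nginx --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"traffic.sidecar.istio.io/includeOutboundIPRanges":"172.20.0.0/20"}}}}}'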

Investigation B: removed Cilium, re-installed Istio, then re-installed nginx. Sidecar injection worked and the nginx pod ran fine.

Investigation C: as a comparison, I swapped the order of install steps 2) and 3). Sidecar injection then worked and the nginx welcome page was reachable. But the "Address is not allowed" error came back after manually terminating the EC2 worker instances (the ASG re-created all of them).
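
One way to tell which CNI handed out the injector's address after the nodes were replaced is to check the webhook pod's IP (the istio=sidecar-injector label is the same one used for the logs further below):

$ kubectl -n istio-system get pod -l istio=sidecar-injector -o wide

If that IP comes from Cilium's 10.54.0.0/16 pool (see the cilium status output below) rather than a VPC-routable address like the 10.250.x.x node range, the EKS control plane presumably has no route to it when it calls the webhook.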

FYI, Cilium and Istio status:

$ kubectl -n kube-system exec -ti cilium-4wzgd -- cilium-health status
Probe time:   2019-03-08T16:35:57Z
Nodes:
  ip-10-250-206-54.ec2.internal (localhost):
    Host connectivity to 10.250.206.54:
      ICMP to stack:   OK, RTT=440.788µs
      HTTP to agent:   OK, RTT=665.779µs
  ip-10-250-198-72.ec2.internal:
    Host connectivity to 10.250.198.72:
      ICMP to stack:   OK, RTT=799.994µs
      HTTP to agent:   OK, RTT=1.594971ms
  ip-10-250-199-154.ec2.internal:
    Host connectivity to 10.250.199.154:
      ICMP to stack:   OK, RTT=770.777µs
      HTTP to agent:   OK, RTT=1.692356ms
  ip-10-250-205-177.ec2.internal:
    Host connectivity to 10.250.205.177:
      ICMP to stack:   OK, RTT=460.927µs
      HTTP to agent:   OK, RTT=1.383852ms
  ip-10-250-213-68.ec2.internal:
    Host connectivity to 10.250.213.68:
      ICMP to stack:   OK, RTT=766.769µs
      HTTP to agent:   OK, RTT=1.401989ms
  ip-10-250-214-179.ec2.internal:
    Host connectivity to 10.250.214.179:
      ICMP to stack:   OK, RTT=781.72µs
      HTTP to agent:   OK, RTT=2.614356ms

$ kubectl -n kube-system exec -ti cilium-4wzgd -- cilium status
KVStore:                Ok   etcd: 1/1 connected: https://cilium-etcd-client.kube-system.svc:2379 - 3.3.11 (Leader)
ContainerRuntime:       Ok   docker daemon: OK
Kubernetes:             Ok   1.11+ (v1.11.5-eks-6bad6d) [linux/amd64]
Kubernetes APIs:        ["CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
Cilium:                 Ok   OK
NodeMonitor:            Disabled
Cilium health daemon:   Ok   
IPv4 address pool:      6/65535 allocated from 10.54.0.0/16
Controller Status:      34/34 healthy
Proxy Status:           OK, ip 10.54.0.1, port-range 10000-20000
Cluster health:   6/6 reachable   (2019-03-08T16:36:57Z)

$ kubectl get namespace -L istio-injection
NAME           STATUS   AGE   ISTIO-INJECTION
default        Active   4h    
istio-system   Active   4m    
kube-public    Active   4h    
kube-system    Active   4h    
nginx          Active   4h    enabled

$ for pod in $(kubectl -n istio-system get pod -l istio=sidecar-injector -o jsonpath='{.items[*].metadata.name}'); do kubectl -n istio-system logs ${pod}; done
2019-03-08T16:35:02.948778Z info    version root@464fc845-2bf8-11e9-b805-0a580a2c0506-docker.io/istio-1.0.6-98598f88f6ee9c1e6b3f03b652d8e0e3cd114fa2-dirty-Modified
2019-03-08T16:35:02.950343Z info    New configuration: sha256sum cf9491065c492014f0cb69c8140a415f0f435a81d2135efbfbab070cf6f16554
2019-03-08T16:35:02.950377Z info    Policy: enabled
2019-03-08T16:35:02.950398Z info    Template: |
  initContainers:
  - name: istio-init
    image: "docker.io/istio/proxy_init:1.0.6"
    args:
    - "-p"
    - [[ .MeshConfig.ProxyListenPort ]]
    - "-u"
    - 1337
    - "-m"
    - [[ annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode ]]
    - "-i"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/includeOutboundIPRanges`  "172.20.0.0/16"  ]]"
    - "-x"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/excludeOutboundIPRanges`  ""  ]]"
    - "-b"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/includeInboundPorts` (includeInboundPorts .Spec.Containers) ]]"
    - "-d"
    - "[[ excludeInboundPort (annotation .ObjectMeta `status.sidecar.istio.io/port`  0 ) (annotation .ObjectMeta `traffic.sidecar.istio.io/excludeInboundPorts`  "" ) ]]"
    imagePullPolicy: IfNotPresent
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    restartPolicy: Always
  containers:
  - name: istio-proxy
    image: [[ annotation .ObjectMeta `sidecar.istio.io/proxyImage`  "docker.io/istio/proxyv2:1.0.6"  ]]

    ports:
    - containerPort: 15090
      protocol: TCP
      name: http-envoy-prom

    args:
    - proxy
    - sidecar
    - --configPath
    - [[ .ProxyConfig.ConfigPath ]]
    - --binaryPath
    - [[ .ProxyConfig.BinaryPath ]]
    - --serviceCluster
    [[ if ne "" (index .ObjectMeta.Labels "app") -]]
    - [[ index .ObjectMeta.Labels "app" ]]
    [[ else -]]
    - "istio-proxy"
    [[ end -]]
    - --drainDuration
    - [[ formatDuration .ProxyConfig.DrainDuration ]]
    - --parentShutdownDuration
    - [[ formatDuration .ProxyConfig.ParentShutdownDuration ]]
    - --discoveryAddress
    - [[ annotation .ObjectMeta `sidecar.istio.io/discoveryAddress` .ProxyConfig.DiscoveryAddress ]]
    - --discoveryRefreshDelay
    - [[ formatDuration .ProxyConfig.DiscoveryRefreshDelay ]]
    - --zipkinAddress
    - [[ .ProxyConfig.ZipkinAddress ]]
    - --connectTimeout
    - [[ formatDuration .ProxyConfig.ConnectTimeout ]]
    - --proxyAdminPort
    - [[ .ProxyConfig.ProxyAdminPort ]]
    [[ if gt .ProxyConfig.Concurrency 0 -]]
    - --concurrency
    - [[ .ProxyConfig.Concurrency ]]
    [[ end -]]
    - --controlPlaneAuthPolicy
    - [[ annotation .ObjectMeta `sidecar.istio.io/controlPlaneAuthPolicy` .ProxyConfig.ControlPlaneAuthPolicy ]]
  [[- if (ne (annotation .ObjectMeta `status.sidecar.istio.io/port`  0 ) "0") ]]
    - --statusPort
    - [[ annotation .ObjectMeta `status.sidecar.istio.io/port`  0  ]]
    - --applicationPorts
    - "[[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/applicationPorts` (applicationPorts .Spec.Containers) ]]"
  [[- end ]]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
    [[ if .ObjectMeta.Annotations ]]
    - name: ISTIO_METAJSON_ANNOTATIONS
      value: |
             [[ toJson .ObjectMeta.Annotations ]]
    [[ end ]]
    [[ if .ObjectMeta.Labels ]]
    - name: ISTIO_METAJSON_LABELS
      value: |
             [[ toJson .ObjectMeta.Labels ]]
    [[ end ]]
    imagePullPolicy: IfNotPresent
    [[ if (ne (annotation .ObjectMeta `status.sidecar.istio.io/port`  0 ) "0") ]]
    readinessProbe:
      httpGet:
        path: /healthz/ready
        port: [[ annotation .ObjectMeta `status.sidecar.istio.io/port`  0  ]]
      initialDelaySeconds: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/initialDelaySeconds`  1  ]]
      periodSeconds: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/periodSeconds`  2  ]]
      failureThreshold: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/failureThreshold`  30  ]]
    [[ end -]]securityContext:

      readOnlyRootFilesystem: true
      [[ if eq (annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode) "TPROXY" -]]
      capabilities:
        add:
        - NET_ADMIN
      runAsGroup: 1337
      [[ else -]]
      runAsUser: 1337
      [[ end -]]
    restartPolicy: Always
    resources:
      [[ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU`) -]]
      requests:
        cpu: "[[ index .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU` ]]"
        memory: "[[ index .ObjectMeta.Annotations `sidecar.istio.io/proxyMemory` ]]"
    [[ else -]]
      requests:
        cpu: 10m

    [[ end -]]
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  volumes:
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      optional: true
      [[ if eq .Spec.ServiceAccountName "" -]]
      secretName: istio.default
      [[ else -]]
      secretName: [[ printf "istio.%s" .Spec.ServiceAccountName ]]
      [[ end -]]

$ kubectl get svc 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   5h

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}

$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-*1.ec2.internal    Ready    <none>   5h    v1.11.5
ip-*2.ec2.internal   Ready    <none>   5h    v1.11.5
ip-*3.ec2.internal   Ready    <none>   5h    v1.11.5
ip-*4.ec2.internal    Ready    <none>   5h    v1.11.5
ip-*5.ec2.internal    Ready    <none>   5h    v1.11.5
ip-*6.ec2.internal   Ready    <none>   5h    v1.11.5

$ kubectl get pods --all-namespaces
NAMESPACE      NAME                                      READY   STATUS    RESTARTS   AGE
istio-system   istio-citadel-796c94878b-jt5tb            1/1     Running   0          13m
istio-system   istio-egressgateway-864444d6ff-vwptk      1/1     Running   0          13m
istio-system   istio-galley-6c68c5dbcf-fmtvp             1/1     Running   0          13m
istio-system   istio-ingressgateway-694576c7bb-kmk8k     1/1     Running   0          13m
istio-system   istio-pilot-79f5f46dd5-kbr45              2/2     Running   0          13m
istio-system   istio-policy-5bd5578b94-qzzhd             2/2     Running   0          13m
istio-system   istio-sidecar-injector-6d8f88c98f-slr6x   1/1     Running   0          13m
istio-system   istio-telemetry-5598f86cd8-z7kr5          2/2     Running   0          13m
istio-system   prometheus-76db5fddd5-hw9pb               1/1     Running   0          13m
kube-system    aws-node-5wv4g                            1/1     Running   0          4h
kube-system    aws-node-gsf7l                            1/1     Running   0          4h
kube-system    aws-node-ksddt                            1/1     Running   0          4h
kube-system    aws-node-lszrr                            1/1     Running   0          4h
kube-system    aws-node-r4gcg                            1/1     Running   0          4h
kube-system    aws-node-wtcvj                            1/1     Running   0          4h
kube-system    cilium-4wzgd                              1/1     Running   0          4h
kube-system    cilium-56sq5                              1/1     Running   0          4h
kube-system    cilium-etcd-4vndb7tl6w                    1/1     Running   0          4h
kube-system    cilium-etcd-operator-6d9975f5df-zcb5r     1/1     Running   0          4h
kube-system    cilium-etcd-r9h4txhgld                    1/1     Running   0          4h
kube-system    cilium-etcd-t2fldlwxzh                    1/1     Running   0          4h
kube-system    cilium-fkx8d                              1/1     Running   0          4h
kube-system    cilium-glc8l                              1/1     Running   0          4h
kube-system    cilium-gvm5f                              1/1     Running   0          4h
kube-system    cilium-jscn8                              1/1     Running   0          4h
kube-system    cilium-operator-7df75f5cc8-tnv54          1/1     Running   0          4h
kube-system    coredns-7bcbfc4774-fr59z                  1/1     Running   0          5h
kube-system    coredns-7bcbfc4774-xxwbg                  1/1     Running   0          5h
kube-system    etcd-operator-7b9768bc99-8fxf2            1/1     Running   0          4h
kube-system    kube-proxy-bprmp                          1/1     Running   0          5h
kube-system    kube-proxy-ccb2q                          1/1     Running   0          5h
kube-system    kube-proxy-dv2mn                          1/1     Running   0          5h
kube-system    kube-proxy-qds2r                          1/1     Running   0          5h
kube-system    kube-proxy-rf466                          1/1     Running   0          5h
kube-system    kube-proxy-rz2ck                          1/1     Running   0          5h
kube-system    tiller-deploy-57c574bfb8-cd6rn            1/1     Running   0          4h
  • Just tried cilium:v1.4.2, same error – brant4test Mar 10 '19 at 15:42
  • Met another "failed calling admission webhook" issue in the same EKS cluster: running $ helm install charts/gateway -n gateway --namespace istio-system -f config/test1a.yaml got Error: release gateway failed: Internal error occurred: failed calling admission webhook "pilot.validation.istio.io": Post https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s: Address is not allowed – brant4test Mar 12 '19 at 13:16
  • A related issue on the cilium-etcd-operator repo (EKS 1.11 eks.2 + Istio 1.1.1 + Cilium 1.4.2, automatic sidecar injection failed: Address is not allowed, failed calling admission webhook "sidecar-injector.istio.io"): https://github.com/cilium/cilium-etcd-operator/issues/65 – brant4test Apr 01 '19 at 15:01

2 Answers


I ran into the same issue with Calico as the CNI on EKS, so this is surely related. After installing Istio I get this error:

Internal error occurred: failed calling admission webhook \"mixer.validation.istio.io\": Post https://istio-galley.istio-system.svc:443/admitmixer?timeout=30s: Address is not allowed

My theory: this happens because the Calico CNI is present only on my worker nodes (the pod CIDR is 192.168.../16) while the control plane still runs the AWS CNI, since I don't have control over that with EKS.

Meaning that the webhook call (made from the control plane) isn't allowed to reach my istio-galley.istio-system.svc service, which is backed by an IP outside of the VPC.
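
A rough way to check this (a sketch; adjust names to your install) is to look at which address actually backs the webhook service and whether it falls inside your VPC CIDR:

$ kubectl -n istio-system get endpoints istio-galley
$ kubectl -n istio-system get pods -o wide | grep galley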


I ran into the same issue after trying to migrate from the AWS VPC CNI to Cilium.

I added hostNetwork: true under spec.template.spec in my istiod Deployment and it solved my issue.
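
For reference, a patch along these lines should do it (a sketch; the ClusterFirstWithHostNet dnsPolicy is my own addition so the pod can still resolve cluster services while on the host network):

$ kubectl -n istio-system patch deployment istiod --type merge -p \
  '{"spec":{"template":{"spec":{"hostNetwork":true,"dnsPolicy":"ClusterFirstWithHostNet"}}}}'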
