
Our Kubernetes cluster had been working for more than a year, but it recently started misbehaving: when we deploy an app with kubectl apply -f deployment-manifest.yaml, no pods show up in kubectl get pods, yet the deployment appears in kubectl get deployments with a 0/3 ready state. Output of kubectl describe deployment app-deployment:

Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
  Progressing      False   ProgressDeadlineExceeded
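Since the ReplicaFailure condition reports FailedCreate, the underlying ReplicaSet's events usually show the exact creation error. A way to check (the label selector and ReplicaSet name below are illustrative, not from our manifest):

```shell
# List the ReplicaSets created for the deployment
kubectl get replicasets -l app=app-deployment

# The Events section at the bottom shows why pod creation failed
kubectl describe replicaset app-deployment-6647b7cbdb
```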

When I check the kube-apiserver logs, I see:

I1115 12:55:56.110277       1 trace.go:116] Trace[16922026]: "Call validating webhook" configuration:istiod-istio-system,webhook:validation.istio.io,resource:networking.istio.io/v1alpha3, Resource=gateways,subresource:,operation:CREATE,UID:00c425da-6475-4ed3-bc25-5a81d866baf2 (started: 2021-11-15 12:55:26.109897413 +0000 UTC m=+8229.935658158) (total time: 30.00030708s):
Trace[16922026]: [30.00030708s] [30.00030708s] END
W1115 12:55:56.110327       1 dispatcher.go:128] Failed calling webhook, failing open validation.istio.io: failed calling webhook "validation.istio.io": Post https://istiod.istio-system.svc:443/validate?timeout=30s: dial tcp 10.233.30.109:443: i/o timeout
E1115 12:55:56.110363       1 dispatcher.go:129] failed calling webhook "validation.istio.io": Post https://istiod.istio-system.svc:443/validate?timeout=30s: dial tcp 10.233.30.109:443: i/o timeout
I1115 12:55:56.121271       1 trace.go:116] Trace[576910507]: "Create" url:/apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways,user-agent:pilot-discovery/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.1.16 (started: 2021-11-15 12:55:26.108861126 +0000 UTC m=+8229.934621868) (total time: 30.012357263s):
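The i/o timeout when the apiserver posts to https://istiod.istio-system.svc:443 suggests the istiod service is unreachable at its cluster IP. A quick sanity check (assuming the default Istio service names) would be:

```shell
# Verify the istiod Service exists and its ClusterIP matches the one in the logs
kubectl -n istio-system get svc istiod

# Verify the Service actually has endpoints backed by a running istiod pod
kubectl -n istio-system get endpoints istiod
```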

The kube-controller-manager logs show:

I1116 07:55:06.218995       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"ops-executor-app-6647b7cbdb", UID:"0ef5fefd-88d7-480f-8a5d-f7e2c8025ae9", APIVersion:"apps/v1", ResourceVersion:"122334057", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
E1116 07:56:12.342407       1 replica_set.go:535] sync "default/app-6769f4cb97" failed with Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Output of kubectl get pods -n istio-system:

NAME                                    READY   STATUS    RESTARTS   AGE
istio-egressgateway-794d6f956b-8p5vz    0/1     Running   5          401d
istio-ingressgateway-784f857457-2fz4v   0/1     Running   5          401d
istiod-67c86464b4-vjp4j                 1/1     Running   5          401d
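The gateways showing 0/1 READY suggests their readiness probes are failing, which would be consistent with the certificate/SDS errors in their logs below. That can be confirmed from the pod events (pod name taken from the listing above):

```shell
# The Events section should show repeated readiness probe failures
kubectl -n istio-system describe pod istio-ingressgateway-784f857457-2fz4v
```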

The egress and ingress gateway logs contain:

2021-11-15T16:55:31.419880Z error   citadelclient   Failed to create certificate: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istiod.istio-system.svc on 169.254.25.10:53: no such host"
2021-11-15T16:55:31.419912Z error   cache   resource:default request:37d26b55-df29-465f-9069-9b9a1904e8ab CSR retrial timed out: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istiod.istio-system.svc on 169.254.25.10:53: no such host"
2021-11-15T16:55:31.419956Z error   cache   resource:default failed to generate secret for proxy: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istiod.istio-system.svc on 169.254.25.10:53: no such host"
2021-11-15T16:55:31.419981Z error   sds resource:default Close connection. Failed to get secret for proxy "router~10.233.70.87~istio-egressgateway-794d6f956b-8p5vz.istio-system~istio-system.svc.cluster.local" from secret cache: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp: lookup istiod.istio-system.svc on 169.254.25.10:53: no such host"
2021-11-15T16:55:31.420070Z info    sds resource:default connection is terminated: rpc error: code = Canceled desc = context canceled
2021-11-15T16:55:31.420336Z warning envoy config    StreamSecrets gRPC config stream closed: 14, connection error: desc = "transport: Error while dialing dial tcp: lookup istiod.istio-system.svc on 169.254.25.10:53: no such host"
2021-11-15T16:55:48.020242Z warning envoy config    StreamAggregatedResources gRPC config stream closed: 14, no healthy upstream
2021-11-15T16:55:48.020479Z warning envoy config    Unable to establish new stream
2021-11-15T16:55:51.025327Z info    sds resource:default new connection
2021-11-15T16:55:51.025597Z info    sds Skipping waiting for gateway secret

I tried to get details as described here, but it shows no resources.
Deploying the application in a non-Istio-injected namespace works without any issue.
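Since only injected namespaces are affected, the injection webhook configuration is worth inspecting. Assuming the default Istio 1.7 object names, something like:

```shell
# Show which namespaces have sidecar injection enabled
kubectl get namespaces -L istio-injection

# Inspect the injection webhook; with failurePolicy: Fail, pod creation is
# blocked whenever istiod is unreachable
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml
```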

We have a bare-metal cluster running Ubuntu 18.04 LTS.

istioctl version reports:

client version: 1.7.0
control plane version: 1.7.0
data plane version: none

Kubernetes version: v1.18.8

As described here, I ran kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4 and got:

I1116 17:05:32.703339   28777 helpers.go:216] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server rejected our request for an unknown reason",
  "reason": "BadRequest",
  "details": {
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "no body found"
      }
    ]
  },
  "code": 400
}]
F1116 17:05:32.703515   28777 helpers.go:115] Error from server (BadRequest): the server rejected our request for an unknown reason

From the ingress gateway:

istio-proxy@istio-ingressgateway-784f857457-2fz4v:/$ curl https://istiod.istio-system:443/inject -k

curl: (6) Could not resolve host: istiod.istio-system

Edit: on the master node, /var/lib/kubelet/config.yaml contains

clusterDNS:
- 169.254.25.10

and we can ping this IP from our nodes.
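Note that ping only proves ICMP reachability to the nodelocaldns address; it does not prove the DNS service on port 53 is answering queries. A direct lookup against it would be a stronger test, for example:

```shell
# Query nodelocaldns directly on port 53 instead of relying on ping
nslookup istiod.istio-system.svc.cluster.local 169.254.25.10

# Equivalent check with dig
dig @169.254.25.10 istiod.istio-system.svc.cluster.local +short
```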

I also found this in the CoreDNS pod logs:

E1123 08:57:05.386992       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.233.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
E1123 08:57:05.387108       1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.233.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.233.0.1:443: connect: connection refused
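These errors mean CoreDNS cannot reach the in-cluster kubernetes service VIP (10.233.0.1), so it cannot watch Services/Endpoints and its records go stale, which would explain the "no such host" failures above. For reference, that path could be narrowed down with commands like:

```shell
# Check that the kubernetes Service VIP is backed by the apiserver address(es)
kubectl get endpoints kubernetes

# If endpoints look fine, kube-proxy may not be programming the VIP rules
kubectl -n kube-system get pods -l k8s-app=kube-proxy

# After fixing connectivity, restarting CoreDNS forces it to resync
kubectl -n kube-system rollout restart deployment coredns
```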
