
I am deploying a pod to GKE, but my backend services are in an unhealthy state.

The deployment went through via `helm install`, but the ingress reports a warning that says *Some backend services are in UNHEALTHY state*. I have tried checking the logs but do not know exactly what to look for. I also already have liveness and readiness probes running.

What can I do to bring the ingress back to a healthy state? Thanks

[Screenshot: warning error in the GKE UI]

Aro
  • Please put the detailed error as text, not an image or link. – Rakesh Gupta Mar 14 '22 at 15:59
  • Does this help? https://stackoverflow.com/a/42971328/2777988 – Rakesh Gupta Mar 14 '22 at 16:01
  • Please provide which GKE version you are using. How did you deploy `Nginx`? Can you share your `helm` command? Could you share your `Ingress` YAML, **without** private information like your IP or private `hosts`? Please check whether all pods are running correctly with `$ kubectl get po -n ` and share the output in your question. Are all pods in `Ready` and `Running` status? The message says `Some backend`, not `All backend`, which indicates that only some pods have an issue (maybe insufficient CPU or memory). – PjoterS Mar 15 '22 at 10:46

3 Answers

2

Without more details it is hard to determine the exact cause.

First, note that your error message is *Some backend services are in UNHEALTHY state*, not *All backend services are in UNHEALTHY state*. This indicates that only some of your backends are affected.

There can be many reasons: whether you are using GCP Ingress or Nginx Ingress, your `externalTrafficPolicy` configuration, whether you are using preemptible nodes, your `livenessProbe` and `readinessProbe` settings, your health checks, etc.

Since only some of your backends are affected, the only thing I can do with the current information is suggest some debugging steps.

  • Using `$ kubectl get po -n <namespace>`, check that all your pods are working correctly: all containers within the pods are `Ready` and the pod status is `Running`. If needed, check the logs of a suspicious pod with `$ kubectl logs <podName> -c <containerName>`. In general, you should check all the pods the load balancer is pointing to.
  • Confirm that `livenessProbe` and `readinessProbe` are configured properly and that their endpoints respond with HTTP 200.
  • Describe your ingress with `$ kubectl describe ingress <yourIngressName>` and check the backends.
  • Check whether you have configured your health checks properly according to the GKE Ingress for HTTP(S) Load Balancing - Health Checks guide.
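As a minimal sketch of the probe point above (the endpoint path, port, and timings are assumptions, not taken from the question), a container spec with both probes could look like this. When a GKE Ingress is created, the load balancer's health check is derived from the `readinessProbe`, so the endpoint it hits must return HTTP 200:

```yaml
# Hypothetical Deployment fragment: both probes hit an assumed /healthz
# endpoint on an assumed port 8080; both must return HTTP 200 for the
# pod (and the derived GKE health check) to be considered healthy.
containers:
  - name: my-app            # hypothetical container name
    image: my-app:latest    # hypothetical image
    ports:
      - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /healthz      # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
```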

If you still cannot solve this issue with the debugging options above, please provide more details about your environment, including logs (without private information).


PjoterS
0

In GKE you can define a `BackendConfig` to set up custom health checks. You can configure this using the guide below to bring the ingress backends back to a HEALTHY state.

https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#direct_health
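For illustration, a minimal sketch of such a `BackendConfig` attached to a `Service`, following the guide above (all names, paths, and ports here are hypothetical):

```yaml
# Hypothetical BackendConfig defining a custom health check, linked to a
# Service via the cloud.google.com/backend-config annotation.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig     # hypothetical name
spec:
  healthCheck:
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz    # assumed endpoint; must return HTTP 200
    port: 8080               # assumed container port
---
apiVersion: v1
kind: Service
metadata:
  name: my-service           # hypothetical name
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: my-app              # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
```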

-2

If you have `kubectl` access to your pods, you can run `kubectl get pod`, and then `kubectl logs -f <pod-name>`. Review the logs and find the error(s).

  • Thanks for the response. What pod exactly? I do not see any ingress pod in my list of pods – Aro Mar 11 '22 at 12:41
  • Backend services are the services behind the ingress; meaning the actual pods that should receive and process the request, not an ingress pod. Please check the logs of the application pods, starting with the pod that should handle incoming requests. – Meir Pechthalt Mar 12 '22 at 12:21