I have an EKS cluster with the aws-alb-ingress-controller managing an AWS ALB that points to the EKS cluster.
After a rolling update of one of the deployments, the new application version failed, so the new Pod never starts (it is stuck in status CrashLoopBackOff). However, the previous version of the Pod is still running. Despite that, the service's target still appears to be unhealthy:
This means all traffic is now redirected to the default backend, which is a different service. The Kubernetes service backing the deployment is of type NodePort:
Type: NodePort
IP: 172.20.186.130
Port: http-service 80/TCP
TargetPort: 5000/TCP
NodePort: http-service 31692/TCP
Endpoints: 10.0.3.55:5000
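For context, the deployment uses what I believe is the default rolling update strategy, and I have not configured a readiness probe. A minimal sketch of the relevant part of the spec (names and values here are assumptions for illustration, not my exact manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-service          # assumed name, matching the service above
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%     # Kubernetes defaults
      maxSurge: 25%
  template:
    spec:
      containers:
        - name: app
          ports:
            - containerPort: 5000   # matches the service TargetPort above
          # no readinessProbe configured, so the kubelet only checks
          # that the container process is running
```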
What is causing the endpoint to become unhealthy? I expected traffic to simply keep flowing to the old version of the Pod that is still running. Is there any way I can ensure that the endpoint remains healthy during a failed rollout?