The problem is that after applying a new deployment with
kubectl apply -f deployment.yml
(let's say a deployment with one replica),
Kubernetes will create a second pod and shut down the previous one - OK so far.
But immediately after kubectl apply I would like to detect in CI/CD whether the deployment was successful, and in any case (whether the rollout succeeded or failed) fetch the log from one of the newly deployed pods, so the CI/CD log contains as much information as possible for determining what went wrong.
So I'm using
kubectl rollout status deployment deployment-name
which waits for the deployment to roll out. Immediately afterwards, though, you end up with two pods: one with status "Running" and the other "Terminating".
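For context, the CI/CD step looks roughly like this (deployment-name and the timeout value are placeholders):

kubectl apply -f deployment.yml
# wait for the rollout, but don't fail the step yet - we want the logs either way
kubectl rollout status deployment deployment-name --timeout=120s || ROLLOUT_FAILED=1
# here: fetch logs from the newly created pod, then fail the step if ROLLOUT_FAILED is set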
Now the problematic part: normally I would use something like
kubectl get pods --selector=app=deployment-name --output=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase=Running
but unfortunately it returns the names of both pods ("Running" and "Terminating"), separated by a space.
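For example (the pod names are made up, just to show the shape of the output):

deployment-name-7f9c6bd5b8-abcde deployment-name-6d4f8c9f7d-xyz12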
I've also tried
kubectl get pods --selector=app=deployment-name --output=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase=Running,status.phase!=Terminating
(the documentation says field selectors can be chained like this), but for some reason this returns exactly the same result: both pods, Running and Terminating.
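My suspicion is that "Terminating" is not actually a value of status.phase at all (it seems to be derived from the pod's deletionTimestamp), so both pods still report the phase Running. A quick way to check, assuming the same app=deployment-name label:

kubectl get pods --selector=app=deployment-name --output=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.metadata.deletionTimestamp}{"\n"}{end}'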
The question is:
How do I properly exclude Terminating pods from the result?