I would like to know whether it is possible to apply liveness and readiness probe checks to multiple containers in a pod, or just to one container in a pod. I tried it with multiple containers, but the probe check fails for container A and passes for container B in the same pod.
Please provide a YAML example of what you're trying so that the community can be more helpful in understanding the problem and suggesting a solution :) – Ostap Jun 21 '21 at 12:16
3 Answers
Welcome to the community.
Answer
It's absolutely possible to apply multiple probes to containers within the pod. What happens next depends on the probe.
There are three probes listed in Container probes which can be used: `liveness`, `readiness` and `startup`. I'll describe `liveness` and `readiness` in more detail:
Liveness
`livenessProbe`: Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a container does not provide a liveness probe, the default state is `Success`.
The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.
If the `livenessProbe` fails, the kubelet will restart the container in the pod, while the pod itself will remain the same (its age as well).
This can also be checked in the container events. The following quote is from Kubernetes in Action by Marko Lukša:
I’ve seen this on many occasions and users were confused why their container was being restarted. But if they’d used `kubectl describe`, they’d have seen that the container terminated with exit code 137 or 143, telling them that the pod was terminated externally.
Readiness
`readinessProbe`: Indicates whether the container is ready to respond to requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is `Failure`. If a container does not provide a readiness probe, the default state is `Success`.
The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
What happens here is that Kubernetes checks whether the webserver in the container is serving requests; if it is not, the `readinessProbe` fails and the pod's IP (generally speaking, the entire pod) is removed from the endpoints, so no traffic is directed to the pod.
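As a minimal sketch of both probes on two containers (the container names, images, ports and paths below are assumptions for illustration, not taken from your setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers-probed
spec:
  containers:
    - name: container-a
      image: nginx                # assumption: any HTTP-serving image
      ports:
        - containerPort: 80
      livenessProbe:              # fails -> kubelet restarts only container-a
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:             # fails -> pod removed from Service endpoints
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
    - name: container-b
      image: example/my-app:1.0   # hypothetical second application
      ports:
        - containerPort: 8080
      livenessProbe:
        httpGet:
          path: /healthz          # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /ready            # hypothetical readiness endpoint
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
```

Each probe is evaluated independently per container: a failing liveness probe restarts only that container, while a failing readiness probe on either container marks the whole pod `NotReady` and removes it from Service endpoints.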
Useful links:
- Container probes - general information and types
- Configure Liveness, Readiness and Startup Probes (practice and examples)

As per the K8S spec, liveness and readiness checks can be executed for every container, and each carries its own template, which is nested into the specific container. See for example: https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/probe/exec-liveness.yaml
So I think it really depends on what you are checking for in the probe, and on how container A could answer in a different fashion than container B.
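For reference, the linked example configures an exec-based liveness probe roughly like this (reproduced in spirit from the Kubernetes docs example; the image and timings may differ in the current version of the page):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: registry.k8s.io/busybox
      args:
        - /bin/sh
        - -c
        # healthy for the first 30 seconds, then the probe starts failing
        - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:                # succeeds while /tmp/healthy exists
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
```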
If you have a need for templating, you should look into kustomize.

Yes, it is possible; I have tried this. Here's what I tried:
- One deployment with 2 replicas.
- Each replica pod with 4 containers.
- Each container with its own liveness probe.
- Each liveness probe used `http-get` to check the container application's health.
A few things to take care of:
- Since `<PODIP>:<CONTAINERPORT>/<ENDPOINT>` is used by the liveness probe to make the HTTP request, one must make sure `<CONTAINERPORT>` is different for each container. Otherwise the pod will go into `CrashLoopBackOff`.
Example:

```yaml
containers:
  - name: container1
    ...
    args:
      - --leader-election=true
      - --http-endpoint=:8080
    ...
    ports:
      - containerPort: 8080
        name: http-ep-1         # port names must be unique within the pod
        protocol: TCP
    ...
    livenessProbe:
      failureThreshold: 1
      httpGet:
        path: /healthz/leader-election
        port: http-ep-1
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 20
      successThreshold: 1
      timeoutSeconds: 10
    ...
  - name: container2
    ...
    args:
      - --leader-election=true
      - --http-endpoint=:8081
    ...
    ports:
      - containerPort: 8081
        name: http-ep-2         # renamed: a duplicate port name would fail validation
        protocol: TCP
    ...
    livenessProbe:
      failureThreshold: 1
      httpGet:
        path: /healthz/leader-election
        port: http-ep-2
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 20
      successThreshold: 1
      timeoutSeconds: 10
    ...
```
Suggestion:
If each container is a separate application, the containers do not depend on each other, and each is important enough that you need a liveness probe for it, then it would be better to deploy them in separate pods (a sketch follows below).
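A minimal sketch of that split, using hypothetical names and images, with each application as its own single-container Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1                    # hypothetical; repeat this pattern per application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: example/app1:1.0   # hypothetical image
          ports:
            - containerPort: 8080
              name: http-endpoint
          livenessProbe:
            httpGet:
              path: /healthz/leader-election
              port: http-endpoint
            initialDelaySeconds: 10
            periodSeconds: 20
```

With one container per pod, each application restarts and scales independently, and the container ports no longer have to be de-duplicated within a single pod.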
