
I am using a ReplicaSet to manage my pods. If my app crashes after it has started, I want the ReplicaSet to restart it. But I have a scenario in which the ReplicaSet should not restart my app, or should only restart it a limited number of times.

When my app starts successfully, it returns an OK result from the /health endpoint. Here is the scenario: I push some changes that break the app, so it crashes on startup and never returns an OK result from /health. When those changes are applied to Kubernetes, it makes no sense for the ReplicaSet to keep restarting it, because it will always fail. I know that the ReplicaSet restart policy is Always, but is there any way to make this work? This is what I expect:

When the app starts successfully (returns an OK result from the /health endpoint), the ReplicaSet should always restart it if it crashes during runtime.

When the app doesn't start successfully (never returns an OK result from the /health endpoint), the ReplicaSet shouldn't restart it more than 3 times, because that is pointless. Instead, it should keep the old version of the app running.

My deployment file is basic:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apideployment
  namespace: dilshod
spec:
  selector:
    matchLabels:
      app: api_deploymentpod
  template:
    metadata:
      labels:
        app: api_deploymentpod
    spec:
      containers:
      - image: komdil/app:1.0.19
        imagePullPolicy: Always
        name: appcontainer
        ports:
        - containerPort: 80
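For reference, this is how the /health check could be wired into the container as a probe (just a sketch, not my current manifest; I'm assuming /health is served on port 80):

      containers:
      - image: komdil/app:1.0.19
        imagePullPolicy: Always
        name: appcontainer
        ports:
        - containerPort: 80
        # hypothetical probe against the /health endpoint
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
          failureThreshold: 3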
Dilshod K

1 Answer


Unfortunately, a Kubernetes ReplicaSet doesn't allow you to do what you ask: it will always try to bring the number of replicas of your Pod back to the desired state (with a back-off delay that increases after each failed restart).

You can specify how many Pods should run concurrently by setting .spec.replicas. The ReplicaSet will create/delete its Pods to match this number.

If you do not specify .spec.replicas, then it defaults to 1.

https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/#replicas
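For example, the manifest in your question could pin the count explicitly (a sketch; three replicas is an arbitrary choice):

spec:
  replicas: 3   # the ReplicaSet creates/deletes Pods to match this; defaults to 1 if omitted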

The spec of a Pod has a restartPolicy field with possible values Always, OnFailure, and Never. The default value is Always.

The restartPolicy applies to all containers in the Pod. restartPolicy only refers to restarts of the containers by the kubelet on the same node. After containers in a Pod exit, the kubelet restarts them with an exponential back-off delay (10s, 20s, 40s, …), that is capped at five minutes. Once a container has executed for 10 minutes without any problems, the kubelet resets the restart backoff timer for that container.

https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
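For illustration, the field sits at the top level of the Pod spec (a minimal bare-Pod sketch with a hypothetical name; note that a Deployment's Pod template only accepts Always):

apiVersion: v1
kind: Pod
metadata:
  name: restart-demo          # hypothetical name
spec:
  restartPolicy: OnFailure    # Always (default) | OnFailure | Never
  containers:
  - name: appcontainer
    image: komdil/app:1.0.19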

It isn't clear to me why you would want to block this mechanism if the Pod (for whatever reason) doesn't start correctly... But if you still want to achieve your goal, you would need to implement a monitoring system that checks the status of the Pod and counts how many restarts it makes (perhaps within a given time frame), and then triggers a workflow (Jenkins, GitHub Actions, etc.) that scales your Deployment to 0.

https://kubernetes.io/docs/reference/kubectl/cheatsheet/#scaling-resources
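For example, with the names from your manifest, such a workflow could run:

kubectl scale deployment apideployment --replicas=0 -n dilshod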

https://stackoverflow.com/a/51245203/21404450

glv