
I wanted to know if it is possible to have a job in Kubernetes that runs every hour and deletes certain pods. I need this as a temporary stopgap to fix an issue.

Rico
user1555190

3 Answers


Use a CronJob (1, 2) to run the Job every hour.
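A minimal CronJob skeleton for the hourly schedule might look roughly like this (the names and image are placeholders, on older clusters the apiVersion is batch/v1beta1, and the actual deletion logic goes in the container; the ServiceAccount comes from the RBAC steps below):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: pod-cleanup
spec:
  schedule: "0 * * * *"                    # at the top of every hour
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pod-cleaner  # ServiceAccount from the RBAC steps below
          containers:
          - name: cleanup
            image: image-with-cleanup-logic
            command: ["sh", "-c", "echo 'delete the pods here'"]
          restartPolicy: OnFailure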

The K8S API can be accessed from a Pod (3) with the proper permissions. When a Pod is created, the default ServiceAccount is assigned to it (4). The default ServiceAccount has no RoleBinding, so neither the default ServiceAccount nor the Pod has permission to invoke the API.

If a Role (with permissions) is created and bound to the default ServiceAccount, then all the Pods in that namespace will get those permissions by default. So it's better to create a new ServiceAccount instead of modifying the default one.

So, here are the steps for RBAC (5):

  • Create a ServiceAccount
  • Create a Role with proper permissions (deleting pods)
  • Map the ServiceAccount with the Role using RoleBinding
  • Use the above ServiceAccount in the Pod definition
  • Create a pod/container with the code/commands to delete the pods

I know it's a bit confusing, but that's the way K8S works.
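Put together, the RBAC objects from the steps above might look roughly like this (pod-cleaner and the namespace are placeholder names; adjust the verbs to what the cleanup actually needs):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-cleaner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-cleaner
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-cleaner
  namespace: default
subjects:
- kind: ServiceAccount
  name: pod-cleaner
  namespace: default
roleRef:
  kind: Role
  name: pod-cleaner
  apiGroup: rbac.authorization.k8s.io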

Praveen Sripati

Yes, it's possible.

I think the easiest way is just to call the Kubernetes API directly from a Job. Assuming RBAC is configured, something like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup
spec:
  template:
    spec:
      # serviceAccountName belongs in the pod template spec, not the Job spec
      serviceAccountName: service-account-that-has-access-to-api
      containers:
      - name: cleanup
        image: image-that-has-curl
        # run through a shell so the $(cat ...) token lookup is expanded
        command:
        - sh
        - -c
        - >
          curl -ik -X DELETE
          -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
          https://kubernetes.default.svc.cluster.local/api/v1/namespaces/{namespace}/pods/{name}
      restartPolicy: Never
  backoffLimit: 4

You can also run a kubectl proxy sidecar to connect to the cluster over localhost. More information here.
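Sketching that idea inside the Job's pod template (assuming an image that ships kubectl, such as bitnami/kubectl; names are placeholders):

      containers:
      - name: proxy
        image: bitnami/kubectl               # any image that ships kubectl
        command: ["kubectl", "proxy", "--port=8001"]
      - name: cleanup
        image: image-that-has-curl
        command:
        - sh
        - -c
        - curl -X DELETE http://localhost:8001/api/v1/namespaces/{namespace}/pods/{name}

Note that the proxy container never exits on its own, so the Job won't be marked complete until that container is stopped.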

Or even running plain kubectl in a pod is also an option: Kubernetes - How to run kubectl commands inside a container?
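With that approach, the cleanup container simply runs kubectl instead of curl, for example (the image name is only an example; the ServiceAccount still needs permission to delete pods):

      containers:
      - name: cleanup
        image: bitnami/kubectl               # any image that ships kubectl
        command: ["kubectl", "delete", "pod", "{name}", "-n", "{namespace}"]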

Rico

There is possibly another workaround.

You could create a liveness probe (super easy if you have none already) that doesn't run until after one hour and always fails.

livenessProbe:
  tcpSocket:
    port: 1234
  initialDelaySeconds: 3600

This will wait 3600 seconds (1 hour), then try to connect to port 1234; if that fails, the kubelet will kill and restart the container (not the pod!).

Andreas Wederbrand
  • If the command returns a non-zero value, the kubelet kills the Container and restarts it. - https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-tcp-liveness-probe - So, we are back to square one. – Praveen Sripati Oct 11 '18 at 16:31
  • Yes, this is what is supposed to happen when a pod is deleted. Or is it not a part of a replicaset/deployment? – Andreas Wederbrand Oct 11 '18 at 19:47
  • but would the pod, or the app inside the pod, be usable in the meantime? It's a web application serving traffic – user1555190 Oct 11 '18 at 20:21
  • The OP was about Pod deletion. But it's not getting deleted in this solution. – Praveen Sripati Oct 11 '18 at 23:50
  • @user1555190, no. As soon as it gets deleted it is taken out of any service that wraps it. Once the container has been restarted it will be available in the service again. If the purpose is for the pod to be unavailable this won't work. If the purpose is for the app to restart this will work perfectly. – Andreas Wederbrand Oct 12 '18 at 07:25