
I'm attempting to create a Kubernetes CronJob to run an application every minute.

A prerequisite is that I need to get my application code onto the container that runs within the CronJob. I figure the best way to do so is to use a persistent volume and a PersistentVolumeClaim, then define the volume and mount it into the container. I've done this successfully with containers running within a Pod, but it appears to be impossible within a CronJob. Here's my attempted configuration:

apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: update_db
spec:
  volumes:
  - name: application-code
    persistentVolumeClaim:
      claimName: application-code-pv-claim
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: update-fingerprints
            image: python:3.6.2-slim
            command: ["/bin/bash"]
            args: ["-c", "python /client/test.py"]
          restartPolicy: OnFailure

The corresponding error:

error: error validating "cron-applications.yaml": error validating data: found invalid field volumes for v2alpha1.CronJobSpec; if you choose to ignore these errors, turn validation off with --validate=false

I can't find any resources that show that this is possible. So, if not possible, how does one solve the problem of getting application code into a running CronJob?

theoneandonly2

3 Answers


A CronJob uses a PodTemplate, just like everything else that is based on Pods, and can therefore use volumes. You placed your volume specification directly in the CronJobSpec instead of in the PodSpec; use it like this:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: update-db
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: update-fingerprints
            image: python:3.6.2-slim
            command: ["/bin/bash"]
            args: ["-c", "python /client/test.py"]
            volumeMounts:
            - name: application-code
              mountPath: /where/ever
          restartPolicy: OnFailure
          volumes:
          - name: application-code
            persistentVolumeClaim:
              claimName: application-code-pv-claim

Simon Tesar
  • I can't edit it, but the YAML is invalid: under `containers:`, `image:` and the following keys must be at the same level as `name:` (except `mountPath`); under `volumes:`, `persistentVolumeClaim` and `claimName` must be at the same level as `name:` – Nicolas Pepinster Jan 25 '19 at 13:51
  • I think you're right, @NicolasPepinster, thanks for that. – Simon Tesar Jan 29 '19 at 07:15
  • Hi, if `claimName` is at the same indentation level as `persistentVolumeClaim`, the following error occurs: "invalid type for io.k8s.api.core.v1.PersistentVolumeClaimVolumeSource: got "string", expected "map"". So I think `claimName` should be indented to the right. – l.cotonea Feb 15 '21 at 20:53
  • @l.cotonea obviously :-) Funny how that took two years. – Simon Tesar Feb 16 '21 at 07:34
  • I updated the example to Kubernetes 1.20 and fixed all two/four space indentation errors. – Simon Tesar Feb 16 '21 at 07:49

For the other question in there: "how does one solve the problem of getting application code into a running CronJob?"

You build your own image that contains the code. This is how it is normally done.

FROM python:3.6.2-slim
ADD test.py /client/test.py

CMD ["python", "/client/test.py"]
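
The Dockerfile ADDs a `test.py`; its contents aren't shown in the question, but a minimal placeholder (hypothetical logic, just so the image builds and each once-a-minute run is observable) might look like:

```python
# Hypothetical /client/test.py: appends a timestamp on every CronJob run,
# so the once-a-minute schedule is easy to verify afterwards.
import datetime
import pathlib

LOG_PATH = pathlib.Path("/tmp/runs.log")  # path inside the container; adjust as needed


def record_run(log_path=LOG_PATH):
    """Append a UTC timestamp to the log file and return it."""
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(log_path, "a") as fh:
        fh.write(stamp + "\n")
    return stamp


if __name__ == "__main__":
    print("recorded run at", record_run())
```

After a few minutes of the schedule running, the log file will contain one timestamp per completed Job.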

Build the image and push it to the Docker registry:

docker build -t myorg/update-fingerprints .
docker push myorg/update-fingerprints

Use this image in the descriptor.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: update-db
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: update-fingerprints
            image: myorg/update-fingerprints
            imagePullPolicy: Always
          restartPolicy: OnFailure

This requires thinking quite differently about configuration management and version control.

Gudlaugur Egilsson

There is now another option: generic ephemeral volumes, available since Kubernetes 1.21.

https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes

kind: Pod
apiVersion: v1
metadata:
  name: my-app
spec:
  containers:
    - name: my-frontend
      image: busybox:1.28
      volumeMounts:
      - mountPath: "/scratch"
        name: scratch-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: scratch-volume
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: [ "ReadWriteOnce" ]
            resources:
              requests:
                storage: 1Gi
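
The Pod example above carries over directly to the question's CronJob: the ephemeral volume goes under `.spec.jobTemplate.spec.template.spec.volumes`. A sketch (assuming the cluster has a default StorageClass, since `storageClassName` is omitted):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: update-db
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: update-fingerprints
            image: python:3.6.2-slim
            command: ["/bin/bash", "-c", "python /client/test.py"]
            volumeMounts:
            - name: application-code
              mountPath: /client
          restartPolicy: OnFailure
          volumes:
          - name: application-code
            ephemeral:
              volumeClaimTemplate:
                spec:
                  accessModes: ["ReadWriteOnce"]
                  resources:
                    requests:
                      storage: 1Gi
```

Each Job run gets a freshly provisioned PVC that is deleted along with the Pod, so this suits scratch space rather than pre-loaded application code.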
  • I'm seeing `storageclass.storage.k8s.io "scratch-storage-class" not found` I must be doing something wrong. Do I need to create storage class first? – Teebu Jul 19 '22 at 00:46
  • The storage class is an optional parameter; if not specified, the default one is used: https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/ – Alexey Gavrilov Jul 21 '22 at 13:37
  • I'm confused: how do I create this ephemeral storage class? How do I use it? It's not available in my definitions and not standard in Azure. – Teebu Jul 21 '22 at 18:21
  • CronJob .spec.jobTemplate.spec.template.volumes[ephemeral] – Alexey Gavrilov Jul 29 '22 at 15:56
  • Ok, that makes sense now; I was using a storage class that didn't exist, and ephemeral can use any storage class that is available, or the default if omitted. My issue now is that the Job shows a warning for the PVC after it completes: Lens shows "Warning: PVC not found" after successful completion of the job. – Teebu Jul 30 '22 at 17:48
  • Yes, that is the meaning of ephemeral. A disk is created for the duration of the work and deleted after use. With this approach, kube nodes have a small root disk. – Alexey Gavrilov Aug 01 '22 at 09:06
  • After the cronjob finishes, the warning says the PVC is deleted, but the previous one is only deleted when another cronjob runs. I can still see the PV and the PVC listed even after the job is finished. – Teebu Aug 01 '22 at 20:38