
As part of my application's Helm chart I have a Job which runs database migrations. I've annotated the job with the hook "helm.sh/hook": pre-install,pre-upgrade to ensure migrations are run before the application is deployed. I want to use the same service account and config map that my application deployment uses; however, these resources have not been created at the time the job is executed, resulting in the following error:

Warning FailedCreate 8s job-controller Error creating: pods "db-migrate-" is forbidden: error looking up service account dev-platform/platform: serviceaccount "platform" not found

According to the Helm installation order, the service account and config map should be created before the job. Is that behaviour nullified when running the job as a pre-install hook?

apiVersion: batch/v1
kind: Job
metadata:
  namespace: dev-platform
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  activeDeadlineSeconds: 300
  backoffLimit: 1
  template:
    spec:
      # Share platform service account IAM role.
      serviceAccountName: {{ .Release.Name }}
      securityContext:
        fsGroup: 65534 #  Allow read permissions of AWS token files for IAM service account token.
      restartPolicy: Never
      containers:
        - name: db-migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          envFrom:
            - configMapRef:
                name: platform-config
            - secretRef:
                name: platform-secrets
          # Overwrite APP_COMMAND variable.
          env:
            - name: APP_COMMAND
              value: migrate
            - name: APP_ENVIRONMENT
              value: {{ .Values.image.appEnvironment | quote }}
James Downing

3 Answers


Helm hooks are not limited to Jobs.

You can create the ServiceAccount and ConfigMap in the pre-install phase itself, using the same Helm hook annotation as the Job.

Note: If you need the serviceaccount and configmap available after the pre-install phase, do not set the 'helm.sh/hook-delete-policy' to 'hook-succeeded'.

Example:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
  annotations:
    helm.sh/hook: pre-install
    helm.sh/hook-delete-policy: before-hook-creation
    helm.sh/hook-weight: "-10"
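
The same annotations work for the ConfigMap; as a rough sketch only (the platform-config name comes from the question, while the data key and .Values.database.host are placeholders, not part of this answer):

apiVersion: v1
kind: ConfigMap
metadata:
  name: platform-config
  annotations:
    helm.sh/hook: pre-install,pre-upgrade              # match the Job's hooks
    helm.sh/hook-delete-policy: before-hook-creation   # or hook-succeeded if only the Job needs it
    helm.sh/hook-weight: "-10"                         # lower weight than the Job, so it is created first
data:
  DATABASE_HOST: {{ .Values.database.host | quote }}   # placeholder key/value
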
Dhanuj Dharmarajan
  • This is exactly the approach I ended up implementing. However, could you clarify not setting the `hook-delete-policy`? Coincidentally I have `"helm.sh/hook-delete-policy": hook-succeeded` set for the configmap and secrets required by the job and it seems to work; I just want the configmap and secrets available for the job, and once the job is complete they can be removed. – James Downing Sep 04 '20 at 10:16
  • Yes, if the configmap and serviceaccount are not used after the pre-install phase, you can set the delete-policy to hook-succeeded. They will be available for your job, but not afterwards. – Dhanuj Dharmarajan Sep 04 '20 at 12:37

Yes. From the official docs, a pre-install hook:

Executes after templates are rendered, but before any resources are created in Kubernetes (docs)

I would suggest running the migrations as an init container in your main app's pod. That way, reusing the existing configmaps and service accounts becomes trivial. Init containers need to run to completion before the pod's containers are started, so you can also make sure that the database is migrated before your app starts. See here for the official documentation on init containers.
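
A minimal sketch of that layout, reusing the service account and config map from the question inside the Deployment's pod template (the Deployment name, labels, and main container are placeholders, not something from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: platform
spec:
  selector:
    matchLabels:
      app: platform
  template:
    metadata:
      labels:
        app: platform
    spec:
      serviceAccountName: {{ .Release.Name }}
      initContainers:
        # Must run to completion before the app containers start.
        - name: db-migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          envFrom:
            - configMapRef:
                name: platform-config
            - secretRef:
                name: platform-secrets
          env:
            - name: APP_COMMAND
              value: migrate
      containers:
        - name: platform
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"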

winston
  • Thanks @winston. I was steering away from this approach because every pod being deployed runs the init container. Wouldn't this cause issues, particularly when executing database migrations, if more than one pod is created at the same time? If I do continue down the pre-install hook route, it looks like I'll have to pass the database configuration in a different way (maybe giving the job its own config-map pulled from the same values file as the application). – James Downing Aug 13 '20 at 08:22
  • Hey @JamesDowning! It all comes down to the app you're deploying. Ideally, the container running the migrations should lock the database while doing so, while the other replicas of the pod wait until the migrations are done. This however requires a particularly robust app, which isn't always feasible. An alternative to plain init containers would be to also deploy a plain old Job with Helm that runs the migrations and have init containers that check whether said migration Job has already finished, somewhat like this: https://stackoverflow.com/questions/44686568/tell-when-job-is-complete (see the sketch after these comments). – winston Aug 13 '20 at 08:50
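
The variant linked in the last comment could look roughly like this as an extra init container in the app pod; the kubectl image, the job name, and the timeout are assumptions, and the pod's service account would need RBAC permission to read Jobs:

initContainers:
  - name: wait-for-migrations
    image: bitnami/kubectl:1.28     # assumption: any image that ships kubectl
    command:
      - kubectl
      - wait
      - --for=condition=complete
      - job/db-migrate              # assumption: the migration Job's name
      - --timeout=300s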

When I ran the ServiceAccount creation using Helm hooks as in the previous example, I got this error (a little confusing):

Error: ServiceAccount "demo-33-service-account" is invalid: metadata.labels: Invalid value: "-10": a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue', or 'my_value', or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')

It's because of helm.sh/hook-weight: "-10": the error shows the weight ended up under metadata.labels, where a value starting with "-" is not allowed.
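
For reference, the weight has to sit under metadata.annotations and stay quoted; a minimal sketch of the corrected placement:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-33-service-account
  annotations:                  # not metadata.labels
    helm.sh/hook: pre-install
    helm.sh/hook-weight: "-10"  # quoted so it is parsed as a string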

I also tried using only positive weights (0 for the ServiceAccount and 10 for the Job), but with no luck: the Job started before the ServiceAccount was created and got stuck. When I manually removed the stuck Job, the ServiceAccount was created by the hook successfully. I think this is because hooks execute after templates are rendered, but before any resources are created in Kubernetes (see pre-install in the docs).

Maybe a post-install hook for the Job would help, but I ended up solving the issue with initContainers for my migration job (thanks @winston).