238

How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?


I know there's been talk about the ability to automatically restart pods when a ConfigMap changes, but to my knowledge this is not yet available in Kubernetes 1.2.

So what (I think) I'd like to do is a "rolling restart" of the deployment resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?

Highway of Life
Johan
  • 5
    `$ kubectl set env deployment/my-deployment --env="LAST_RESTART=$(date)" --namespace ...` does the job for me – maciek Mar 29 '19 at 14:58

12 Answers

195

The current best solution to this problem (referenced deep in https://github.com/kubernetes/kubernetes/issues/22368, linked in the sibling answer) is to use Deployments and to treat your ConfigMaps as immutable.

When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.

Not quite as quick as just editing the ConfigMap in place, but much safer.
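
For illustration, here is a minimal sketch of the pattern (the names my-app, my-config-v2, and the image tag are placeholders): include a version in the ConfigMap name, point the Deployment at the new name, and apply both.

# A new, versioned ConfigMap (leave the old my-config-v1 in place until the rollout succeeds)
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-v2
data:
  app.properties: |
    greeting=hello
---
# The Deployment now references the new ConfigMap name; applying this change
# triggers a normal rolling update
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        volumeMounts:
        - name: config
          mountPath: /etc/config
      volumes:
      - name: config
        configMap:
          name: my-config-v2   # was my-config-v1

Applying this with kubectl apply rolls the pods; if the new config breaks them, the Deployment keeps the old ReplicaSet running, as described above.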

Symmetric
  • 2
    This is the approach we've taken as well – Johan Nov 16 '16 at 06:50
  • 15
    Worth mentioning that the new experimental tool `kustomize` supports automatically creating a deterministic configmap hash, meaning you don't need to manually create a new configmap: https://github.com/kubernetes-sigs/kustomize/blob/12d1771bb349e1523bc546e314da63c684a7faf2/examples/configGeneration.md#L5 – Symmetric Aug 13 '18 at 16:49
  • 1
    This is what Spinnaker does behind the scenes, so if you use it, you wouldn't have to worry about this. – Gus Jun 28 '20 at 11:37
  • and how do we do that ? – Stargateur Nov 25 '20 at 11:45
  • good approach but need to handle deletions of old config maps :( – Alok Kumar Singh Dec 28 '20 at 07:19
  • @AlokKumarSingh I still don't know of a clean way to handle cleaning up orphaned ConfigMaps. In general deleting orphaned resources is something that's not very well supported in k8s; I don't know of a "define every resource that should be in this namespace" API call. You can try ownerReferences (https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) but I don't see a good way to apply that to ConfigMaps. I have been experimenting with the Argo deployment manager and I believe it will cover this case -- it handles deleting resources that are no longer in your manifests. – Symmetric Dec 28 '20 at 18:04
  • Much cleaner to do this way https://stackoverflow.com/a/51421527/4106031 Chose this answer. – Alok Kumar Singh Dec 29 '20 at 05:59
  • 1
    Label Selectors are immutable, ended up using this and doing the hard work of cleaning the config maps by following conventions on the name, https://stackoverflow.com/questions/37317003/restart-pods-when-configmap-updates-in-kubernetes/51421527#51421527 – Alok Kumar Singh Dec 29 '20 at 15:19
  • @Symmetric hey adding to your answer on the delete problem https://stackoverflow.com/a/65497844/4106031 – Alok Kumar Singh Dec 29 '20 at 19:45
95

Signalling a pod on config map update is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).

You can always write a custom pid1 that notices the ConfigMap has changed and restarts your app.

You can also, e.g., mount the same ConfigMap in two containers, expose an HTTP health check in the second container that fails if the hash of the ConfigMap contents changes, and shove that in as the liveness probe of the first container (because containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.
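
A simplified single-container variant of that idea, as a sketch (it assumes the image has a shell and md5sum, that the app's config file is /etc/config/app.conf, and that the ConfigMap is mounted as a volume without subPath so file updates propagate; all names are placeholders):

spec:
  containers:
  - name: my-app
    image: my-app:1.0
    # Record a checksum of the mounted config at startup, then start the app
    command:
    - /bin/sh
    - -c
    - md5sum /etc/config/app.conf > /tmp/config.md5 && exec /app/server
    volumeMounts:
    - name: config
      mountPath: /etc/config
    livenessProbe:
      # Fails once the projected file changes; the kubelet then restarts the
      # container, which records a fresh checksum of the new config
      exec:
        command:
        - /bin/sh
        - -c
        - md5sum -c /tmp/config.md5
      periodSeconds: 30
  volumes:
  - name: config
    configMap:
      name: my-config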

Of course if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.
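
For example, assuming the pods carry a label such as app=my-app (a placeholder):

# Delete the pods; their controller recreates them, and the new pods pick up the updated ConfigMap
kubectl delete pod -l app=my-app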

Prashanth B
  • With "deleting pods" you mean: Collecting all pod names, delete one, wait until replaced, delete second one, wait until replaced etc. Correct? – Torsten Bronger Oct 13 '16 at 17:27
  • 16
    using a deployment I would scale it down and then up. You will still have that small amount of down time though. You can do it in one line to reduce that... `kubectl scale deployment/update-demo --replicas=0; kubectl scale deployment/update-demo --replicas=4;` – Nick H Oct 25 '16 at 02:50
  • If you don't want to find all the pods, and don't care about downtime - just remove the RC and then re-create the RC. – Drew Oct 26 '16 at 21:58
  • 1
    Does this mean the volume it’s mounted on is updated and you just need to re-read the file on the pod without restarting the whole pod? – Matt Williamson Apr 13 '18 at 11:54
  • @NickH Quick and dirty, fortunately the downtime was acceptable in my case and this worked great, thanks! – ChocolateAndCheese Apr 19 '18 at 21:02
  • 1
    To avoid downtime, can we scale it up, say from one replica to 2 and then kill the older instance? Would this approach work? – xbmono Oct 07 '20 at 22:16
  • @xbmono Only if you kill the second one afterwards. Otherwise it would only have an instance with the old secret and a new one with the new secret. Ugly mess. If you want to restart all pods then you have to scale first down and then up as NickH mentioned above. – mbaldi May 03 '21 at 14:39
  • 3
    Easier to run `kubectl rollout restart deployment/my-deploy` and k8s will manage a rolling restart similar to how updated deploys work. This also works with DaemonSets and StatefulSets – nijave Jun 08 '22 at 16:51
  • Yeah a rollout restart is definitely the way to go, it means you maintain HA (rather than scale down then up) and all pods will get new config map / secrets – Oliver Feb 23 '23 at 23:32
80

The best way I've found to do it is to run Reloader.

It allows you to define the ConfigMaps or Secrets to watch; when they get updated, a rolling update of your deployment is performed. Here's an example:

You have a deployment foo and a ConfigMap called foo-configmap, and you want to roll the pods of the deployment every time the ConfigMap is changed. Run Reloader with:

kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

Then specify this annotation in your deployment:

kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
  name: foo
...
Wytrzymały Wiktor
George Miller
  • 1
    Reloader is compatible with kubernetes >= 1.9 – jacktrade Jul 20 '20 at 11:15
  • But I don't want to roll the pods of the deployment every time the configmap is changed; I want the configmap volume files to change silently, without restarting a single pod. – Steve Wu Feb 11 '22 at 07:57
  • 2
    @SteveWu that already happens https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically but if your running application needs to be signalled or restarted to pick up the updated files, that's what this question is about. – jbg May 09 '22 at 06:45
59

Helm 3 doc page

Oftentimes configmaps or secrets are injected as configuration files in containers. Depending on the application, a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change, the application keeps running with the old configuration, resulting in an inconsistent deployment.

The sha256sum function can be used together with the include function to ensure a deployment's template section is updated if another spec changes:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]

In my case, for some reason, $.Template.BasePath didn't work but $.Chart.Name did:

spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-app
      annotations:
        checksum/config: {{ include (print $.Chart.Name "/templates/" $.Chart.Name "-configmap.yaml") . | sha256sum }}
RichVel
quanta
  • 22
    Not applicable to general Kubernetes usage, only applicable to Helm – Emii Khaos Apr 16 '18 at 14:25
  • 4
    The answer is helpful but probably not relevant to this question – Anand Singh Kunwar Jul 04 '18 at 13:50
  • 1
    `helm` 3 was released recently. Thus, the link is outdated. It points to `master` branch. The following URL will lead to (currently) latest `helm` 2 docs: https://github.com/helm/helm/blob/release-2.16/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change – Marcel Hoyer Nov 27 '19 at 17:03
  • Cool solution. I changed to sha1sum, as in my case sha256sum had 65 characters which resulted in `Deployment.apps "xxx" is invalid: metadata.labels: Invalid value: "xxx": must be no more than 63 characters`. Alternative would be `| trunc 63`, but sha1sum should be "more unique". – iptizer Feb 03 '20 at 16:58
  • Link for helm v3: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments – Radek Liska Apr 29 '21 at 09:14
  • This will not work unless you apply the changes using HELM/ kubectl. – Vishrant Sep 28 '22 at 04:41
22

For k8s >= 1.15, doing a rollout restart worked best for me as part of CI/CD, with the app's configuration path hooked up to a volume mount. A Reloader plugin or setting restartPolicy: Always in the deployment manifest YAML did not work for me. No application code changes were needed; it worked for both static assets and microservices.

kubectl rollout restart deployment/<deploymentName> -n <namespace> 
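
For example, a typical CI/CD step (the manifest file, deployment name, and namespace below are placeholders) might look like:

kubectl apply -f app-configmap.yaml -n my-namespace
kubectl rollout restart deployment/my-app -n my-namespace
# Optionally block until the rollout completes
kubectl rollout status deployment/my-app -n my-namespace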
Samveen
Vinay
16

You can update a metadata annotation that is not relevant for your deployment. It will trigger a rolling update.

For example:

    spec:
      template:
        metadata:
          annotations:
            configmap-version: "1"
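
A hedged example of bumping that annotation from the command line (the deployment name my-deploy is a placeholder):

kubectl patch deployment my-deploy -p \
  '{"spec":{"template":{"metadata":{"annotations":{"configmap-version":"2"}}}}}'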
Symmetric
Maoz Zadok
  • I'm looking for docs about metadata: labels: configmap-version: 1 – c4f4t0r Aug 17 '18 at 11:12
  • 14
    metadata label changes do not trigger a restart of the pods – dan carter Jan 24 '19 at 21:47
  • 2
    This answer has upvotes so I need to ask. If we update the metadata, will the Kubernetes cluster trigger a rolling update? @maoz-zadok – titus Jan 08 '20 at 19:46
  • 3
    I believe this works as long as the metadata label is under `template.spec` – Saikiran Yerram May 05 '20 at 05:04
  • 3
    Confirmed using labels under `spec.template.metadata.labels` works! (have edited the answer it's under review). Really elegant way to do this :+1 – Alok Kumar Singh Dec 28 '20 at 07:25
  • I am getting this error `MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable` when i try to update the selector label, and template metadata label needs to be same as selector label :( this won't work. https://github.com/kubernetes/client-go/issues/508 – Alok Kumar Singh Dec 29 '20 at 14:12
  • This can work only when you add the key in the labels and not update it. – Alok Kumar Singh Dec 29 '20 at 19:00
  • 2
    I recommend using an annotation instead of a label, for this approach, since you can freely update annotations, and labels can't be mutated. Or in more recent versions of kubectl can simply call `kubectl rollout restart deployment/mydeployname` to trigger a new rollout of the same config. https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-restart-em- – Symmetric Dec 29 '20 at 21:20
10

Had this problem where the Deployment was in a sub-chart and the values controlling it were in the parent chart's values file. This is what we used to trigger restart:

spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ tpl (toYaml .Values) . | sha256sum }}

Obviously this will trigger a restart on any value change, but it works for our situation. What was originally in the child chart would only work if the config.yaml in the child chart itself changed:

    checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
Bryji
4

Consider using kustomize (or kubectl apply -k) and then leveraging its powerful configMapGenerator feature. For example, from: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/configmapgenerator/

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
# Just one example of many...
- name: my-app-config
  literals:
  - JAVA_HOME=/opt/java/jdk
  - JAVA_TOOL_OPTIONS=-agentlib:hprof

  # Explanation below...
  - SECRETS_VERSION=1

Then simply reference my-app-config in your deployments. When building with kustomize, it'll automatically find and update references to my-app-config with an updated suffix, e.g. my-app-config-f7mm6mhf59.

Bonus, updating secrets: I also use this technique for forcing a reload of secrets (since they're affected in the same way). While I personally manage my secrets completely separately (using Mozilla sops), you can bundle a config map alongside your secrets, so for example in your deployment:

# ...
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: my-app:tag
          envFrom:
            # For any NON-secret environment variables. Name is automatically updated by Kustomize
            - configMapRef:
                name: my-app-config

            # Defined separately OUTSIDE of Kustomize. Just modify SECRETS_VERSION=[number] in the my-app-config ConfigMap
            # to trigger an update in both the config as well as the secrets (since the pod will get restarted).
            - secretRef:
                name: my-app-secrets

Then add a variable like SECRETS_VERSION to your ConfigMap, as I did above. Each time you change my-app-secrets, just increment the value of SECRETS_VERSION; it serves no purpose other than to trigger a change in the kustomize'd ConfigMap name, which in turn results in a restart of your pod.
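
For illustration, the generator entry after such a bump would look like this (same placeholder names as above):

configMapGenerator:
- name: my-app-config
  literals:
  - JAVA_HOME=/opt/java/jdk
  - JAVA_TOOL_OPTIONS=-agentlib:hprof
  # Bumped from 1 to 2 purely to force a new generated ConfigMap name (and therefore a rollout)
  - SECRETS_VERSION=2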

patricknelson
2

I also banged my head against this problem for some time and wanted to solve it in an elegant but quick way.

Here are my 20 cents:

  • The answer using labels as mentioned here won't work if you are updating labels, but it would work if you always add new labels. More details here.

  • The answer mentioned here is, in my view, the most elegant way to do this quickly, but it has the problem of handling deletions. I am adding on to that answer:

Solution

I am doing this in a Kubernetes Operator where only a single task is performed in one reconciliation loop.

  • Compute the hash of the ConfigMap data. Say it comes out as v2.
  • Create ConfigMap cm-v2 with labels version: v2 and product: prime if it does not exist, and RETURN. If it exists, GO BELOW.
  • Find all the Deployments which have the label product: prime but do not have version: v2. If such deployments are found, DELETE them and RETURN; ELSE GO BELOW.
  • Delete all ConfigMaps which have the label product: prime but do not have version: v2; ELSE GO BELOW.
  • Create Deployment deployment-v2 with labels product: prime and version: v2, with ConfigMap cm-v2 attached, and RETURN; ELSE do nothing.

That's it! It looks long, but this could be the fastest implementation, and it is in line with the principle of treating infrastructure as cattle (immutability).

Also, the above solution works when your Kubernetes Deployment has the Recreate update strategy. The logic may require small tweaks for other scenarios.
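
The DELETE steps above boil down to label selectors; a rough kubectl equivalent (using the labels from the steps, outside of an operator) would be:

# Step 3: remove Deployments still pointing at an older config version
kubectl delete deployment -l 'product=prime,version!=v2'
# Step 4: remove stale ConfigMaps
kubectl delete configmap -l 'product=prime,version!=v2'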

Alok Kumar Singh
2

Native option, without third-party tools

Kubernetes automatically reloads the ConfigMap if it is mounted as a volume (this does not work if a subPath is used).

Example : https://medium.com/@harsh.manvar111/update-configmap-without-restarting-pod-56801dce3388

Third-party option

Regarding the question "How do I automatically restart Kubernetes pods and pods associated with deployments when their configmap is changed/updated?": if you are consuming the ConfigMap as environment variables, you have to use an external option.

When a ConfigMap currently consumed in a volume is updated, projected keys are eventually updated as well. The kubelet checks whether the mounted ConfigMap is fresh on every periodic sync. However, the kubelet uses its local cache for getting the current value of the ConfigMap. The type of the cache is configurable using the ConfigMapAndSecretChangeDetectionStrategy field in the KubeletConfiguration struct. A ConfigMap can be either propagated by watch (default), ttl-based, or by redirecting all requests directly to the API server. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the Pod can be as long as the kubelet sync period + cache propagation delay, where the cache propagation delay depends on the chosen cache type (it equals to watch propagation delay, ttl of cache, or zero correspondingly).

Official document : https://kubernetes.io/docs/concepts/configuration/configmap/#mounted-configmaps-are-updated-automatically

ConfigMaps consumed as environment variables are not updated automatically and require a pod restart.

Simple example Configmap

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: default
data:
  foo: bar

POD config

spec:
  containers:
  - name: configmaptestapp
    image: <Image>
    volumeMounts:
    - mountPath: /config
      name: configmap-data-volume
    ports:
    - containerPort: 8080
  volumes:
    - name: configmap-data-volume
      configMap:
        name: config
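
For contrast, consuming the same ConfigMap as environment variables (which, as noted above, does not update automatically and needs a pod restart) would look roughly like this:

spec:
  containers:
  - name: configmaptestapp
    image: <Image>
    envFrom:
    - configMapRef:
        name: config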
Harsh Manvar
0

Adding the immutable property to the ConfigMap avoids the problem entirely. Using config hashing helps with a seamless rolling update, but it does not help with a rollback. You can take a look at this open-source project, 'Configurator': https://github.com/gopaddle-io/configurator.git. 'Configurator' works as follows, using custom resources:

  1. Configurator ties the deployment lifecycle to the ConfigMap. When the ConfigMap is updated, a new version is created for that ConfigMap. All the deployments that were attached to the ConfigMap get a rolling update with the latest ConfigMap version tied to them.

  2. When you roll back the deployment to an older version, it reverts to the ConfigMap version it had before the rolling update.

This way you can maintain versions of the ConfigMap and facilitate rollout and rollback of your deployment along with the ConfigMap.

ashvin
-2

Another way is to stick it into the command section of the Deployment:

...
command: [ "echo", "
  option = value\n
  other_option = value\n
" ]
...

Alternatively, to make it more ConfigMap-like, use an additional Deployment that just hosts that config in its command section; execute kubectl create on it while adding a unique 'version' to its name (e.g. a hash of the content), and modify all the deployments that use that config:

...
command: [ "/usr/sbin/kubectl-apply-config.sh", "
  option = value\n
  other_option = value\n
" ]
...

I'll probably post kubectl-apply-config.sh if it ends up working.

(don't do that; it looks too bad)

Velkan
  • OP wants to know how to update pods when configmap updates have been made. This only states an alternative way to get data into a pod. Not to mention, this technique isn't recommended. Its much better to track configurations in a configmap than pass values via command. – phbits Jan 24 '22 at 16:17
  • @phbits well, if exactly that has become possible after half a decade then great. Otherwise pick your workaround. – Velkan Jan 24 '22 at 16:55