44

I have a running pod and I want to change one of its containers' environment variables and have the change take effect immediately. Can I achieve that? If so, how?

Lucy Panda

7 Answers

21

Simply put, and in Kubernetes terms: you cannot.

The environment of a Linux process is established at process startup, and there are no Kubernetes tools that can change it in place. For example, if you make a change to your Deployment (I assume you use one to create the pods), it will roll the underlying pods.

Now, that said, there is a really hacky solution, reported under "Is there a way to change the environment variables of another process in Unix?", that involves attaching to the running process with GDB.

Also, remember that even if you could do this, the application logic would still need to watch for such changes, instead of, as is usually the case, just evaluating configuration from environment variables once at startup.
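The "environment is fixed at startup" point can be demonstrated outside Kubernetes entirely. A minimal Python sketch (not from the answer, just an illustration): a child process keeps the copy of the environment it was started with, no matter what the parent changes afterwards.

```python
import os
import subprocess
import sys

# A process's environment is a copy taken when it starts. Start a child
# that sleeps briefly, then prints MY_VAR.
os.environ["MY_VAR"] = "original"
child = subprocess.Popen(
    [sys.executable, "-c",
     "import os, time; time.sleep(0.5); print(os.environ.get('MY_VAR'))"],
    stdout=subprocess.PIPE, text=True,
)

# Changing the parent's environment after the child has started has no
# effect on the child -- it already has its own copy.
os.environ["MY_VAR"] = "changed"
child_output, _ = child.communicate()
print(child_output.strip())  # the child still sees "original"
```

This is exactly why Kubernetes rolls pods instead of editing them: the only way to give the main container process a new environment is to start a new process.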

Radek 'Goblin' Pieczonka
  • @radek-goblin-pieczonka I'm having a similar issue. I would want to do it through my deployment, however I am using Kubernetes secrets to set my env. When the value of an env var changes, this does not change my deployment (the value is not stored in the deployment). So I cannot simply apply a new deployment, because K8s will not see any changes... – Christian Vermeulen Feb 07 '19 at 16:15
  • if you use a templating tool like Helm, you can store the checksum of the secret in the pod template annotations. That will cause a Deployment change on secret change (note that this only works if the change originates within Helm, not if the secret is changed outside it) – Radek 'Goblin' Pieczonka Aug 29 '23 at 07:52
19

This worked for me:

kubectl set env RESOURCE/NAME KEY_1=VAL_1 ... KEY_N=VAL_N

Check the official documentation here.

Another approach, for pods that are already running, is to get a shell inside the pod and set the variables at runtime. First run

kubectl exec -it <pod_name> -- /bin/bash

then, inside the container, run

export VAR1=VAL1 && export VAR2=VAL2 && your_cmd

Note that these exports only affect that shell session and the command you start from it, not the container's main process.

Ahmed Badawy
  • running pod env vars can not be modified with kubectl: "Forbidden: pod updates may not change fields other than..." – Zoltan Feb 10 '21 at 11:47
  • It works only for updating pod templates which are part of a deployment, replicaset, etc. (this is actually mentioned in the documentation link that you posted). It doesn't work for a running pod – victorm1710 Jul 22 '21 at 11:00
  • @victorm1710 Actually you can get into the Pod's command line with `kubectl exec -it <pod_name> -- /bin/bash` and run `export VAR1=VAL1 && export VAR2=VAL2 && your_cmd` – Ahmed Badawy Jul 27 '21 at 09:12
  • @AhmedBadawy, you're right, this approach will work for a running pod, however that's not an approach that you've suggested in your answer (using `kubectl set env ...`). – victorm1710 Jul 27 '21 at 09:54
  • 1
    @victorm1710 I will update my answer with the new approach. Thanks for mention – Ahmed Badawy Jul 27 '21 at 10:00
9

I'm not aware of any way to do this, and I can't think of a real-world scenario where it makes much sense.

Usually you have to restart a process for it to notice changed environment variables, and the easiest way to do that is to restart the pod.

The solution closest to what you seem to want is to create a Deployment and then use kubectl edit (kubectl edit deploy/name) to modify its environment variables. After you save, a new pod is started and the old one is terminated.

tback
  • a scenario where this makes sense is when a container may have been deployed with potentially incorrect configuration values (or they were correct at one point), or they are correct and not appearing to work for some reason. And you do have access to the container console but not to the deployment pipeline to redeploy it or change the deployment settings. And you have offered or been tasked to figure out what is wrong with the containers. This is where I was at today at about 4:30 PM. – StingyJack Oct 15 '21 at 03:43
  • In perfect world pod restarts are not a problem. But in Kubernetes life is not always this simple. Kubernetes often finds good and bad reasons to not be able to restart the pod. Or sometimes you are forced to use a deployment strategy that causes downtime. – Tosh Nov 25 '21 at 17:36
2

Kubernetes is designed so that any change to a pod should be rolled out through its configuration. If you go messing with pods that have already been deployed, you can end up with weird clusters that are hard to debug.

If you really want to you can run additional commands in your running pod using kubectl exec, but this is only recommended for debug purposes.

kubectl exec -it <pod_name> -- /bin/sh -c 'export VARIABLENAME=<thing> && <your_command>'

Note that the variable only exists within that exec session; it does not change the environment of the container's main process.
Lindsay Landry
1

If you are using Helm 3 or later, then according to the documentation:

Automatically Roll Deployments

Often times ConfigMaps or Secrets are injected as configuration files in containers or there are other external dependencies changes that require rolling pods. Depending on the application a restart may be required should those be updated with a subsequent helm upgrade, but if the deployment spec itself didn't change the application keeps running with the old configuration resulting in an inconsistent deployment.

The sha256sum function can be used to ensure a deployment's annotation section is updated if another file changes:

kind: Deployment 
spec: 
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} 
[...] 

In the event you always want to roll your deployment, you can use a similar annotation step as above, instead replacing with a random string so it always changes and causes the deployment to roll:

kind: Deployment 
spec:   
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }} 
[...] 

Both of these methods allow your Deployment to leverage the built in update strategy logic to avoid taking downtime.

NOTE: In the past we recommended using the --recreate-pods flag as another option. This flag has been marked as deprecated in Helm 3 in favor of the more declarative method above.
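The checksum annotation works because any change in the rendered ConfigMap text produces a different digest, which changes the pod template and therefore triggers a rollout. A quick Python illustration of the mechanism (hashlib standing in for Helm's sha256sum function; the config strings are made up):

```python
import hashlib

# What the sha256sum annotation achieves: any change to the rendered
# ConfigMap text yields a different digest, so the pod template's
# annotations change and the Deployment rolls.
def checksum(rendered_template: str) -> str:
    return hashlib.sha256(rendered_template.encode()).hexdigest()

old = checksum("LOG_LEVEL=info\n")
new = checksum("LOG_LEVEL=debug\n")
print(old != new)  # True: a changed annotation value forces a rollout
```

Conversely, if the rendered file is identical, the digest and the pod template are unchanged, and no rollout happens, which is exactly the "nothing changed, keep running" behavior you want.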

nikoskip
0

It is hard to change from the outside, but easy from the inside: the app running in the pod can change its own environment. Just expose an API to change the environment variable.
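As a sketch of that idea (the endpoint path and query parameters are my own invention, not from the answer): a tiny HTTP handler through which the app updates its own process environment.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Hypothetical endpoint: POST /env?key=K&value=V makes the app update its
# own environment. This works because the process is changing itself,
# not being changed from the outside.
class EnvHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        query = parse_qs(urlparse(self.path).query)
        key, value = query["key"][0], query["value"][0]
        os.environ[key] = value          # visible immediately to this process
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):        # keep request logging quiet
        pass

# To serve: HTTPServer(("0.0.0.0", 8080), EnvHandler).serve_forever()
```

The caveat is that only code which re-reads os.environ (or the equivalent in your language) sees the new value; anything cached at startup is unaffected, which loops back to the first answer's point.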

Newton Zou
0

You can use a ConfigMap mounted as a volume to update configuration values on the fly (the mounted files are updated in place; the environment variables themselves are not).

Refer: https://itnext.io/how-to-automatically-update-your-kubernetes-app-configuration-d750e0ca79ab
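A minimal sketch of the pattern the article describes (the file path, key=value format, and polling interval are assumptions): read configuration from the mounted file instead of from env vars, and reload whenever the file's mtime changes, as it does when Kubernetes refreshes a mounted ConfigMap.

```python
import os
import time

# Path where a ConfigMap volume would be mounted (hypothetical).
CONFIG_PATH = "/etc/config/app.properties"

def load_config(path):
    """Parse simple KEY=VALUE lines into a dict."""
    with open(path) as f:
        return dict(line.strip().split("=", 1) for line in f if "=" in line)

def watch(path, on_change, poll_seconds=2.0):
    """Poll the file's mtime and invoke on_change with fresh config."""
    last = os.stat(path).st_mtime
    while True:
        time.sleep(poll_seconds)
        mtime = os.stat(path).st_mtime
        if mtime != last:
            last = mtime
            on_change(load_config(path))
```

Note that subPath mounts do not receive ConfigMap updates, so this pattern requires mounting the whole volume.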

Ganesh Shinde