
I have a deployment with a single pod, using my custom docker image, like:

containers:
  - name: mycontainer
    image: myimage:latest

During development I want to push a new latest version and have the Deployment updated. I can't find how to do that without explicitly defining a tag/version, incrementing it for each build, and running

kubectl set image deployment/my-deployment mycontainer=myimage:1.9.1
Andriy Kopachevskyy

8 Answers


You can configure your pod with a grace period (for example 30 seconds or more, depending on container startup time and image size), set imagePullPolicy: "Always", and then use kubectl delete pod pod_name. A new container will be created with the latest image pulled automatically, and the old container will be terminated.

Example:

spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: my-container
    image: my-image:latest
    imagePullPolicy: "Always"

I'm currently using Jenkins for automated builds and image tagging and it looks something like this:

kubectl --user="kube-user" --server="https://kubemaster.example.com"  --token=$ACCESS_TOKEN set image deployment/my-deployment mycontainer=myimage:"$BUILD_NUMBER-$SHORT_GIT_COMMIT"
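For reference, the $BUILD_NUMBER-$SHORT_GIT_COMMIT tag scheme above can be reproduced outside Jenkins. This is just a sketch: Jenkins normally injects BUILD_NUMBER itself, so it is faked here for a local run, and the fallback commit string is made up.

```shell
# BUILD_NUMBER is normally provided by Jenkins; fake it for a local run.
BUILD_NUMBER=${BUILD_NUMBER:-42}
# Use the real short commit when run inside a git repo, else a placeholder.
SHORT_GIT_COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo "abc1234")
TAG="myimage:$BUILD_NUMBER-$SHORT_GIT_COMMIT"
echo "$TAG"
```

Every build then produces a distinct tag, so `kubectl set image` always sees a change and performs a rolling update.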

Another trick is to initially run:

kubectl set image deployment/my-deployment mycontainer=myimage:latest

and then:

kubectl set image deployment/my-deployment mycontainer=myimage

This will actually trigger the rolling update, but be sure you also have imagePullPolicy: "Always" set.

Update:

Another trick I found, where you don't have to change the image name, is to change the value of a field that will trigger a rolling update, like terminationGracePeriodSeconds. You can do this using kubectl edit deployment your_deployment, kubectl apply -f your_deployment.yaml, or a patch like this:

kubectl patch deployment your_deployment -p \
  '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'

Just make sure you always change the number value.
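A small variation on this patch trick (my sketch, not from the answer above): instead of bumping the number by hand, put the current Unix time in a pod-template annotation, so the value always changes on its own. The deployment name and the annotation key redeploy-at are placeholders.

```shell
# The patch value is the current Unix time, so it differs on every run
# and always triggers a rollout; "your_deployment" and the "redeploy-at"
# annotation key are made-up placeholders.
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-at\":\"$(date +%s)\"}}}}}"
echo "$PATCH"
# kubectl patch deployment your_deployment -p "$PATCH"
```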

Camil
  • Actually your trick is not bad, considering myimage:latest and myimage are basically the same thing, thanks! – Andriy Kopachevskyy Nov 02 '16 at 11:26
  • This trick seems more like a bug, not sure why we need to specify it twice. – speedplane Jun 29 '17 at 20:19
  • If you want a kubernetes deployment to start a new pod using the same image (and this trick only works with the "latest" tag) you have to specify it without a tag. Next time add the "latest" tag and it will trigger the update. The order could be reversed, it doesn't matter. You never use the "latest" tag in production, but for development purposes you can benefit from it sometimes. – Camil Jun 29 '17 at 20:35
  • What does the command under "second time" refer to? When should it be run? – Chris Stryczynski Aug 07 '17 at 12:52
  • This gives me an error of "error validating data: found invalid field imagePullPolicy for v1.PodSpec;" – Chris Stryczynski Aug 07 '17 at 13:11
  • @ChrisStryczynski The "latest" tag is assumed in case it's not specified. If you deploy repo/image and want to do a rolling update with the same repo/image, it will not work because it has the same tag, but kubernetes will do the update if you add "latest". If you want to update again, you have to use repo/image without the tag, and so on... – Camil Aug 07 '17 at 14:39
  • @ChrisStryczynski You get the error above when you execute this command `kubectl set image deployment/my-deployment mycontainer=myimage`? Or you pass another argument to the command? – Camil Aug 07 '17 at 14:44
  • I get the error when doing a `kubectl apply -f example.yaml`. I think the 'imagePullPolicy' should be indented by one level. – Chris Stryczynski Aug 07 '17 at 14:49
  • Can anybody explain what the last command actually does? Will it work for images not tagged with `:latest`? – Chris Stryczynski Aug 09 '17 at 17:36
  • It only works for latest. By default, at least on Docker Hub, not tagging an image will assume the "latest" tag. But it will also work without it. This example is not something you'd want in a production environment, and there are not many use cases where you can benefit from it in development either. There are better methods to update an image automatically, using a CI/CD tool. – Camil Aug 09 '17 at 18:38
  • Every time you change the tag and run `kubectl set image` command, kubernetes will perform a rolling update. For example, let's say you deployed "repo/myimage:latest". Meanwhile your image was changed and pushed to the repo with the "v0.2" tag. You can perform an update by running `kubectl set image deployment/my-deployment mycontainer=myimage:v0.2` This image will also have the "latest" tag. – Camil Aug 09 '17 at 18:47
  • Because many times you are not aware of the tag, which can include multiple pieces of information in it, like the git commit, it's easier to use "kubectl set image deployment/my-deployment mycontainer=myimage" and it will do the update – Camil Aug 09 '17 at 18:47
  • Here's a bash function for the patch deployment method: https://gist.github.com/jmound/ff6fa539385d1a057c82fa9fa739492e – thisjustin Mar 22 '18 at 14:05
  • When you say "configure the pod", does that mean updating the deployment or service file with `"imagePullPolicy: "Always"`? – Sean Pianka Sep 19 '18 at 19:44
  • I noticed the last trick with the date leaves behind a bunch of empty Replica Sets. Is this ok? – atkayla Oct 15 '18 at 19:16
  • A random environment variable is a better option than random terminationGracePeriodSeconds! – Tummala Dhanvi May 13 '20 at 19:00
  • In my case I discovered that the problem was the cached build steps of my image. So I needed to run `docker build --no-cache` to recreate the image. – axell-brendow Jan 11 '21 at 04:09
  • You cannot have good traceability of the images you used, and it's not that easy to roll back updates if you always use the same image names. – Raúl Salinas-Monteagudo Jan 27 '23 at 10:44

UPDATE 2019-06-24

Based on @Jodiug's comment, if you are on version 1.15 or newer you can use the command:

kubectl rollout restart deployment/demo

Read more on the issue:

https://github.com/kubernetes/kubernetes/issues/13488


Well there is an interesting discussion about this subject on the kubernetes GitHub project. See the issue: https://github.com/kubernetes/kubernetes/issues/33664

From the solutions described there, I would suggest one of two.

First

1. Prepare the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: registry.example.com/apps/demo:master
        imagePullPolicy: Always
        env:
        - name: FOR_GODS_SAKE_PLEASE_REDEPLOY
          value: 'THIS_STRING_IS_REPLACED_DURING_BUILD'

2. Deploy:

sed -i -e "s/THIS_STRING_IS_REPLACED_DURING_BUILD/$(date)/g" deployment.yml
kubectl apply -f deployment.yml

Second (one liner):

kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"

Of course, imagePullPolicy: Always is required in both cases.

Vitaliy Ulantikov
Przemek Nowak
  • Found another related trick. If you just do "kubectl rollout restart deployment" without specifying any specific deployment name, it will do "all" of them. – Mr. Developerdude Dec 21 '19 at 23:12
  • In my case I discovered that the problem was the cached build steps of my image. So I needed to run `docker build --no-cache` to recreate the image. – axell-brendow Jan 11 '21 at 04:10
  • All the quote escaping with `\"` makes it look ugly. – Wyck Jan 16 '21 at 18:33
  • What works for us is even more straight forward, based on Przemek's answer, add an environment variable to the deployment referencing a variable that holds the git commit SHA, with imagePullPolicy this leads to re-pulling of the image on every deployment. – Niko S P Apr 02 '21 at 18:54
  • In my case (on GitLab CI) making a sed adding the commit sha is the best solution for me: `sed -i -e "s/CI_COMMIT_SHA/$CI_COMMIT_SHA/g" deployment.yml` See: https://github.com/kubernetes/kubernetes/issues/33664#issuecomment-786738863 – Paolo Falomo Oct 15 '22 at 13:58
  • The rollout restart may not work in case the quota is limited and creation of additional pod in not possible. In this case manual deletion of the current pod might be required. – minus one Feb 09 '23 at 17:40
kubectl rollout restart deployment myapp

This is the current way to trigger a rolling update. It leaves the old replica sets in place for other operations provided by kubectl rollout, like rollbacks.

Martin Peter

I use GitLab CI to build the image and then deploy it directly to GKE. I use a neat little trick to achieve a rolling update without changing any real settings of the container, which is changing a label to the current commit short SHA.

My command looks like this:

kubectl patch deployment my-deployment -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"build\":\"$CI_COMMIT_SHORT_SHA\"}}}}}"

You can use any name and any value for the label as long as it changes with each build.
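To sanity-check the patch string locally before handing it to kubectl (my sketch; CI_COMMIT_SHORT_SHA is a predefined GitLab CI variable, faked here so the string can be inspected without a pipeline or a cluster):

```shell
# Fake the GitLab CI variable for a local run; in a real pipeline it is
# injected automatically.
CI_COMMIT_SHORT_SHA=${CI_COMMIT_SHORT_SHA:-deadbee}
# Build the same JSON patch the kubectl command uses and print it.
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"build\":\"$CI_COMMIT_SHORT_SHA\"}}}}}"
echo "$PATCH"
# kubectl patch deployment my-deployment -p "$PATCH"
```

Printing the patch first makes it easy to spot unbalanced braces or quoting mistakes in the escaped JSON.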

Have fun!

David Faber

We could update it using the following command:

kubectl set image deployment/<<deployment-name>> -n=<<namespace>> <<container_name>>=<<your_dockerhub_username>>/<<image_name you want to set now>>:<<tag_of_the_image_you_want>>

For example,

kubectl set image deployment/my-deployment -n=sample-namespace my-container=alex/my-sample-image-from-dockerhub:1.1

where:

  • kubectl set image deployment/my-deployment - we want to set the image of the deployment named my-deployment
  • -n=sample-namespace - this deployment belongs to the namespace named as sample-namespace. If your deployment belongs to the default namespace, no need to mention this part in your command.
  • my-container is the container name which was previously mentioned in the YAML file of your deployment configuration.
  • alex/my-sample-image-from-dockerhub:1.1 is the new image which you want to set for the deployment and run the container from. Here, alex is the Docker Hub username (if applicable), and my-sample-image-from-dockerhub:1.1 is the image and tag you want to use.
vagdevi k

It seems that k8s expects us to provide a different image tag for every deployment. My default strategy would be to make the CI system generate and push the docker images, tagging them with the build number: xpmatteo/foobar:456.

For local development it can be convenient to use a script or a makefile, like this:

# create a unique tag    
VERSION:=$(shell date +%Y%m%d%H%M%S)
TAG=xpmatteo/foobar:$(VERSION)

deploy:
    npm run-script build
    docker build -t $(TAG) . 
    docker push $(TAG)
    sed s%IMAGE_TAG_PLACEHOLDER%$(TAG)% foobar-deployment.yaml | kubectl apply -f - --record

The sed command replaces a placeholder in the deployment document with the actual generated image tag.
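The substitution step can be tried locally without a cluster; this sketch feeds a one-line stand-in for the deployment file through the same sed expression (the placeholder name and tag scheme are taken from the Makefile above):

```shell
# Generate a unique tag the same way the Makefile does.
TAG="xpmatteo/foobar:$(date +%Y%m%d%H%M%S)"
# Substitute the placeholder in a minimal stand-in for the deployment YAML;
# '%' is used as the sed delimiter because the tag contains '/'.
echo "image: IMAGE_TAG_PLACEHOLDER" | sed "s%IMAGE_TAG_PLACEHOLDER%$TAG%"
```

Using `%` as the delimiter avoids having to escape the `/` in the image name.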

xpmatteo
  • kubernetes does not require you to update the deployment with a new tag in order to pull the most recent version of any image, "latest" being the most common example. – Dave White May 29 '20 at 20:08

Another option, which is more suitable for debugging but worth mentioning, is to check the revision history of your rollout:

$ kubectl rollout history deployment my-dep
deployment.apps/my-dep
 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>

To see the details of each revision, run:

 kubectl rollout history deployment my-dep --revision=2

You can then return to a previous revision by running:

 $ kubectl rollout undo deployment my-dep --to-revision=2

And then you can return back to the new one.
Like pressing Ctrl+Z -> Ctrl+Y (:

(*) The CHANGE-CAUSE is <none> because the updates were not run with the --record flag; to populate it, run the updates like this:

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record

(**) There is a discussion regarding deprecating this flag.

INDIAN2025
Rot-man

I am using Azure DevOps to deploy containerized applications, and I easily managed to overcome this problem by using the build ID.

Every time it builds, a new build ID is generated, and I use this build ID as the tag for the Docker image, for example:

imagename:buildID

Once your image is built successfully (CI), in the CD pipeline's deployment YAML file I give the image name as:

imagename:env:buildID

Here env:buildID is the Azure DevOps variable which holds the value of the build ID.

So now every time I have new changes, I build (CI) and deploy (CD).

Please comment if you need the build definition for CI/CD.

  • The manifest is part of the repo. I don't understand what are the best practices for this. If I build the image in the pipeline, should I push to master the updated manifest? or should I produce an updated manifest to the artifacts (and thus the manifest in the repo would be just a template without the actual tagged image)? – pablete Feb 17 '20 at 15:46