322

I have the following replication controller in Kubernetes on GKE:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    app: myapp
    deployment: initial
  template:
    metadata:
      labels:
        app: myapp
        deployment: initial
    spec:
      containers:
      - name: myapp
        image: myregistry.com/myapp:5c3dda6b
        ports:
        - containerPort: 80
      imagePullPolicy: Always
      imagePullSecrets:
        - name: myregistry.com-registry-key

Now, if I say

kubectl rolling-update myapp --image=us.gcr.io/project-107012/myapp:5c3dda6b

the rolling update is performed, but no re-pull of the image takes place. Why?

Torsten Bronger
  • 9,899
  • 7
  • 34
  • 41
  • You should use a different image when updating. – xdays Oct 14 '15 at 06:42
  • 28
    I gave a different image, just with the same tag. If it is necessary to give a different tag, well, I see no point in the `imagePullPolicy` field. – Torsten Bronger Oct 14 '15 at 07:00
  • Out of interest, why would you want to do this? The only reason I can think of is using `latest` but if you use `latest`, it always pulls anyway. – Muhammad Rehan Saeed Jul 03 '18 at 15:23
  • 7
    I want to use a specific tag, but its newest version. – Torsten Bronger Jul 03 '18 at 22:26
  • 6
    @TorstenBronger I think this is a breaking change in Kubernetes/Docker theory. The idea that you could pull image:tag (other than latest) at two different times and get two different images would be problematic. A tag is akin to a version number. It would be better practice to always change the tag when the image changes. – duct_tape_coder Mar 12 '19 at 18:41
  • 5
    It depends. There is software with a very stable API that still gets security updates. Then, I want the latest version without having to say so explicitly. – Torsten Bronger Mar 13 '19 at 06:40
  • I am running into this issue now. The reason I want to have the same tag is to make a distinction between my staging and production environments without creating separate projects. And I'm making sure that `cloudbuild.yaml` gets the branch name to create the image version. Is that bad practice? – Martavis P. Jun 18 '19 at 07:04
  • 5
    @TorstenBronger Regarding using `latest`: don't do it. Latest will pull the, well, most recent image with the latest tag. What you want is a SemVer range, ~1.2.3 for example; this will pull images with tags in the range >= 1.2.3 and < 1.3.0. As long as the image vendor follows [SemVer](https://semver.org/) you know (and this is the important part) that no backwards-breaking changes were added (on purpose) and that no new features were added (a possible security concern). Please, please never use `latest` in production systems. – David J Eddy Jul 12 '19 at 12:30
  • 3
    The question of if and when to use `latest` is a different story. There are circumstances where it makes sense. – Torsten Bronger Jul 24 '19 at 19:46
  • You could alternatively delete the deployment with the kubectl delete command and then reapply it, if this is a development-time activity – Ashwin Prabhu Jan 16 '20 at 09:36
  • @TorstenBronger please mark question as answered if you are clear on answer. – GintsGints Feb 06 '21 at 06:03
  • But this question has been marked as answered for a long time already. – Torsten Bronger Feb 06 '21 at 16:44
  • I wrote a script:
    ```
    #!/bin/bash
    kubectl patch deployment $1 -p '{"spec": {"template": {"spec":{"containers":[{"name": "'$1'", "imagePullPolicy":"Always"}]}}}}'
    sleep 30
    kubectl rollout restart deployment $1
    sleep 120
    kubectl patch deployment $1 -p '{"spec": {"template": {"spec":{"containers":[{"name": "'$1'", "imagePullPolicy":"IfNotPresent"}]}}}}'
    ```
    – smith64fx Jun 27 '22 at 23:04
  • https://gist.github.com/smyth64/8a32bb02a7354220234425e5a03dcffa I wrote a simple bash script, check it out :) – smith64fx Jun 27 '22 at 23:14

19 Answers

271

Kubernetes will pull upon Pod creation if either of the following holds (see the updating-images doc):

  • Using images tagged :latest
  • imagePullPolicy: Always is specified

This is great if you want to always pull. But what if you want to pull on demand: for example, if you want to use some-public-image:latest but only want to pull a newer version manually when you ask for it? You can currently:

  • Set imagePullPolicy to IfNotPresent or Never and pre-pull: manually pull the images on each cluster node so the latest is cached, then do a kubectl rolling-update or similar to restart the Pods (an ugly, easily broken hack!)
  • Temporarily change imagePullPolicy, do a kubectl apply, restart the pod (e.g. kubectl rolling-update), revert imagePullPolicy, redo a kubectl apply (ugly! see the sketch after this list)
  • Pull and push some-public-image:latest to your private repository and do a kubectl rolling-update (heavy!)
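A minimal sketch of the second workaround, assuming a Deployment named myapp whose first container is the one to refresh (note that patching the pod template already triggers a rolling update by itself):

kubectl patch deployment myapp --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Always"}]'
# the template change above rolls the pods and re-pulls the image;
# once the new pods are up, revert the policy (this rolls them once more)
kubectl patch deployment myapp --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'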

No good solution for on-demand pull. If that changes, please comment; I'll update this answer.

mmoya
  • 1,901
  • 1
  • 21
  • 30
Wernight
  • 36,122
  • 25
  • 118
  • 131
  • 4
    You say kubernetes will pull on Pod creation when using `:latest` - what about `patch`ing? does it also always pull the newest/latest image? Seems not to work for me :( – Philipp Kyeck Oct 26 '16 at 11:57
  • It depends on whether your patch forces the re-creation of a Pod or not. Most likely it does not, in which case it will not pull again. You may kill the Pod manually, or tag with something unique and patch with that updated tag. – Wernight Nov 14 '16 at 14:19
  • 4
    This is an answer to a different question. I asked for *forcing* a re-pull. – Torsten Bronger Dec 02 '16 at 07:11
  • This allowed me to force a new pull from GCR. I had a `:latest` tag which pointed at a new image, and the `kubectl rolling-update` worked to update the pods. – Randy L Mar 30 '17 at 17:48
  • 2
    Thanks. Went for the Pull & Push approach. Automated as much of it as possible with bash scripts but agreed, it's heavy :) – arcseldon Mar 31 '20 at 08:03
  • Setting both of those options is not working; I experimented with it. Kubernetes never pulls the new image, although the logs show that it is pulling the image. – Ijaz Ahmad May 08 '20 at 08:30
  • 2
    How about having for each environment a label like "prod", "stage", "test", leave the imagePullPolicy to "always" and push the label, whenever you want to deploy, to the image that shall be deployed? – Mathias Mamsch Jan 17 '22 at 08:59
149

One has to put imagePullPolicy inside the container definition instead of at the pod spec level. However, I filed an issue about this because I find it odd. Besides, there is no error message.

So, this spec snippet works:

spec:
  containers:
  - name: myapp
    image: myregistry.com/myapp:5c3dda6b
    ports:
    - containerPort: 80
    imagePullPolicy: Always
  imagePullSecrets:
    - name: myregistry.com-registry-key
Torsten Bronger
  • 9,899
  • 7
  • 34
  • 41
  • 13
    `imagePullPolicy` (or tagging `:latest`) is good if you want to always pull, but doesn't solve the question of pulling on demand. – Wernight Mar 11 '16 at 13:13
  • 4
    Yes, I want to *always* pull, as stated in the question. – Torsten Bronger Dec 02 '16 at 07:12
  • 3
    Using `imagePullPolicy: Always` inside the container definition will have `kubernetes` fetch images tagged with `:latest` whenever a newer version of them is pushed to the registry? – pkaramol Jan 15 '18 at 13:25
  • 3
    @pkaramol No. `imagePullPolicy: Always` simply tells Kubernetes to always pull the image from the registry. Which image it will pull is configured by the `image` attribute. If you configure it as `image: your-image:latest`, then it will always pull the `your-image` image with the `latest` tag. – Gajus Dec 17 '18 at 06:52
  • I just had the same issue here with a cronjob. The "latest" tag was ignored, and only setting the job spec to the always pull policy made k8s reload the image for the next execution (= container creation). Something seems to be different between these two options, despite all the documentation treating them as equal. – Roman Gruber Oct 30 '20 at 18:14
  • @RomanGruber so I have a similar issue for a cronjob, the POD (in completed status) apprently didn't take the last DOCKER image, will it take when the cronjob executes again? or do i need to recreate again? imagePullPolicy: Always – viruskimera Mar 05 '21 at 04:57
  • @viruskimera - I seem to not be notified about all comments... Anyhow, it worked on my end; when I set the policy to "always", it did pull the image again upon the next execution. – Roman Gruber Jun 12 '21 at 21:58
  • This is part of the solution. After this you need to trigger `kubectl rollout restart deploy ` – Melroy van den Berg Oct 25 '21 at 13:50
113

There is a command to do that directly:

Create a new kubectl rollout restart command that does a rolling restart of a deployment.

The pull request got merged. It is part of version 1.15 (changelog) and higher.
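A minimal usage sketch (the Deployment name myapp is assumed; its containers should use imagePullPolicy: Always or a :latest tag, otherwise the restart will not re-pull):

kubectl rollout restart deployment/myapp
kubectl rollout status deployment/myapp   # wait until the restarted pods are ready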

S.Spieker
  • 7,005
  • 8
  • 44
  • 50
45

My hack during development is to change my Deployment manifest to add the latest tag and always pull, like so:

image: etoews/my-image:latest
imagePullPolicy: Always

Then I delete the pod manually

kubectl delete pod my-app-3498980157-2zxhd

Because it's a Deployment, Kubernetes will automatically recreate the pod and pull the latest image.
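If you'd rather not look up the generated pod name, deleting by label works too; a small sketch, assuming the Deployment's pods carry the label app=my-app:

kubectl delete pod -l app=my-app

The Deployment then recreates them just as above.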

Everett Toews
  • 10,337
  • 10
  • 44
  • 45
  • I like taking advantage of the "desired state" premises of the "deployment" object... thanks for the suggestion! – Marcello DeSales Apr 10 '18 at 23:08
  • 9
    It's worth noting that strategy is viable only if failures in the service and downtime are tolerable. For development it seems reasonable, but I would never carry this strategy over for a production deploy. – digitaldreamer Jun 28 '18 at 14:52
  • Editing the deployment to change the imagePullPolicy to Always and deleting the pod was enough for me, as Everett suggested. This is a development environment though. https://kubernetes.io/docs/concepts/containers/images/ – Jos Roberto Almaraz Feb 05 '19 at 00:19
  • The "Always" imagePullPolicy is the default for tags named "latest" or no tag. Therefore you don't need to specify it in this example – hookenz Nov 02 '22 at 02:25
33

A popular workaround is to patch the deployment with a dummy annotation (or label):

kubectl patch deployment <name> -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"

Assuming your deployment meets these requirements, this will cause K8s to pull any new image and redeploy.

Tamlyn
  • 22,122
  • 12
  • 111
  • 127
  • 3
    Yes, I use an annotation for this. – Torsten Bronger Mar 21 '19 at 10:15
  • what annotation? – Jeryl Cook Apr 05 '19 at 15:18
  • 3
    Another sophisticated solution would be a combination of both ie. adding an annotation and setting `ImagePullPolicy` as *Always*. annotations like `deployment.kubernetes.io/revision: "v-someversion"` and `kubernetes.io/change-cause: the reason` can be quite helpful and heads towards immutable deployments. – chandan May 22 '19 at 18:14
18

Now, the command kubectl rollout restart deploy YOUR-DEPLOYMENT combined with an imagePullPolicy: Always policy will allow you to restart all your pods with the latest version of your image.
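As a quick sanity check that the image really was pulled again (the deployment and label names below are assumed), the events of the freshly created pods should show "Pulling image ..." rather than "Container image ... already present on machine":

kubectl rollout restart deploy/myapp
kubectl describe pods -l app=myapp | grep -i -A 1 pull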

Orabîg
  • 11,718
  • 6
  • 38
  • 58
17
  1. Specify the strategy as:

     strategy:
       type: Recreate
       rollingUpdate: null

  2. Make sure you have a different annotation for each deployment. Helm does it like:

     template:
       metadata:
         labels:
           app.kubernetes.io/name: AppName
           app.kubernetes.io/instance: ReleaseName
         annotations:
           rollme: {{ randAlphaNum 5 | quote }}

  3. Specify the image pull policy as Always:

     containers:
       - name: {{ .Chart.Name }}
         image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
         imagePullPolicy: Always
GintsGints
  • 807
  • 7
  • 15
10
# Linux

kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"

# Windows

kubectl patch deployment <name> -p (-join("{\""spec\"":{\""template\"":{\""metadata\"":{\""annotations\"":{\""date\"":\""" , $(Get-Date -Format o).replace(':','-').replace('+','_') , "\""}}}}}"))
Bar Nuri
  • 762
  • 1
  • 9
  • 15
10

This answer aims to force an image pull in a situation where your node has already downloaded an image with the same name: even though you push a new image to the container registry, when you spin up some pods, your pod says "image already present".

For a case in Azure Container Registry (AWS and GCP probably provide this as well):

  1. Look at your Azure Container Registry and, by checking the manifest creation date, identify which image is the most recent one.

  2. Then copy its digest hash (which has the format sha256:xxx...xxx).

  3. Scale down your current replicas by running the command below. Note that this will obviously stop your container and cause downtime.

     kubectl scale --replicas=0 deployment <deployment-name> -n <namespace-name>

  4. Then get a copy of the deployment.yaml by running:

     kubectl get deployments.apps <deployment-name> -o yaml > deployment.yaml

  5. Change the line with the image field from <image-name>:<tag> to <image-name>@sha256:xxx...xxx and save the file.

  6. Now scale up your replicas again. The new image will be pulled via its unique digest.

Note: it is assumed that the imagePullPolicy: Always field is present in the container spec.
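To illustrate step 5, the image line in deployment.yaml changes roughly like this (registry and image name are placeholders; use the digest copied in step 2):

# before: a mutable tag, so a node that already holds an image for this tag may not re-pull
image: myregistry.azurecr.io/myapp:staging
# after: an immutable digest, so the kubelet fetches exactly this build
image: myregistry.azurecr.io/myapp@sha256:<digest-from-step-2>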

Cenk Cidecio
  • 101
  • 1
  • 6
10

Having gone through all the other answers and not being satisfied, I found a much better solution here: https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps

It works without using the latest tag or imagePullPolicy: Always. It also works if you push a new image to the same tag, by specifying the image's sha256 digest.

Steps:

  1. Get the image's SHA256 digest from Docker Hub (see the image below)
  2. Find your deployment using kubectl get deployments
  3. kubectl set image deployment/<your-deployment> <your_container_name>=<some/image>@sha256:<your sha>
  4. kubectl scale deployment <your-deployment> --replicas=0
  5. kubectl scale deployment <your-deployment> --replicas=<original replica count>

Note: a rollout might also work instead of scaling, but in my case we don't have enough hardware resources to create another instance, and k8s gets stuck.

[screenshot: sha256 digest location on Docker Hub]

Anand Rockzz
  • 6,072
  • 5
  • 64
  • 71
ioudas
  • 174
  • 1
  • 6
9

Apparently, when you run a rolling-update with an --image argument that is the same as the existing container image, you must also specify an --image-pull-policy. The following command should force a pull of the image when it is the same as the container image:

kubectl rolling-update myapp --image=us.gcr.io/project-107012/myapp:5c3dda6b --image-pull-policy Always

sjking
  • 845
  • 1
  • 10
  • 11
  • 2
    Since Kubernetes 1.18 this feature is removed, as stated here: https://v1-18.docs.kubernetes.io/docs/setup/release/notes/#kubectl – S.Spieker Nov 03 '20 at 08:07
7

The rolling update command, when given an image argument, assumes that the image is different from what currently exists in the replication controller.

Robert Bailey
  • 17,866
  • 3
  • 50
  • 58
  • Does this mean the image tag (aka name) must be different? – Torsten Bronger Oct 14 '15 at 07:01
  • Yes, the image name must be different if you pass the `--image` flag. – Robert Bailey Oct 14 '15 at 20:49
  • 2
    As my own answer says, it works also if the image name is the same. It was simply that the imagePullPolicy was in the wrong place. To my defence, the k8s 1.0 docs are erroneous in this aspect. – Torsten Bronger Oct 14 '15 at 21:04
  • Gotta love when the docs are out of sync with the behavior. :/ – Robert Bailey Oct 14 '15 at 23:44
  • The URL is outdated, use this one -> https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/cmd/rollingupdate/rollingupdate.go (Not sure which line, though) – Urosh T. Apr 24 '19 at 14:32
  • 2
    That url is outdated too. – Dan Tenenbaum Jan 04 '20 at 17:32
  • kubectl has been moved into the "staging" part of the kubernetes repository (in preparation for moving to a separate repo in the future). The current link to the file is https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/cmd/rollingupdate/rollingupdate.go – Robert Bailey Jan 06 '20 at 08:12
7

You can define imagePullPolicy: Always in your deployment file.
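For instance, a minimal sketch reusing the image from the question (the policy belongs on each container in the pod template, not at the pod spec level):

spec:
  template:
    spec:
      containers:
        - name: myapp
          image: myregistry.com/myapp:5c3dda6b
          imagePullPolicy: Always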

Sachin Arote
  • 907
  • 13
  • 22
5

I have used kubectl rollout restart for my Spring Boot API and it works.

kubectl rollout restart -f pod-staging.yml --namespace test

YAML for the Deployment:

apiVersion: "apps/v1"
kind: "Deployment"
metadata:
    name: "my-api"
    labels:
      app: "my-api"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "my-api"
  template:
    metadata:
      labels:
        app: "my-api"
    spec:
      containers:
        - name: my-api
          image: harbor.url.com/mycompany/my-api:staging
          ports:
            - containerPort: 8099
              protocol: TCP
          imagePullPolicy: Always
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8099
            initialDelaySeconds: 90
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8099
            initialDelaySeconds: 90
            periodSeconds: 5
          envFrom:
            - configMapRef:
                name: "my-api-configmap"
          env:
            - name: "TOKEN_VALUE"
              valueFrom:
                secretKeyRef:
                  name: "my-api-secret"
                  key: "TOKEN_VALUE"
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "2048Mi"
              cpu: "1000m"
      imagePullSecrets:
        - name: "my-ci-user"
Wagner Büttner
  • 3,799
  • 1
  • 13
  • 12
3

The image pull policy Always will pull the image every single time a new pod is created (this can happen in any case, like scaling the replicas, or a pod dying and a new pod being created).

But if you want to update the image of a currently running pod, a Deployment is the best way. It gives you a flawless update without any problem (mainly when you have a persistent volume attached to the pod) :)

Ardent Coder
  • 3,777
  • 9
  • 27
  • 53
1

The following solved my problem:

kubectl rollout restart deployment/<deployment-name>
Abd Abughazaleh
  • 4,615
  • 3
  • 44
  • 53
0

If you want to perform a direct image update on a specific pod, you can also use kubectl set image.

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
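A small sketch of the command (deployment, container and tag are placeholders); note that kubectl set image only triggers a rollout when the image string actually changes, so pair it with a new tag or a digest:

kubectl set image deployment/myapp myapp=myregistry.com/myapp:<new-tag>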

CodeTherapy
  • 391
  • 3
  • 13
0

Either delete all the pods manually so that they are recreated and the image is pulled again,

or

run the command below, e.g. kubectl rollout restart deployment/nginx:

kubectl rollout restart deployment/<deployment_name>

This command should recreate all the pods.

For both scenarios, imagePullPolicy should be set to Always.

0

A one-liner solution based on invalidating the Deployment's pod-template hash by adding some new unique data; here, a timestamp-based environment variable (just like adding a "volatile" ENV to bust the Docker cache during image builds):

kubectl set env deployment/nginx REDEPLOY_TIME="$(date)"

or when using oc Client Tools under OCP/OKD:

oc set env dc/nginx REDEPLOY_TIME="$(date)"

It will trigger an automatic rolling re-deployment/re-pull even in older installations of k8s (not just in v1.15 or above, where kubectl rollout restart is the correct solution, as described in this answer). In fact, I verified this workaround even in the archaic OpenShift 3.11, based on k8s 1.11 from mid-2018!

Note we need the usual prerequisites of imagePullPolicy: Always and a "rolling" container image tag such as latest.

Note: kudos and the original idea (using a YAML Deployment manifest file and sed) go to this comment in the rather long-running k8s issue devoted to this (now thankfully gone) opinionated choice made initially by the k8s devs.

mirekphd
  • 4,799
  • 3
  • 38
  • 59