
The Kubernetes Deployment kind doesn't allow changes to spec.selector.matchLabels after creation, so any new deployment (managed by Helm or otherwise) that wants to change those labels can't use the RollingUpdate strategy within a Deployment. What's the best way to roll out a new deployment without causing downtime?

Minimum example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: ubuntu:latest
          command: ["/bin/bash", "-ec", "sleep infinity"]

Apply this, then edit the labels (both matchLabels and metadata.labels) to foo2. If you try to apply this new deployment, k8s will complain (by design):

The Deployment "foo" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"foo2"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
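For reference, reproducing this is just two applies (assuming the manifest above is saved as foo.yaml; the filename is illustrative):

kubectl apply -f foo.yaml
# change app: foo to app: foo2 in both spec.selector.matchLabels
# and spec.template.metadata.labels, then:
kubectl apply -f foo.yaml
# -> The Deployment "foo" is invalid: spec.selector: ... field is immutable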

The only way I can think of right now is to use a new Deployment name, so the new deployment doesn't try to patch the old one, and then delete the old one, with the ingress/load balancer resources handling the transition. Then we can redeploy with the old name and delete the new name, completing the migration (sketched below).
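As a sketch, that sequence would look something like this (the manifest file names and the temporary name foo2 are illustrative):

kubectl apply -f foo2.yaml      # new Deployment "foo2" with the new labels
kubectl delete deployment foo   # old pods drain once foo2 is serving traffic
kubectl apply -f foo.yaml       # recreate "foo" with the new labels
kubectl delete deployment foo2  # remove the temporary Deployment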

Is there a way to do it with fewer k8s CLI steps? Perhaps I can edit/delete something that keeps the old pods alive while the new pods roll out under the same name?

snugghash
  • If it is managed by Helm, then it's better to deploy with Helm again and set the new values you need. How many instances are running? – Manuel Mar 18 '21 at 19:52
  • The number of pods varies, but there's only one Helm release. Regardless of deploying with Helm (3), kinds with matchLabels (Jobs, Deployments) cannot be updated and need to be recreated (delete + create). I've added a minimum example to illustrate the issue. – snugghash Mar 19 '21 at 04:19

2 Answers


I just did this, and I followed the four-step process you describe. I think the answer is no, there is no better way.

My service was managed by Helm. For that, I literally created four merge requests that had to be rolled out sequentially (a sketch of the temporary deployment follows the list):

  1. Add an identical deployment "foo-temp"; only the name is different.
  2. Delete deployment "foo".
  3. Recreate deployment "foo" with the desired label selector.
  4. Delete deployment "foo-temp".
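Using the question's manifest, step 1 is just a copy of foo under a new name (a sketch; everything except metadata.name stays the same, so all label-based routing keeps matching):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo-temp   # the only change from "foo"
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo     # unchanged, so existing Services still select these pods
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - name: foo
          image: ubuntu:latest
          command: ["/bin/bash", "-ec", "sleep infinity"]

Step 3 then recreates "foo" with the new selector (e.g. app: foo2 in both matchLabels and the pod template labels).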

I tested shortcutting the process (combining steps 1 and 2), but it doesn't work: Helm deletes one deployment before it creates the other, and then you have downtime.

The good news is that in my case I didn't need to change any other descriptors (charts), so it was not so bad. All the relationships (traffic routing, etc.) were made via label matching. Since foo-temp had the same labels, the relationships worked automatically. The only issue was that my HPA referenced the deployment by name, not by labels. Instead of modifying it, I left foo-temp without an HPA and just specified a high replica count for it. The HPA didn't complain when its target didn't exist between steps 2 and 3.
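For context, this is why the HPA is tied to the name: scaleTargetRef references a Deployment by name, so it never picks up foo-temp. A minimal sketch (the metric and replica bounds are assumptions, not taken from the answer):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: foo
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: foo              # name-based reference; "foo-temp" is not covered
  minReplicas: 1           # assumed bounds
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # assumed metric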

Fletch

In my experience while using Helm, when I run

helm upgrade release -f values .

I do not get downtime. Also, when using Helm, I noticed that it does not terminate the old pods until the new deployment is ready (X/X). I can suggest using it; this way it is about as painless as it gets.
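One way to observe that behavior is to follow the rollout (the deployment name here is illustrative):

kubectl rollout status deployment/foo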

Also, the section Updating a Deployment in the Kubernetes docs says that "A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed."

Therefore, you can use label changes with Helm.
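To be precise, this only works for pod-template labels that are not part of the selector; changing those triggers a rollout, while the selector itself stays immutable. An illustrative fragment of the question's Deployment (the track key is made up):

spec:
  selector:
    matchLabels:
      app: foo        # immutable: must stay as-is
  template:
    metadata:
      labels:
        app: foo      # must keep satisfying the selector
        track: blue   # extra template labels may change and trigger a rollout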

Hopefully this was of some help.

Beware, untried method: kubectl has an edit subcommand, which enabled me to update ConfigMaps, PersistentVolumeClaims, etc. Maybe you can use it to update your Deployment. Syntax:

kubectl edit [resource] [resource-name]
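For the Deployment in the question, that would be, for example:

kubectl edit deployment foo -n default

(Note that editing spec.selector this way hits the same immutability error, since kubectl edit performs an update rather than a delete-and-recreate.)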

But before doing that, please choose a proper text editor, since you will be dealing with YAML-formatted files. Do so by setting:

export KUBE_EDITOR=/bin/{nano,vim,yourFavEditor}
Catastrophe
  • My question is more about the limitation further down in the same doc: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#label-selector-updates. I get that this is a rare upgrade, but it would still be nice to have a standard process for this and not feel like I'm hacking through the k8s undergrowth to get things done. – snugghash Mar 19 '21 at 04:27
  • I think if it works and gives you what you need, you do not need to feel that way. Also, these kinds of changes happen; there is just no easy one-line solution for this that I know of. This method is almost the same as applying the changed YAML file without deleting it, which forces Kubernetes to re-configure the deployment. – Catastrophe Mar 19 '21 at 06:54