I have a Kubernetes cluster running version 1.17.17. I want to increase the CPU/RAM of a node using kOps. When running the kops update cluster command, I expected it to return a preview of my old instance type vs. the new instance type.

However, it returns a long list of "Will create resources"/"Will modify resources" entries.

I want to know why it shows such a long log of changes it will execute instead of only the instance type change I made. Also, is it safe to apply these changes?
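For reference, this is roughly the workflow I am following; the cluster name, state store, and instance group name below are placeholders:

    # Edit the instance group and change machineType to the larger instance type
    kops edit instancegroup nodes --name my-cluster.example.com --state s3://my-kops-state

    # Preview the changes kops would apply (this is where the long log appears)
    kops update cluster --name my-cluster.example.com --state s3://my-kops-state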

  • Considering that kOps uses auto-scaling groups, you can't just shut down a node, reconfigure its instance size, and restart it: a manually terminated instance would get deleted and recreated by your IaaS. Though I'm not very familiar with kOps, from my understanding, configuration updates that involve changes to nodes are expected to deploy new instances, not reconfigure existing ones. – SYN Aug 13 '21 at 06:46
  • The one change you have listed in your screenshot is a create-resource change. In this case it looks like you are moving from a cluster without a bastion to a cluster with a bastion. Is that expected? You may want to check the diff of your cluster spec (see the sketch after these comments). – Ole Markus With Aug 15 '21 at 08:26
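To compare specs as suggested above, the cluster and instance group definitions can be dumped with read-only commands like these (cluster name and state store are placeholders):

    # Dump the full cluster spec and all instance group specs for comparison
    kops get cluster my-cluster.example.com --state s3://my-kops-state -o yaml
    kops get instancegroups --name my-cluster.example.com --state s3://my-kops-state -o yaml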

1 Answer

After the cluster update you will need to do a rolling update on the cluster. The nodes will be terminated one by one, and new ones will come up to replace them. While a node is going down, the workloads running on it are shifted to the remaining nodes. Small tip: remove all PodDisruptionBudgets first. Also, the log is fine, don't worry.
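A rough sketch of those steps; the cluster name, state store, and PDB name/namespace are placeholders:

    # List PodDisruptionBudgets that could block node draining during the roll
    kubectl get pdb --all-namespaces

    # Delete a blocking PDB (hypothetical name and namespace)
    kubectl delete pdb my-app-pdb -n my-namespace

    # Roll the nodes: each one is drained, terminated, and replaced in turn
    kops rolling-update cluster --name my-cluster.example.com --state s3://my-kops-state --yes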

  • So this long log of "will create resources"/"will modify resources" entries is safe? – Felix Labayen Aug 13 '21 at 12:38
  • @FelixLabayen I think it's safe. – Klevi Merkuri Aug 16 '21 at 08:24
  • I will replicate my cluster using Velero. Hopefully I will reproduce the updates as well. – Felix Labayen Aug 16 '21 at 10:13
  • Not safe. I was stuck evicting the dns-controller pod for more than 2 hours. Maybe you know the reason? https://stackoverflow.com/q/68826587/2503754 – Felix Labayen Aug 18 '21 at 04:36
  • I think when you did the rolling update it was hanging because of a PDB (PodDisruptionBudget); that's why it wasn't removing the node and evicting the pods onto the new one. You can try deleting that PDB, or any other PDB in your cluster, and then run the rolling update with --yes (see the check sketched below). – Klevi Merkuri Aug 18 '21 at 20:54
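A quick way to check whether a PDB is what blocks the eviction; the PDB name and namespace here are guesses:

    # A matching PDB with ALLOWED DISRUPTIONS of 0 will block the drain
    kubectl get pdb --all-namespaces
    kubectl describe pdb my-app-pdb -n kube-system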