5

I have 6 google nodes with single core and kube-system pods take too much of CPU.

  default                    scylla-2                                            200m (21%)    500m (53%)  1Gi (38%)        1Gi (38%)
  kube-system                fluentd-gcp-v2.0.9-p9pvs                            100m (10%)    0 (0%)      200Mi (7%)       300Mi (11%)
  kube-system                heapster-v1.4.3-dcd99c9f8-n6wb2                     138m (14%)    138m (14%)  301856Ki (11%)   301856Ki (11%)
  kube-system                kube-dns-778977457c-gctgs                           260m (27%)    0 (0%)      110Mi (4%)       170Mi (6%)
  kube-system                kube-dns-autoscaler-7db47cb9b7-l9jhv                20m (2%)      0 (0%)      10Mi (0%)        0 (0%)
  kube-system                kube-proxy-gke-scylla-default-pool-f500679a-7dhh    100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kubernetes-dashboard-6bb875b5bc-n4xsm               100m (10%)    100m (10%)  100Mi (3%)       300Mi (11%)
  kube-system                l7-default-backend-6497bcdb4d-cncr4                 10m (1%)      10m (1%)    20Mi (0%)        20Mi (0%)
  kube-system                tiller-deploy-dccdb6fd9-7hd2s                       0 (0%)        0 (0%)      0 (0%)           0 (0%)

Is there an easy way to lower the CPU request/limit for all kube-system pods by a factor of 10?

I understand memory is needed for things to function properly, but CPU could be lowered without any major issue in a dev environment. What happens if DNS works 10 times slower? 27% of a node for a single system DNS pod is too much.

user3130782
  • 841
  • 1
  • 6
  • 15
  • [This](https://stackoverflow.com/a/55330880/5774603) answer helped me with the same problem by using vertical autoscaling. – Denis Isaev Dec 26 '20 at 18:54

2 Answers

3

As per the documentation: to specify a CPU request for a Container, include the `resources:requests` field in the Container's resource manifest. To specify a CPU limit, include `resources:limits`. See the example below:

apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: cpu-example
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"
    args:
    - -cpus
    - "2"

One CPU, in GCP Kubernetes, is equivalent to 1 GCP core. The CPU limit for a Pod is the sum of the CPU limits of all the Containers in the Pod.

Pod scheduling is based on requests. A Pod is scheduled to run on a Node only if the Node has enough CPU resources available to satisfy the Pod’s CPU request.
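For an existing kube-system workload such as kube-dns, the same fields can be lowered by patching its Deployment rather than the Pod directly. A sketch only: the container index, the deployment name, and the target value (the question's 260m cut to a tenth) are assumptions to verify against your own cluster first, and GKE's addon manager may reconcile kube-system addons back to their shipped values.

```shell
# Inspect the current requests first; container order and names vary by GKE version.
kubectl -n kube-system get deployment kube-dns -o yaml

# Sketch: lower the first container's CPU request (assumed index 0) to 26m.
kubectl -n kube-system patch deployment kube-dns --type=json -p '[
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/resources/requests/cpu",
   "value": "26m"}
]'
```

Because this edits the Deployment's pod template, it triggers a rolling update instead of hitting the `spec: Forbidden` error you get when patching a running Pod.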

Alioua
  • 1,663
  • 1
  • 9
  • 18
  • It's fine for pods I create, but is there a way to lower it for `kube-system`? – user3130782 Jul 05 '18 at 07:13
  • 1
    It is not easy to update container resources on the fly, especially for kube-system components like kube-dns. You may get a forbidden message like `{ spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds .....}`. For now, using a Deployment to manage your container workload and triggering a rolling update is a good idea – Alioua Jul 05 '18 at 16:17
  • 1
    In the testing environment, you may change it via `kubectl edit -n kube-system deployment kube-dns`. Because of the rolling-update nature of the change, a new ReplicaSet is created. If you're resource-constrained that the new pods can't be scheduled, you may need to manually delete the old ReplicaSet and its pods. – Krab Nov 14 '18 at 10:11
0

You can create a default CPU request/limit manifest (a `LimitRange`) and apply it to the kube-system namespace:
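A minimal LimitRange along the lines of the linked docs page (the 0.5/1 values are that page's example defaults, not a recommendation):

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: kube-system
spec:
  limits:
  - default:        # default CPU limit for Containers that omit one
      cpu: "1"
    defaultRequest: # default CPU request for Containers that omit one
      cpu: "0.5"
    type: Container

Note that a LimitRange only fills in defaults for Containers that don't specify their own values; it will not override the explicit requests that the existing kube-system pods already set.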

Now if a Container is created in the kube-system namespace, and the Container does not specify its own values for CPU request and CPU limit, the Container is given a default CPU request of 0.5 and a default CPU limit of 1.

https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/#create-a-limitrange-and-a-pod

Notauser
  • 406
  • 2
  • 10