
I'm running a GKE Autopilot cluster with the ASM feature enabled.
The cluster is for a development environment, so I want to keep its maintenance cost as low as possible.

Because istio-injection is enabled, every pod in the cluster gets an istio-proxy sidecar, but the proxy requests nearly 300GiB of disk, even though the pod spec (from get pod -o yaml) only requests about 1GiB:

kubectl get pod <pod-name> -o yaml
...
    resources:
      limits:
        cpu: 250m
        ephemeral-storage: 1324Mi
        memory: 256Mi
      requests:
        cpu: 250m
        ephemeral-storage: 1324Mi
        memory: 256Mi
...
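
For reference, a quick way to list each container's actual requests and limits is below (this assumes jq is available; the pod name is a placeholder):

kubectl get pod <pod-name> -o json \
  | jq '.spec.containers[] | {name, requests: .resources.requests, limits: .resources.limits}'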

[Screenshot: disk usage graph]

Is this nearly 300GiB disk request needed to run ASM, or can I reduce it?

[edited 2023-03-01]

To reproduce this, deploy the YAML below to a GKE cluster with ASM. In this case, the default namespace must be labeled for istio-injection.
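
As a rough example, the label can be applied like this (istio-injection=enabled is the common form; a revision-based ASM install uses istio.io/rev=<revision> instead):

kubectl label namespace default istio-injection=enabled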

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-test
    service: nginx-test
  name: nginx-test
spec:
  ports:
    - name: http
      port: 80
  selector:
    app: nginx-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-test
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
      annotations:
        sidecar.istio.io/proxyCPU: 250m
        sidecar.istio.io/proxyMemory: 256Mi
    spec:
      containers:
        - image: nginx
          imagePullPolicy: IfNotPresent
          name: nginx-test
          ports:
            - containerPort: 80
akrsum
1 Answer


If your impression is that 300GiB is more than what you need for development, then you can reduce or limit the requested resources. I can include documentation about setting limits for these resources as a reference.
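
As a sketch only, an explicit ephemeral-storage request and limit on the application container would look like this (the 1Gi/2Gi values are purely illustrative, following the Kubernetes documentation linked in the comments below):

    spec:
      containers:
        - image: nginx
          name: nginx-test
          resources:
            requests:
              ephemeral-storage: 1Gi
            limits:
              ephemeral-storage: 2Gi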

Dion V
  • Thank you for your answer. According to the pod YAML, the proxy should request about 1GiB of disk, but it actually requests 300GiB (this is the oddest part). Since I use Autopilot mode in this case, limit settings will be ignored. I already set `proxyCPU` and `proxyMemory` [by using annotations](https://cloud.google.com/service-mesh/docs/troubleshooting/troubleshoot-sidecar-proxies#the_istio-proxy_container_is_killed_because_of_a_oom_event), but I can't find an override setting for ephemeral-storage :( – akrsum Feb 25 '23 at 12:57
  • You can check this reference: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage . If this does not limit the size, can you share your YAML file? – Dion V Feb 28 '23 at 19:44
  • Thank you for your answer again! I edited the post, so please see the YAML. By the way, I found that the graph indicates the limit of the ephemeral storage, not the requested amount. What I'm concerned about is the cost of the cluster, so if the ephemeral-storage limit doesn't impact the cost, that's no problem, but I have no idea whether it does. – akrsum Mar 01 '23 at 00:14
  • The limit doesn't affect your cost. The limit is "how much of this resource I'm willing to pay for"; you will be charged by usage or by the requested amount, whichever is higher. – Javier Pazos Mar 15 '23 at 09:30