
I am trying to understand the master/node deployment concept in Play with Kubernetes: https://labs.play-with-k8s.com/

I have two nodes and one master.

The nodes have the following memory configuration:

[screenshot of node memory configuration]

[node1 ~]$ kubectl describe pod myapp-7f4dffc449-qh7pk
Name:           myapp-7f4dffc449-qh7pk
Namespace:      default
Priority:       0
Node:           node3/192.168.0.16
Start Time:     Tue, 07 Feb 2023 12:31:23 +0000
Labels:         app=myapp
                pod-template-hash=7f4dffc449
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/myapp-7f4dffc449
Containers:
  myapp:
    Container ID:   
    Image:          changan1111/newdocker:latest
    Image ID:       
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             1Gi
    Requests:
      cpu:                500m
      ephemeral-storage:  1Gi
      memory:             1Gi
    Environment:          <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t4nf7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-t4nf7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t4nf7
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason               Age   From               Message
  ----     ------               ----  ----               -------
  Normal   Scheduled            34s   default-scheduler  Successfully assigned default/myapp-7f4dffc449-qh7pk to node3
  Normal   Pulling              31s   kubelet            Pulling image "changan1111/newdocker:latest"
  Warning  Evicted              25s   kubelet            The node was low on resource: ephemeral-storage.
  Warning  ExceededGracePeriod  15s   kubelet            Container runtime did not kill the pod within specified grace period.

My yaml file is here: https://raw.githubusercontent.com/changan1111/UserManagement/main/kube/kube.yaml

I don't see anything wrong in the spec, but I am still getting The node was low on resource: ephemeral-storage.

How to resolve this?

Disk Usage:
overlay          10G  130M  9.9G   2% /
tmpfs            64M     0   64M   0% /dev
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sdb         64G   29G   36G  44% /etc/hosts
shm              64M     0   64M   0% /dev/shm
shm              64M     0   64M   0% /var/lib/docker/containers/403c120b0dd0909bd34e66d86c58fba18cd71468269e1aaa66e3244d331c3a1e/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/56dd63dad42dd26baba8610f70f1a0bd22fdaea36742c32deca3c196ce181851/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/50c4585ae8cc63de9077c1a58da67cc348c86a6643ca21a06b8998f94a2a2daf/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/6e9529ad6e6a836e77b17c713679abddf861fdc0e86946484dc2ec68a00ca2ff/mounts/shm
tmpfs            16G   12K   16G   1% /var/lib/kubelet/pods/8e56095e-b0ec-4f13-a022-d29d04897410/volumes/kubernetes.io~secret/kube-proxy-token-j7sl8
shm              64M     0   64M   0% /var/lib/docker/containers/2b84d6dfebd4ea0c379588985cd43b623004632e71d63d07a39d521ddf694e8e/mounts/shm
tmpfs            16G   12K   16G   1% /var/lib/kubelet/pods/1271ca18-97d0-48d2-9280-68eb8c57795f/volumes/kubernetes.io~secret/kube-router-token-rmpqv
shm              64M     0   64M   0% /var/lib/docker/containers/c4506095bf36356790795353862fc13b759d72af8edc0e4233341f2d3234fa02/mounts/shm
tmpfs            16G   12K   16G   1% /var/lib/kubelet/pods/39885a73-d724-4be8-a9cf-3de8756c5b0c/volumes/kubernetes.io~secret/coredns-token-ckxbw
tmpfs            16G   12K   16G   1% /var/lib/kubelet/pods/8f137411-3af6-4e44-8be4-3e4f79570531/volumes/kubernetes.io~secret/coredns-token-ckxbw
shm              64M     0   64M   0% /var/lib/docker/containers/c32431f8e77652686f58e91aff01d211a5e0fb798f664ba675715005ee2cd5b0/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/3e284dd5f9b321301647eeb42f9dd82e81eb78aadcf9db7b5a6a3419504aa0e9/mount


Events:
  Type     Reason     Age    From               Message
  ----     ------     ----   ----               -------
  Normal   Scheduled  3m16s  default-scheduler  Successfully assigned default/myapp-b5856bb-4znkj to node4
  Normal   Pulling    3m15s  kubelet            Pulling image "changan1111/newdocker:latest"
  Normal   Pulled     83s    kubelet            Successfully pulled image "changan1111/newdocker:latest" in 1m51.97169753s
  Normal   Created    28s    kubelet            Created container myapp
  Normal   Started    27s    kubelet            Started container myapp
  Warning  Evicted    1s     kubelet            Pod ephemeral local storage usage exceeds the total limit of containers 500Mi.
  Normal   Killing    1s     kubelet            Stopping container myapp

YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets: 
      -  name: dockercreds
      containers:
      - name: myapp
        image: changan1111/newdocker:latest
        resources:
          limits:
            memory: "2Gi"
            cpu: "500m"
            ephemeral-storage: "2Gi"
          requests:
            ephemeral-storage: "1Gi"
            cpu: "500m"
            memory: "1Gi"
        ports:
        - containerPort: 3000


---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
    nodePort: 31110
  type: LoadBalancer
   
ChanGan
  • check the node disk usage df -h – Bijendra Feb 07 '23 at 12:48
  • added disk usage – ChanGan Feb 08 '23 at 07:31
  • @ChanGan, Is your issue resolved? If No, Refer to this [Youtube video](https://www.google.com/search?q=kubernetes+play+with+k8s+video&rlz=1CAZVTZ_enIN1010&ei=uiMkZJfgD8Gz4-EPvJ-jkA4&oq=kubernetes+playwithk8s+vidoe&gs_lcp=Cgxnd3Mtd2l6LXNlcnAQAxgAMgcIIRCgARAKMgcIIRCgARAKOgoIABBHENYEELADOggIABAFEB4QDToICAAQigUQhgM6BAghEBVKBAhBGABQhAJYiQ5gviRoAXABeACAAeQBiAGeB5IBBTAuNS4xmAEAoAEByAEIwAEB&sclient=gws-wiz-serp#fpstate=ive&vld=cid:e0824b92,vid:bkrcAjclqYI) and the below Answer & Comments, which may help to resolve your issue. Let me know if you require further support. – Veera Nagireddy Mar 29 '23 at 11:51

1 Answer


Worker nodes may be running out of disk space in which case you should see something like no space left on device or The node was low on resource: ephemeral-storage.

The mitigation is to provision the node VMs with larger disks; in managed environments such as Cloud Composer, this means specifying a larger disk size during environment creation.

Pod eviction and scheduling problems are side effects of Kubernetes limits and requests, usually caused by a lack of planning. See Understanding Kubernetes pod evicted and scheduling problems for more information.

Refer to the similar SO question on how to set a quota (limits.ephemeral-storage, requests.ephemeral-storage) to limit this; otherwise any container can write any amount of data to its node's filesystem.
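
As a rough sketch (the name and sizes below are illustrative assumptions, applied to the default namespace), such a quota could look like:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: ephemeral-storage-quota    # illustrative name
  namespace: default
spec:
  hard:
    requests.ephemeral-storage: 2Gi    # total scratch space all pods in the namespace may request
    limits.ephemeral-storage: 4Gi      # total ephemeral-storage limit across all pods in the namespace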

Warning: Pod ephemeral local storage usage exceeds the total limit of containers 500Mi.

This may be because you are putting an upper limit on ephemeral-storage usage by setting resources.limits.ephemeral-storage to 500Mi. Try removing limits.ephemeral-storage if that is safe, or raise the value to match what the application actually writes.
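
For example, the resources block in the Deployment's container spec could be adjusted along these lines (the 4Gi figure is an illustrative assumption, not a recommendation; size it to the application's real scratch usage):

        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
            ephemeral-storage: "1Gi"
          limits:
            cpu: "500m"
            memory: "2Gi"
            ephemeral-storage: "4Gi"   # raise this (or omit the line) so the container is not evicted for exceeding it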

Also see How to determine kubernetes pod ephemeral storage request and limit and How to avoid running out of ephemeral storage space on your Kubernetes worker nodes for more information.
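
Another common technique (not part of the answer above, just a sketch) is to give the container a dedicated emptyDir volume with a sizeLimit for its scratch data, which makes the storage usage explicit and bounded: the kubelet evicts the pod once that volume exceeds the limit. This assumes the app writes its temporary data under /tmp; adjust the path to wherever changan1111/newdocker actually writes:

      containers:
      - name: myapp
        image: changan1111/newdocker:latest
        volumeMounts:
        - name: scratch
          mountPath: /tmp              # assumed scratch directory; change to the app's real write path
      volumes:
      - name: scratch
        emptyDir:
          sizeLimit: 1Gi               # kubelet evicts the pod if this volume grows past 1Gi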

Veera Nagireddy