
I am new to Kubernetes and was following this guide on deploying an EFK stack on a local cluster. After I create the statefulset.yml file and run kubectl create -f statefulset.yml, the pods never start up.

Running kubectl rollout status ... just keeps printing Waiting for 3 pods to be ready...

I am using a docker-desktop cluster.
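
For reference, the claim template part of statefulset.yml follows the guide and looks roughly like this (a sketch rather than my exact file; the commented-out storage class and the size are placeholders):

volumeClaimTemplates:
- metadata:
    name: data                       # produces PVCs named data-elasticsearch-0, -1, -2
    labels:
      app: elasticsearch
  spec:
    accessModes: [ "ReadWriteOnce" ]
    # storageClassName: ...          # whatever the guide specifies; if no matching StorageClass
                                     # exists on docker-desktop, the claim never binds
    resources:
      requests:
        storage: 100Gi               # example size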

NAME                  READY   STATUS    RESTARTS   AGE
pod/elasticsearch-0   0/1     Pending   0          45m


NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   30h




NAME                             READY   AGE
statefulset.apps/elasticsearch   0/3     45m


Name:           elasticsearch-0
Namespace:      logging
Priority:       0
Node:           <none>
Labels:         app=elasticsearch
                controller-revision-hash=elasticsearch-6dd997c6d8
                statefulset.kubernetes.io/pod-name=elasticsearch-0
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  StatefulSet/elasticsearch
Init Containers:
  fix-permissions:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      chown -R 1000:1000 /usr/share/elasticsearch/data
    Environment:  <none>
    Mounts:
      /usr/share/elasticsearch/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvqz5 (ro)
  increase-vm-max-map:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sysctl
      -w
      vm.max_map_count=262144
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvqz5 (ro)
  increase-fd-ulimit:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      ulimit -n 65536
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvqz5 (ro)
Containers:
  elasticsearch:
    Image:       docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    Ports:       9200/TCP, 9300/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      cpu:  1
    Requests:
      cpu:  100m
    Environment:
      cluster.name:                  k8s-logs
      node.name:                     elasticsearch-0 (v1:metadata.name)
      discovery.seed_hosts:          elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch
      cluster.initial_master_nodes:  elasticsearch-0,elasticsearch-1,elasticsearch-2
      ES_JAVA_OPTS:                  -Xms512m -Xmx512m
    Mounts:
      /usr/share/elasticsearch/data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gvqz5 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-elasticsearch-0
    ReadOnly:   false
  default-token-gvqz5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gvqz5
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  3m39s (x44 over 64m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims
  • Hi DarthSett, what does it show when you run `kubectl get all -n `? Do you see some pods initializing? – Ullaakut Feb 08 '20 at 18:02
  • @Ullaakut I added the pic in the original post. – DarthSett Feb 08 '20 at 18:12
  • Usually that `kubectl` command would print out text, not an image. Can you replace the picture with the actual output of the command? (`kubectl describe -n logging pod elasticsearch-0` also might have some hints at the end.) – David Maze Feb 08 '20 at 18:23
  • [pod has unbound PersistentVolumeClaims](https://stackoverflow.com/questions/52668938/pod-has-unbound-persistentvolumeclaims) describes that error in more detail. I can't find a specific question that describes a Docker Desktop workaround. – David Maze Feb 08 '20 at 19:04
  • IDK why, but it's running after I restarted my entire cluster. – DarthSett Feb 08 '20 at 22:39
  • @DarthSett, Even if it's solved now, in cases like this you should also describe the PVC and check whether your PVs are bound to the PVCs. I had faced this issue too, because my PV was not getting bound to the PVC due to a missing storage class. – Nish Feb 10 '20 at 07:20
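
Summarizing the comments into concrete commands (data-elasticsearch-0 and the logging namespace come from the describe output above; the rest is standard kubectl):

kubectl get pvc -n logging                              # are the claims Bound or stuck Pending?
kubectl describe pvc data-elasticsearch-0 -n logging    # the events say why a claim is unbound
kubectl get storageclass                                # is there a (default) StorageClass to provision from?
kubectl get pv                                          # are any PersistentVolumes available to bind?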

0 Answers