I created a 3-node cluster in GKE using the command below:

gcloud container clusters create kubia --num-nodes 3 --machine-type=f1-micro
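
Node status can be listed with the standard command (a minimal check; only stock kubectl is assumed):

kubectl get nodes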

The status of all three nodes is NotReady. When I inspected a node using kubectl describe node <node>, I got the following output:

λ kubectl describe node gke-kubia-default-pool-c324a5d8-2m14
Name:               gke-kubia-default-pool-c324a5d8-2m14
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/fluentd-ds-ready=true
                    beta.kubernetes.io/instance-type=f1-micro
                    beta.kubernetes.io/os=linux
                    cloud.google.com/gke-nodepool=default-pool
                    cloud.google.com/gke-os-distribution=cos
                    failure-domain.beta.kubernetes.io/region=asia-south1
                    failure-domain.beta.kubernetes.io/zone=asia-south1-a
                    kubernetes.io/hostname=gke-kubia-default-pool-c324a5d8-2m14
Annotations:        container.googleapis.com/instance_id: 1338348980238562031
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 02 Jan 2020 11:52:25 +0530
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Conditions:
  Type                          Status    LastHeartbeatTime                 LastTransitionTime                Reason                          Message
  ----                          ------    -----------------                 ------------------                ------                          -------
  KernelDeadlock                False     Thu, 02 Jan 2020 11:52:30 +0530   Thu, 02 Jan 2020 11:52:29 +0530   KernelHasNoDeadlock             kernel has no deadlock
  ReadonlyFilesystem            False     Thu, 02 Jan 2020 11:52:30 +0530   Thu, 02 Jan 2020 11:52:29 +0530   FilesystemIsNotReadOnly         Filesystem is not read-only
  CorruptDockerOverlay2         False     Thu, 02 Jan 2020 11:52:30 +0530   Thu, 02 Jan 2020 11:52:29 +0530   NoCorruptDockerOverlay2         docker overlay2 is functioning properly
  FrequentUnregisterNetDevice   False     Thu, 02 Jan 2020 11:52:30 +0530   Thu, 02 Jan 2020 11:52:29 +0530   NoFrequentUnregisterNetDevice   node is functioning properly
  FrequentKubeletRestart        False     Thu, 02 Jan 2020 11:52:30 +0530   Thu, 02 Jan 2020 11:52:29 +0530   NoFrequentKubeletRestart        kubelet is functioning properly
  FrequentDockerRestart         False     Thu, 02 Jan 2020 11:52:30 +0530   Thu, 02 Jan 2020 11:52:29 +0530   NoFrequentDockerRestart         docker is functioning properly
  FrequentContainerdRestart     False     Thu, 02 Jan 2020 11:52:30 +0530   Thu, 02 Jan 2020 11:52:29 +0530   NoFrequentContainerdRestart     containerd is functioning properly
  NetworkUnavailable            False     Thu, 02 Jan 2020 11:52:31 +0530   Thu, 02 Jan 2020 11:52:31 +0530   RouteCreated                    RouteController created a route
  MemoryPressure                Unknown   Thu, 02 Jan 2020 11:52:52 +0530   Thu, 02 Jan 2020 11:53:38 +0530   NodeStatusUnknown               Kubelet stopped posting node status.
  DiskPressure                  Unknown   Thu, 02 Jan 2020 11:52:52 +0530   Thu, 02 Jan 2020 11:53:38 +0530   NodeStatusUnknown               Kubelet stopped posting node status.
  PIDPressure                   Unknown   Thu, 02 Jan 2020 11:52:52 +0530   Thu, 02 Jan 2020 11:53:38 +0530   NodeStatusUnknown               Kubelet stopped posting node status.
  Ready                         Unknown   Thu, 02 Jan 2020 11:52:52 +0530   Thu, 02 Jan 2020 11:53:38 +0530   NodeStatusUnknown               Kubelet stopped posting node status.
  OutOfDisk                     Unknown   Thu, 02 Jan 2020 11:52:25 +0530   Thu, 02 Jan 2020 11:53:38 +0530   NodeStatusNeverUpdated          Kubelet never posted node status.
Addresses:
  InternalIP:   10.160.0.34
  ExternalIP:   34.93.231.83
  InternalDNS:  gke-kubia-default-pool-c324a5d8-2m14.asia-south1-a.c.k8s-demo-263903.internal
  Hostname:     gke-kubia-default-pool-c324a5d8-2m14.asia-south1-a.c.k8s-demo-263903.internal
Capacity:
 attachable-volumes-gce-pd:  15
 cpu:                        1
 ephemeral-storage:          98868448Ki
 hugepages-2Mi:              0
 memory:                     600420Ki
 pods:                       110
Allocatable:
 attachable-volumes-gce-pd:  15
 cpu:                        940m
 ephemeral-storage:          47093746742
 hugepages-2Mi:              0
 memory:                     236900Ki
 pods:                       110
System Info:
 Machine ID:                 7231bcf8072c0dbd23802d0bf5644676
 System UUID:                7231BCF8-072C-0DBD-2380-2D0BF5644676
 Boot ID:                    819fa587-bd7d-4909-ab40-86b3225f201e
 Kernel Version:             4.14.138+
 OS Image:                   Container-Optimized OS from Google
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.9.7
 Kubelet Version:            v1.13.11-gke.14
 Kube-Proxy Version:         v1.13.11-gke.14
PodCIDR:                     10.12.3.0/24
ProviderID:                  gce://k8s-demo-263903/asia-south1-a/gke-kubia-default-pool-c324a5d8-2m14
Non-terminated Pods:         (7 in total)
  Namespace                  Name                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                               ------------  ----------  ---------------  -------------  ---
  default                    kubia-4hbfv                                        100m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
  kube-system                event-exporter-v0.2.4-5f88c66fb7-6kh96             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  kube-system                fluentd-gcp-scaler-59b7b75cd7-8fhkt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  kube-system                fluentd-gcp-v3.2.0-796rf                           100m (10%)    1 (106%)    200Mi (86%)      500Mi (216%)   28m
  kube-system                kube-dns-autoscaler-bb58c6784-nkz8g                20m (2%)      0 (0%)      10Mi (4%)        0 (0%)         27m
  kube-system                kube-proxy-gke-kubia-default-pool-c324a5d8-2m14    100m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
  kube-system                prometheus-to-sd-qw7sm                             1m (0%)       3m (0%)     20Mi (8%)        20Mi (8%)      28m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                   Requests     Limits
  --------                   --------     ------
  cpu                        321m (34%)   1003m (106%)
  memory                     230Mi (99%)  520Mi (224%)
  ephemeral-storage          0 (0%)       0 (0%)
  attachable-volumes-gce-pd  0            0
Events:
  Type    Reason                   Age                From                                               Message
  ----    ------                   ----               ----                                               -------
  Normal  Starting                 43m                kubelet, gke-kubia-default-pool-c324a5d8-2m14      Starting kubelet.
  Normal  NodeHasSufficientMemory  43m (x2 over 43m)  kubelet, gke-kubia-default-pool-c324a5d8-2m14      Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    43m (x2 over 43m)  kubelet, gke-kubia-default-pool-c324a5d8-2m14      Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     43m (x2 over 43m)  kubelet, gke-kubia-default-pool-c324a5d8-2m14      Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  43m                kubelet, gke-kubia-default-pool-c324a5d8-2m14      Updated Node Allocatable limit across pods
  Normal  NodeReady                43m                kubelet, gke-kubia-default-pool-c324a5d8-2m14      Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeReady
  Normal  Starting                 42m                kube-proxy, gke-kubia-default-pool-c324a5d8-2m14   Starting kube-proxy.
  Normal  Starting                 28m                kubelet, gke-kubia-default-pool-c324a5d8-2m14      Starting kubelet.
  Normal  NodeHasSufficientMemory  28m (x2 over 28m)  kubelet, gke-kubia-default-pool-c324a5d8-2m14      Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    28m (x2 over 28m)  kubelet, gke-kubia-default-pool-c324a5d8-2m14      Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     28m (x2 over 28m)  kubelet, gke-kubia-default-pool-c324a5d8-2m14      Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  28m                kubelet, gke-kubia-default-pool-c324a5d8-2m14      Updated Node Allocatable limit across pods
  Normal  NodeReady                28m                kubelet, gke-kubia-default-pool-c324a5d8-2m14      Node gke-kubia-default-pool-c324a5d8-2m14 status is now: NodeReady
  Normal  Starting                 28m                kube-proxy, gke-kubia-default-pool-c324a5d8-2m14   Starting kube-proxy.

Where am I going wrong? I am able to create pods using the kubectl run kubia-3 --image=luksa/kubia --port=8080 --generator=run/v1 command.
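Note that kubectl run with the run/v1 generator only creates the ReplicationController object, so the command can succeed even if the resulting pods never reach the Running state. A quick, generic way to see whether the pods were actually scheduled onto the NotReady nodes (standard kubectl commands; <pod-name> is a placeholder for one of the generated pod names):

kubectl get pods -o wide
kubectl describe pod <pod-name>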

  • The node is unreachable. You can see the taint `Taints: node.kubernetes.io/unreachable:NoSchedule`, which is added when the node is unreachable or the kubelet on the node is not running. Check if the node is up, and if it is, check the kubelet logs. – Shashank V Jan 02 '20 at 07:15
  • How do I check if the node is up and check the kubelet logs? I am new to Kubernetes. – zilcuanu Jan 02 '20 at 07:16
  • You can probably ping the node IP to see if it is up. If it is up, you can SSH into the node and check the kubelet logs (see the command sketch after these comments) - https://stackoverflow.com/questions/34113476/where-are-the-kubernetes-kubelet-logs-located – Shashank V Jan 02 '20 at 07:20
  • How long did you wait after cluster creation? It usually takes a while for a node to be Ready. – FL3SH Jan 02 '20 at 07:26
  • @FL3SH I waited for more than 20 minutes. First the status was Ready and later changed to NotReady. – zilcuanu Jan 02 '20 at 07:27
  • I think it might be hard to find the issue. I see `NodeStatusUnknown` and `NodeStatusNeverUpdated`, but I don't know what caused it. I would create a cluster from the Google Cloud console and then use `Equivalent REST or command line` to obtain a command to reuse later. – FL3SH Jan 02 '20 at 07:40
  • You are going wrong when you create a k8s cluster with micro-machines as nodes. – suren Jan 02 '20 at 10:30
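
Following the suggestion in the comments, a minimal sketch of checking whether the node is up and reading the kubelet logs; the node name and zone are taken from the kubectl describe output above, and the commands assume the usual systemd setup on GKE's Container-Optimized OS nodes:

# SSH into the affected node
gcloud compute ssh gke-kubia-default-pool-c324a5d8-2m14 --zone asia-south1-a

# On the node: check that the kubelet is running and look at its recent logs
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -n 100

# f1-micro leaves only ~230Mi allocatable memory (see Allocatable above), so also check memory headroom
free -m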

1 Answer

I just created a Public Issue Tracker issue to follow up on this.

In the meantime, as a workaround, I would recommend deploying the nodes with the default machine type, n1-standard-1.
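
For reference, recreating the cluster with the default machine type would look roughly like this (cluster name and node count are taken from the original command; the delete step assumes the failing cluster is no longer needed):

gcloud container clusters delete kubia
gcloud container clusters create kubia --num-nodes 3 --machine-type n1-standard-1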

  • I had the exact issue this person reported, and when I used n1-standard-1 instead of "f1-micro", the problem was resolved right away. – user3595231 Jan 18 '20 at 01:06