
I have a 3-node Kubernetes cluster; the host names are host_1, host_2, and host_3.

$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
host_1     Ready     master    134d      v1.10.1
host_2     Ready     <none>    134d      v1.10.1
host_3     Ready     <none>    134d      v1.10.1

I have defined 3 local persistent volumes of size 100M, each mapped to a local directory on its node. I used the following descriptor 3 times, where <hostname> is one of host_1, host_2, host_3:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume-<hostname>
spec:
  capacity:
    storage: 100M
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /opt/jnetx/volumes/test-volume
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <hostname>

After applying the three PV YAMLs, I have the following:

$ kubectl get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM    STORAGECLASS    REASON    AGE
test-volume-host_1   100M       RWO            Delete           Available            local-storage             58m
test-volume-host_2   100M       RWO            Delete           Available            local-storage             58m
test-volume-host_3   100M       RWO            Delete           Available            local-storage             58m
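
For reference, the local-storage class these PVs point to is not shown above; a minimal no-provisioner class (a sketch, my exact definition may differ) looks like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# Immediate binding lets the claim bind before the pod is scheduled;
# the local-volume docs recommend WaitForFirstConsumer instead, which
# delays binding until a pod using the claim is scheduled.
volumeBindingMode: Immediate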

Now, I have a very simple container that writes to a file. The file should be located on the local persistent volume. I deploy it as a StatefulSet with 1 replica and map the volume via the StatefulSet's volumeClaimTemplates:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: filewriter
spec:
  serviceName: filewriter
  ...
  replicas: 1
  template:
    spec:
      containers:
        - name: filewriter
          ...
          volumeMounts:
          - mountPath: /test/data
            name: fw-pv-claim
  volumeClaimTemplates:
  - metadata:
      name: fw-pv-claim
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-storage
      resources:
        requests:
          storage: 100M

The volume claim seems to have been created OK and bound to the PV on the first host:

$ kubectl get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                              STORAGECLASS    REASON    AGE
test-volume-host_1   100M       RWO            Delete           Bound       default/fw-pv-claim-filewriter-0   local-storage             1m
test-volume-host_2   100M       RWO            Delete           Available                                      local-storage             1h
test-volume-host_3   100M       RWO            Delete           Available                                      local-storage             1h

But the pod hangs in the Pending state:

$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
filewriter-0                 0/1       Pending   0          4s

If we describe the pod, we can see the following error:

$ kubectl describe pod filewriter-0
Name:           filewriter-0
...
Events:
  Type     Reason            Age              From               Message
  ----     ------            ----             ----               -------
  Warning  FailedScheduling  2s (x8 over 1m)  default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) had volume node affinity conflict. 

Can you help me figure out what is wrong? Why can't it just create the pod?

Gena L

2 Answers


It seems that the one node where the PV is available has a taint that your StatefulSet does not have a toleration for.
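
If running this workload on the master is acceptable, a toleration along these lines could be added to the StatefulSet's pod template (a sketch; the key assumes the default kubeadm master taint node-role.kubernetes.io/master:NoSchedule):

  template:
    spec:
      # Tolerate the default master taint so the pod can be scheduled
      # onto the node where the bound PV lives.
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule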

Radek 'Goblin' Pieczonka
  • Yes, it turned out that the first node (the master) was prohibiting custom pods from running on it, but the first PVC was bound to that node. What you said was obvious from the error message, but your answer made me go read about Kubernetes taints :))) – Gena L Sep 07 '18 at 12:57
  • @GenaL can you please tell what the issue was? I am also facing a similar issue here. – Abhijit Oct 29 '18 at 13:09
  • As I mentioned above, in my case the issue was that custom pods were not allowed to be scheduled on the master node; that's the default k8s installation option. You can check via kubectl describe node. How to change that is described here: https://stackoverflow.com/questions/43147941/allow-scheduling-of-pods-on-kubernetes-master (the relevant commands are sketched below). – Gena L Oct 31 '18 at 07:09
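
A sketch of the commands the last comment refers to (host_1 and the kubeadm default taint key are assumptions here):

# Show the taints on the master node
kubectl describe node host_1 | grep -i taints

# Remove the NoSchedule taint (the trailing "-" deletes it) so ordinary
# pods can be scheduled on the master
kubectl taint nodes host_1 node-role.kubernetes.io/master:NoSchedule-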

I had a very similar case to the above and observed the same symptom (node affinity conflict). In my case the issue was that I had 2 volumes attached to 2 different nodes but was trying to use them within 1 pod.

I detected this by using kubectl describe pvc name-of-pvc and noting the selected-node annotation. Once I set the pod to use volumes that were both on one node, I no longer had issues.
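
A sketch of that check (the claim name is just an example taken from the question; the full annotation key should be volume.kubernetes.io/selected-node):

# Describe the claim and look at its annotations and events
kubectl describe pvc fw-pv-claim-filewriter-0

# Or dump the claim and grep for the scheduler's node selection
kubectl get pvc fw-pv-claim-filewriter-0 -o yaml | grep selected-node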

Alex Moore-Niemi