
I am deploying a Kubernetes cluster on AWS EKS and using EBS volumes as persistent storage. Below is the spec for a StatefulSet whose pods use the volumes. It works fine after deployment, but when I delete the pods by running `kubectl delete -f spec.yml`, the PVCs are not deleted; their status is still Bound. I think that makes sense, because deleting the volumes would mean losing data.

When I redeploy the pods with `kubectl apply -f spec.yml`, the first pod runs successfully but the second one fails. `kubectl describe pod` gives me this error: `0/1 nodes are available: 1 node(s) had volume node affinity conflict.`

Everything works fine if I delete all the PVCs first. What is the correct way to redeploy all the pods without deleting the PVCs?

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: default
spec:
  serviceName: es-entrypoint
  replicas: 3
  selector:
    matchLabels:
      name: es
  volumeClaimTemplates:
  - metadata:
      name: ebs-claim
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: ebs-sc
      resources:
        requests:
          storage: 1024Gi
  template:
...
Joey Yi Zhao
  • `kubectl delete -f spec.yml` deletes all resources in `spec.yml` - if you don't want that, you need to run a different command or use a different manifest file. – Jonas Sep 22 '21 at 16:02

1 Answer


This is because the pod got scheduled on a worker node in a different availability zone than the previously created PV. EBS volumes are zonal, so a pod using one can only run on a node in the same zone as the volume. It can't really be diagnosed further here since you didn't post the StorageClass (ebs-sc) spec or the description of the PV, but you can see here a question that explains the same problem.
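The usual way to avoid this for dynamically provisioned volumes is to set `volumeBindingMode: WaitForFirstConsumer` on the StorageClass, so the PV is created only after the pod is scheduled and therefore lands in the same zone as the node. Since you didn't post `ebs-sc`, this is only a sketch of what such a StorageClass might look like, assuming the EBS CSI driver (`ebs.csi.aws.com`) and a `gp3` volume type:

```yaml
# Hypothetical ebs-sc StorageClass; the real one was not posted.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com             # EBS CSI driver (the in-tree provisioner would be kubernetes.io/aws-ebs)
volumeBindingMode: WaitForFirstConsumer  # provision the PV only after the pod is scheduled, in that node's AZ
parameters:
  type: gp3                              # assumed volume type
```

Note that this only affects newly provisioned volumes. For the PVCs that are already Bound, check which zone each PV requires with `kubectl describe pv` (the Node Affinity section) and compare it with the `topology.kubernetes.io/zone` label on your nodes; the StatefulSet pod can only be scheduled onto a node in that zone, so make sure your node group has capacity there.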

gohm'c