I have an EKS cluster running a StatefulSet on EC2 nodes, with an EBS-backed StorageClass. I want to move a pod of the StatefulSet from node 1 to node 2, so I drain node 1 like so:
kubectl drain --ignore-daemonsets --delete-emptydir-data node1
The problem is that the pod doesn't come up on node 2, because the PV was provisioned in us-east-1a and can't be attached to node 2, which is in us-east-1b (the cross-zone issue described here: https://stackoverflow.com/a/55514852/1259990).
When I describe the pod, I get the following scheduling error:
1 node(s) had volume node affinity conflict
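For reference, this is roughly how I confirmed the zone mismatch (the PV and node names are placeholders for my actual resource names):

```shell
# Show the zone the PV is pinned to. For dynamically provisioned EBS
# volumes the zone appears as a node-affinity term on the PV:
kubectl get pv pv-in-us-east-1a -o jsonpath='{.spec.nodeAffinity}'

# Compare against the zone label on the target node
# (older clusters may use failure-domain.beta.kubernetes.io/zone instead):
kubectl get node node2 -L topology.kubernetes.io/zone
```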
I'm wondering if I can recreate the PV in us-east-1b without having to delete/redeploy the StatefulSet. If I were to delete the PV from my cluster (and possibly the PVC as well):
kubectl delete pv pv-in-us-east-1a
Would the StatefulSet recreate the PV in the correct zone, given that node 2 is the only schedulable node? If not, is there another way to accomplish this without deleting and recreating the full StatefulSet? The data on the PV is not important and doesn't need to be preserved.
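Here is the rough sequence I'm considering trying. The PVC and pod names are guesses based on the usual `<volumeClaimTemplate>-<statefulset>-<ordinal>` naming pattern, not my actual resource names:

```shell
# Delete the claim first, then the volume it's bound to:
kubectl delete pvc data-mystatefulset-0
kubectl delete pv pv-in-us-east-1a

# My understanding is that a replacement PVC is only created when the
# pod itself is recreated, so delete the stuck pod as well and let the
# StatefulSet controller reschedule it (presumably onto node 2):
kubectl delete pod mystatefulset-0
```

I'm unsure whether this sequence actually causes the new PV to be provisioned in us-east-1b, which is exactly what I'm asking about.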
(I would just try to delete the PV, but I don't actually want to bring down this particular service if the PV doesn't get recreated.)