I have the following PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium-delete
  resources:
    requests:
      storage: 50Gi
This PVC is used by two workloads:

A StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: foo
  labels:
    component: foo
spec:
  serviceName: foo
  selector:
    matchLabels:
      component: foo
  template:
    metadata:
      labels:
        component: foo
    spec:
      containers:
      - image: foo:1.0.0
        name: foo
        volumeMounts:
        - mountPath: /a/specific/path
          name: shared
          readOnly: true
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: my-pvc
A Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bar
spec:
  replicas: 1
  selector:
    matchLabels:
      component: bar
  template:
    metadata:
      labels:
        component: bar
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                component: foo
            topologyKey: kubernetes.io/hostname
      containers:
      - image: bar:1.0.0
        name: bar
        volumeMounts:
        - mountPath: /a/specific/path
          name: shared
          readOnly: true
      volumes:
      - name: shared
        persistentVolumeClaim:
          claimName: my-pvc
If Pod A and Pod B are not on the same node, the volume cannot be mounted by one of the pods, since ReadWriteOnce only allows the disk to be attached to a single node.
If Pod A and Pod B reference each other with pod affinity, and both (re)start at the same time, the scheduler cannot place either of them: each pod waits for the other to be running first (circular dependency).
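To make the circular case concrete, mutual affinity here means adding something like the following to the foo pod template (a sketch only; it assumes the bar pods carry a component: bar label, mirroring what bar already declares towards foo):

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          component: bar  # assumes bar pods are labelled component: bar
      topologyKey: kubernetes.io/hostname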
If Pod A and Pod B both reference a specific node with node affinity, what happens when that node is decommissioned as the cluster scales down?
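By "referencing a specific node" I mean pinning both pod templates with a nodeAffinity like the one below (sketch only; the hostname value is a placeholder, not a real node in my cluster):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - aks-nodepool1-00000000-vmss000000  # placeholder node name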
How can I ensure my foo and bar workloads always start on the same node, given that they share a PVC?