I'm using a PersistentVolumeClaim to store data in a container:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
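For context, the claim itself binds fine to a dynamically provisioned EBS volume (output trimmed):

# kubectl get pvc test-pvc
NAME       STATUS   VOLUME                                     ...
test-pvc   Bound    pvc-2312eb4c-c270-11e8-8d4e-065333a7774e   ...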
The declaration in the Deployment's Pod spec:
spec:
  volumes:
    - name: test-data-vol
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: test
      image: my.docker.registry/test:1.0
      volumeMounts:
        - mountPath: /var/data
          name: test-data-vol
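For completeness, this Pod spec sits in an ordinary apps/v1 Deployment with one replica; I haven't set an explicit update strategy, so it uses the default RollingUpdate. Roughly (trimmed to a sketch):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      # ... the volumes/containers block shown above ...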
When I started it the first time, the volume mounted correctly. But when I tried to update the container image:
- image: my.docker.registry/test:1.0
+ image: my.docker.registry/test:1.1
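I applied the change as a standard rolling update, i.e. something like this (assuming the Deployment and the container are both named test, as above):

# kubectl set image deployment/test test=my.docker.registry/test:1.1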
The volume failed to mount in the new Pod:
# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
test-7655b79cb6-cgn5r   0/1     ContainerCreating   0          3m
test-bf6498559-42vvb    1/1     Running             0          11m
# kubectl describe pod test-7655b79cb6-cgn5r
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m5s default-scheduler Successfully assigned test-7655b79cb6-cgn5r to ip-*-*-*-*.us-west-2.compute.internal
Warning FailedAttachVolume 3m5s attachdetach-controller Multi-Attach error for volume "pvc-2312eb4c-c270-11e8-8d4e-065333a7774e" Volume is already exclusively attached to one node and can't be attached to another
Normal SuccessfulMountVolume 3m4s kubelet, ip-*-*-*-*.us-west-2.compute.internal MountVolume.SetUp succeeded for volume "default-token-x82km"
Warning FailedMount 62s kubelet, ip-*-*-*-*.us-west-2.compute.internal Unable to mount volumes for pod "test-7655b79cb6-cgn5r(fab0862c-d1cf-11e8-8d4e-065333a7774e)": timeout expired waiting for volumes to attach/mount for pod "test-7655b79cb6-cgn5r". list of unattached/unmounted volumes=[test-data-vol]
It seems that Kubernetes can't detach this volume from one node and re-attach it to another while the old Pod is still using it. How do I handle this correctly? I need the data on this volume to be available to the new version of the Deployment once the old version has stopped.
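Would switching the Deployment's update strategy to Recreate be the right fix here? As I understand it, Recreate deletes the old Pod before creating the new one, so the EBS volume would be detached from the old node before the new Pod tries to attach it:

spec:
  strategy:
    type: Recreate  # old Pod is stopped (and its volume detached) before the new Pod starts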