I'm trying to create a new Kubernetes deployment that will let me persist a pod's state when it is restarted or shut down. For background, the Kubernetes instance is a managed Amazon EKS cluster, and I am trying to mount an Amazon EFS-backed Persistent Volume into the pod.
Unfortunately, as I have it now, the PV mounts to /etc/ as desired, but the directory ends up nearly empty, containing only a few files that were modified during boot.
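For reference, the PV and PVC behind the claimName: efs used below look roughly like the following sketch (this assumes the AWS EFS CSI driver; the file system ID, StorageClass name, and storage size are placeholders rather than the real values):

# Sketch of the EFS-backed PV/PVC pair; volumeHandle, storageClassName,
# and storage size are placeholder values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi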
The Deployment YAML looks like this:
kind: Deployment
apiVersion: apps/v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testpod
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: testpod
    spec:
      volumes:
        - name: efs
          persistentVolumeClaim:
            claimName: efs
      containers:
        - name: testpod
          image: 'xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/testpod:latest'
          args:
            - /bin/init
          ports:
            - containerPort: 443
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: efs
              mountPath: /etc
              subPath: etc
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - ALL
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
Any ideas what could be going wrong? I would expect /etc/ to be populated with the /etc/ contents from the image.
Edit:
This works fine in Docker using the same image: creating a volume with docker volume create <name> and then mounting it with -v <name>:/etc.
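Concretely, the Docker steps that behave as expected look roughly like this (the volume name is just illustrative):

# Create a named volume; on first use of an empty named volume,
# Docker copies the image's existing files at the mount path into it
docker volume create etcdata
# Run the same image with the volume mounted over /etc
docker run -d -v etcdata:/etc xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/testpod:latest /bin/init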