
I'm trying to create a new Kubernetes deployment that will let me persist a pod's state when it is restarted or shut down. For some background, the Kubernetes instance is a managed Amazon EKS cluster, and I am trying to mount an Amazon EFS-backed PersistentVolume into the pod.

Unfortunately, as I have it now, the PV mounts at /etc/ as desired, but the directory is nearly empty: the only contents are a few files that were modified during boot.

The deployment YAML looks like this:

kind: Deployment
apiVersion: apps/v1

spec:
  replicas: 1
  selector:
    matchLabels:
      app: testpod
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: testpod
    spec:
      volumes:
        - name: efs
          persistentVolumeClaim:
            claimName: efs
      containers:
        - name: testpod
          image: 'xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/testpod:latest'
          args:
            - /bin/init
          ports:
            - containerPort: 443
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: efs
              mountPath: /etc
              subPath: etc
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - ALL
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler

Any ideas what could be going wrong? I would expect /etc/ to be populated with the contents of the image.

Edit:

This seems to work fine in Docker using the same image: I create a volume with docker volume create <name> and then mount it with -v <name>:/etc, and the volume is populated with the image's /etc contents.

ChrisDevWard

3 Answers


Kubernetes does not have the Docker feature that populates volumes based on the contents of the image. If you create a new volume (whether an emptyDir volume or something based on cloud storage like AWS EBS or EFS) it will start off empty, and hide whatever was in the container.

As such, you can’t mount a volume over large parts of the container; it won’t work to mount a volume over your application’s source tree, or over /etc as you show. For files in /etc in particular, a better approach would be to use a Kubernetes ConfigMap to hold specific files you want to add to that directory. (Store your config files in source control and add them as part of the deployment sequence; don’t try to persist untracked modifications to deployed files.)
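
For example, a minimal sketch of that approach (the ConfigMap name testpod-config and the file app.conf are made up for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: testpod-config
data:
  app.conf: |
    listen 443

and in the Deployment's pod template:

    spec:
      volumes:
        - name: config
          configMap:
            name: testpod-config
      containers:
        - name: testpod
          image: 'xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/testpod:latest'
          volumeMounts:
            - name: config
              # mounting a single file with subPath leaves the rest of
              # the image's /etc intact
              mountPath: /etc/app.conf
              subPath: app.conf

Note that a file mounted from a ConfigMap with subPath is not updated when the ConfigMap later changes; the pod has to be restarted to pick up new contents.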

David Maze
  • Thanks! Unfortunately for this specific application ConfigMap won't do the trick. I'll have to find a different way around it. – ChrisDevWard Feb 26 '20 at 00:01

My guess would be that mounts in containers work exactly the same way as mounts in an operating system: if you mount something at /etc, you simply overwrite (a better word is "cover") whatever was there before. If you mount an empty EFS volume there, you get an empty folder.
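
A quick way to see that shadowing (the pod and volume names here are made up):

apiVersion: v1
kind: Pod
metadata:
  name: mount-shadow-demo
spec:
  restartPolicy: Never
  volumes:
    - name: scratch
      emptyDir: {}   # starts empty, just like a fresh EFS volume
  containers:
    - name: demo
      image: busybox
      # the image's /etc is covered by the empty volume; the listing
      # shows only the files the runtime injects (hosts, resolv.conf, ...)
      command: ["ls", "-a", "/etc"]
      volumeMounts:
        - name: scratch
          mountPath: /etc

That matches what the question observes: the only things left in /etc are files written at container startup.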

I tried what you tried in Docker and (to my surprise) it works the way you describe. That is likely because Docker volumes are simply a different technology from Kubernetes volume claims (especially ones backed by EFS). This explains it: Docker mount to folder overriding content. tl;dr: if the Docker volume is empty, the image's files are mirrored into it.

I don't personally think you can achieve what you're trying to do with k8s and EFS.

welcomeboredom

I think you might be interested in "nsfdsuds": it establishes an overlayfs for a Kubernetes container in which the writable top layer of the overlayfs can be on a PersistentVolume of your choice.

https://github.com/Sha0/nsfdsuds

Shao