
I have an application running in a pod in Kubernetes. I would like to store some output log files on a persistent storage volume.

In order to do that, I created a volume over NFS and bound it to the pod through the related volume claim. When I try to write to or access the shared folder, I get a "permission denied" message, since the NFS share is apparently read-only.
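
For reference, this is roughly how the problem shows up from inside the pod (volume-test and /home refer to the pod definition below; the commands are only shown to illustrate the check):

# touching a file in the mounted folder fails with "permission denied"
kubectl exec volume-test -- touch /home/test.log
# checking whether the share is mounted read-only (ro) or read-write (rw)
kubectl exec volume-test -- mount | grep /home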

The following is the JSON file I used to create the volume:

{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "task-pv-test"
  },
  "spec": {
    "capacity": {
      "storage": "10Gi"
    },
    "nfs": {
      "server": <IPAddress>,
      "path": "/export"
    },
    "accessModes": [
      "ReadWriteMany"
    ],
    "persistentVolumeReclaimPolicy": "Delete",
    "storageClassName": "standard"
  }
}

The following is the pod configuration file:

kind: Pod
apiVersion: v1
metadata:
  name: volume-test
spec:
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
    - name: volume-test
      image: <ImageName>
      volumeMounts:
        - mountPath: /home
          name: task-pv-test-storage
          readOnly: false
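
For completeness, these are the kinds of commands I use to create the objects and check that the claim is bound (the file names are just examples):

kubectl create -f task-pv-test.json         # the PersistentVolume above (file name is an example)
kubectl create -f task-pv-test-claim.yaml   # the PersistentVolumeClaim shown in the update below
kubectl create -f volume-test.yaml          # the pod above
kubectl get pv,pvc                          # both should report STATUS "Bound"
kubectl describe pod volume-test            # the claim should be listed under "Volumes"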

Is there a way to change permissions?


UPDATE

Here are the PVC and NFS config:

PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-test-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi

NFS CONFIG

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nfs-client-provisioner-557b575fbc-hkzfp",
    "generateName": "nfs-client-provisioner-557b575fbc-",
    "namespace": "default",
    "selfLink": "/api/v1/namespaces/default/pods/nfs-client-provisioner-557b575fbc-hkzfp",
    "uid": "918b1220-423a-11e8-8c62-8aaf7effe4a0",
    "resourceVersion": "27228",
    "creationTimestamp": "2018-04-17T12:26:35Z",
    "labels": {
      "app": "nfs-client-provisioner",
      "pod-template-hash": "1136131967"
    },
    "ownerReferences": [
      {
        "apiVersion": "extensions/v1beta1",
        "kind": "ReplicaSet",
        "name": "nfs-client-provisioner-557b575fbc",
        "uid": "3239b14a-4222-11e8-8c62-8aaf7effe4a0",
        "controller": true,
        "blockOwnerDeletion": true
      }
    ]
  },
  "spec": {
    "volumes": [
      {
        "name": "nfs-client-root",
        "nfs": {
          "server": <IPAddress>,
          "path": "/Kubernetes"
        }
      },
      {
        "name": "nfs-client-provisioner-token-fdd2c",
        "secret": {
          "secretName": "nfs-client-provisioner-token-fdd2c",
          "defaultMode": 420
        }
      }
    ],
    "containers": [
      {
        "name": "nfs-client-provisioner",
        "image": "quay.io/external_storage/nfs-client-provisioner:latest",
        "env": [
          {
            "name": "PROVISIONER_NAME",
            "value": "<IPAddress>/Kubernetes"
          },
          {
            "name": "NFS_SERVER",
            "value": <IPAddress>
          },
          {
            "name": "NFS_PATH",
            "value": "/Kubernetes"
          }
        ],
        "resources": {},
        "volumeMounts": [
          {
            "name": "nfs-client-root",
            "mountPath": "/persistentvolumes"
          },
          {
            "name": "nfs-client-provisioner-token-fdd2c",
            "readOnly": true,
            "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
          }
        ],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "imagePullPolicy": "Always"
      }
    ],
    "restartPolicy": "Always",
    "terminationGracePeriodSeconds": 30,
    "dnsPolicy": "ClusterFirst",
    "serviceAccountName": "nfs-client-provisioner",
    "serviceAccount": "nfs-client-provisioner",
    "nodeName": "det-vkube-s02",
    "securityContext": {},
    "schedulerName": "default-scheduler",
    "tolerations": [
      {
        "key": "node.kubernetes.io/not-ready",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      },
      {
        "key": "node.kubernetes.io/unreachable",
        "operator": "Exists",
        "effect": "NoExecute",
        "tolerationSeconds": 300
      }
    ]
  },
  "status": {
    "phase": "Running",
    "hostIP": <IPAddress>,
    "podIP": "<IPAddress>,
    "startTime": "2018-04-17T12:26:35Z",
    "qosClass": "BestEffort"
  }
}

I have just removed some status information from the NFS config to make it shorter.

fragae

5 Answers


If you set the proper securityContext in the pod configuration, you can make sure the volume is mounted with the proper permissions.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  securityContext:
    fsGroup: 2000 
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  containers:
  - name: demo
    image: example-image
    volumeMounts:
    - name: task-pv-test-storage
      mountPath: /data/demo

In the above example, the storage will be mounted at /data/demo with group ID 2000, which is set by fsGroup. By setting fsGroup, all processes of the container also become part of the supplementary group ID 2000, so you should have access to the mounted files.
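
One quick way to check that fsGroup took effect is to exec into the example pod above (here named demo) and look at the groups and the mount point, for instance:

kubectl exec demo -- id                  # 2000 should appear among the supplementary groups
kubectl exec demo -- ls -ld /data/demo   # the directory should be group-owned by GID 2000
kubectl exec demo -- touch /data/demo/write-test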

You can read more about pod security context here: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

MrBlaise
  • That example doesn't use NFS, so there /data/demo has GID 2000. But if we change the PV to NFS, we also get a permission error there. – lokanadham100 Jun 14 '18 at 09:41
  • Tried it also with NFS and it didn't work with fsGroup. Probably because of this issue: https://github.com/kubernetes/examples/issues/260 – Kutzi Jan 08 '19 at 15:21
  • Why do you need to find out the users? The docs clearly state: "...Since fsGroup field is specified, all processes of the container are also part of the supplementary group ID 2000. The owner for volume /data/demo and any files created in that volume will be Group ID 2000." – yuranos Nov 29 '20 at 20:49
  • You are right, I have updated the answer. – MrBlaise Feb 13 '21 at 13:17
  • idk man, there seems to be evidence that `fsGroup` doesn't work for NFS; see this GitHub issue: https://github.com/kubernetes/examples/issues/260 – Elouan Keryell-Even Nov 04 '21 at 14:58

Thanks to 白栋天 for the tip. For instance, if the pod securityContext is set to:

securityContext:
  runAsUser: 1000
  fsGroup: 1000

you would ssh to the NFS host and run

chown 1000:1000 -R /some/nfs/path

If you do not know the user:group or many pods will mount it, you can run

chmod 777 -R /some/nfs/path
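
A quick way to confirm the change took effect, assuming the pod name (volume-test) and mount path (/home) from the question:

kubectl exec volume-test -- id                 # should report uid=1000 gid=1000, matching the securityContext above
kubectl exec volume-test -- touch /home/test   # should now succeed instead of "permission denied"
kubectl exec volume-test -- ls -ln /home       # new files should be owned by 1000:1000
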
AlaskaJoslin
  • From a security perspective, I am not sure chmod 777 is a good approach - BUT it was the solution for me, at least (after many frustrating hours). The funny thing, though, is that with dynamic/managed provisioning (https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client) this is not an issue at all. Anyway, thank you for the proposal, it will suffice for my homelab :-) – gimlichael Mar 01 '20 at 11:11
  • @gimlichael It seems that the dynamic provisioner does exactly this, chmod 777: https://github.com/kubernetes-incubator/external-storage/blob/master/nfs-client/cmd/nfs-client-provisioner/provisioner.go#L72 – Philipp Nowak Mar 19 '20 at 02:24
  • @gimlichael If you set "runAsUser: 1000" like in the example above, then chmod 755 should work. If you *only* set "fsGroup: 1000" without also setting the user like in the previous answers, then you'll at least need 770 (or 775), since the user will only run with the specified GID. – Sebastien Martin Oct 31 '22 at 01:33

A simple way is to get onto the NFS storage and chmod 777, or chown with the user ID used in your volume-test container.
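
A minimal sketch of that flow, using the pod name and export path from the question (the uid/gid values printed by `id` are what you would substitute into the chown; they are placeholders here):

# 1) Find the Linux uid/gid the container actually runs as
#    (this is not the Kubernetes object UID shown in the pod metadata).
kubectl exec volume-test -- id

# 2) On the NFS server, give that uid/gid ownership of the export,
#    or fall back to chmod -R 777 if several different uids need access.
chown -R <uid>:<gid> /export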

白栋天
  • I tried to change the owner using the user id from the volume-test container config file, but I got an invalid user message. The id looks like: "uid": "923ca461-4ec9-11e8-8ab3-8aaf7effe4a0". Is that the right one? – fragae May 04 '18 at 09:29
  • The user id is determined by the USER instruction at the end of the Dockerfile; the default is 0 (root). If you don't know the user id (which can be obtained by executing "id" in the container), then just use chmod -R 777. – 白栋天 May 04 '18 at 09:40
  • I'm not sure why anyone downvoted this. This question is specific to NFS, and apparently, as pointed out above, the NFS host needs to have the permissions set, as Kubernetes cannot manage the NFS host's permissions. – AlaskaJoslin Nov 07 '18 at 08:59

I'm a little confused about how you're trying to get things done; in any case, if I'm understanding you correctly, try this example:

  volumeClaimTemplates:
  - metadata:
      name: data
      namespace: kube-system
      labels:
        k8s-app: something
        monitoring: something
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi

And then maybe an init container to do something:

initContainers:
  - name: prometheus-init
    image: /something/bash-alpine:1.5
    command:
      - chown
      - -R
      - 65534:65534
      - /data
    volumeMounts:
      - name: data
        mountPath: /data

Or is it the volumeMounts you're missing out on:

volumeMounts:
  - name: config-volume
    mountPath: /etc/config
  - name: data
    mountPath: /data

My last comment would be to take note of the container filesystem: I think you're only allowed to write to /tmp, or was that just for CoreOS? I'd have to look that up.
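
Tying the init-container idea to the setup in the question, a minimal sketch could look like the following (the busybox image, the 1000:1000 ownership and the pod name are my assumptions; <ImageName> is the placeholder from the question). Note that if the NFS export uses root_squash, the chown run by the init container may still be refused:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: volume-test-init            # hypothetical variant of the question's pod
spec:
  volumes:
    - name: task-pv-test-storage
      persistentVolumeClaim:
        claimName: task-pv-test-claim
  initContainers:
    - name: fix-permissions         # runs before the app container, as root by default in busybox
      image: busybox:1.36
      command: ["sh", "-c", "chown -R 1000:1000 /home"]
      volumeMounts:
        - name: task-pv-test-storage
          mountPath: /home
  containers:
    - name: volume-test
      image: <ImageName>            # placeholder from the question
      volumeMounts:
        - name: task-pv-test-storage
          mountPath: /home
EOF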

Naim Salameh

Have you checked the permissions of the directory? Make sure read access is available to all.

sairam546