
I have provisioned NFS on top of DigitalOcean block storage to get the ReadWriteMany access mode. I can now share a PV between deployments, but I am unable to share it within a single deployment when I have multiple mount paths referencing the same claim name. Can someone explain why this is happening, whether this is the right way to use a PV, and, if NFS doesn't support this, what else I can use that will let me share a volume between pods using multiple mount paths?

Manifest

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 18Gi
  storageClassName: nfs

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        resources: {}
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /beta
          name: beta
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-data
      - name: beta
        persistentVolumeClaim:
          claimName: nfs-data

PVC DESCRIPTION

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
data-nfs-server-nfs-server-provisioner-0   Bound    pvc-442af801-0b76-444d-afea-382a12380926   20Gi       RWO            do-block-storage   24h
nfs-data                                   Bound    pvc-0ae84fe2-025b-450d-8973-b74c80275cb7   18Gi       RWX            nfs                1h


Name:          nfs-data
Namespace:     default
StorageClass:  nfs
Status:        Bound
Volume:        pvc-0ae84fe2-025b-450d-8973-b74c80275cb7
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-server-nfs-server-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      18Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                 Age                    From                                                                                                                      Message
  ----    ------                 ----                   ----                                                                                                                      -------
  Normal  ExternalProvisioning   2m16s (x2 over 2m16s)  persistentvolume-controller                                                                                               waiting for a volume to be created, either by external provisioner "cluster.local/nfs-server-nfs-server-provisioner" or manually created by system administrator
  Normal  Provisioning           2m16s                  cluster.local/nfs-server-nfs-server-provisioner_nfs-server-nfs-server-provisioner-0_8dd7b303-b9a1-4a07-8c6b-906b81c07402  External provisioner is provisioning volume for claim "default/nfs-data"
  Normal  ProvisioningSucceeded  2m16s                  cluster.local/nfs-server-nfs-server-provisioner_nfs-server-nfs-server-provisioner-0_8dd7b303-b9a1-4a07-8c6b-906b81c07402  Successfully provisioned volume pvc-0ae84fe2-025b-450d-8973-b74c80275cb7

ERROR

Name:           web-85f9fbf54-hfcvn
Namespace:      default
Priority:       0
Node:           pool-db4v93z2h-3yg9e/10.132.113.175
Start Time:     Thu, 25 Jun 2020 19:25:40 +0500
Labels:         app=web
                pod-template-hash=85f9fbf54
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/web-85f9fbf54
Containers:
  nginx:
    Container ID:   
    Image:          nginx:latest
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /beta from beta (rw)
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pdsgk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-data
    ReadOnly:   false
  beta:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-data
    ReadOnly:   false
  default-token-pdsgk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-pdsgk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age        From                           Message
  ----     ------       ----       ----                           -------
  Normal   Scheduled    <unknown>  default-scheduler              Successfully assigned default/web-85f9fbf54-hfcvn to pool-db4v93z2h-3yg9e
  Warning  FailedMount  22s        kubelet, pool-db4v93z2h-3yg9e  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[default-token-pdsgk data beta]: timed out waiting for the condition
  • Could you take a look at this [stackoverflow question](https://stackoverflow.com/questions/35443649)? It's mentioned there that it might not work because `the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/.` Additionally, there is another [answer](https://stackoverflow.com/a/52502771/11977760) with an example, but it uses 2 PVCs instead of 1. Could you try this and let me know if it worked for you? – Jakub Jun 26 '20 at 07:38
  • @jt97 I have seen both of those questions; the issue here is that the mount paths I am using are different and don't share the same root. The concept works well when I have 2 deployments with 1 mount path each using the same claim name, but as soon as I have multiple mount paths within the same deployment it fails. I believe the only solution is separate PVCs, but that would mean that if I have 4 mount paths within one deployment I would need 4 different PVCs. – Talha Latif Jun 26 '20 at 09:53
  • One more idea which came to my mind is subPath; take a look at this example from the [documentation](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath). If you want to try it you would have to use one path, for example /data/1 and /data/2. If that doesn't work I would say you have to use a different PVC for each volumeMounts path as you mentioned; that's the only way I found, take a look at example 2.3.1 [here](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_provisioning_storage_in_kubernetes). – Jakub Jun 26 '20 at 11:25

1 Answer


As I mentioned in the comments, you could try using subPath; take a look at the Kubernetes and OpenShift documentation about it.

Sometimes, it is useful to share one volume for multiple uses in a single Pod. The volumeMounts.subPath property can be used to specify a sub-path inside the referenced volume instead of its root.

Here is an example of a Pod with a LAMP stack (Linux Apache MySQL PHP) using a single, shared volume. The HTML contents are mapped to its html folder, and the databases will be stored in its mysql folder:

apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
    containers:
    - name: mysql
      image: mysql
      env:
      - name: MYSQL_ROOT_PASSWORD
        value: "rootpasswd"
      volumeMounts:
      - mountPath: /var/lib/mysql
        name: site-data
        subPath: mysql
    - name: php
      image: php:7.0-apache
      volumeMounts:
      - mountPath: /var/www/html
        name: site-data
        subPath: html
    volumes:
    - name: site-data
      persistentVolumeClaim:
        claimName: my-lamp-site-data

Databases are stored in the mysql folder.

HTML content is stored in the html folder.
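
Applied to your Deployment, a minimal sketch of that approach could look like the one below. This is only an illustration of the subPath idea: it keeps a single volume entry backed by your existing nfs-data claim, and the subPath directory names data and beta are assumptions on my part (kubelet creates them inside the share if they don't exist):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        # one volume, mounted twice into different sub-directories of the share
        - mountPath: /data
          name: nfs
          subPath: data
        - mountPath: /beta
          name: nfs
          subPath: beta
      volumes:
      # a single volume entry referencing the existing RWX claim
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs-data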


If that doesn't work for you, I would say you have to use a separate PVC for every mount path.

For example:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-web
spec:
  volumes:
    # List of volumes to use, i.e. *what* to mount
    - name: myvolume
      < volume details, see below >
    - name: mysecondvolume
      < volume details, see below >

  containers:
    - name: mycontainer
      volumeMounts:
        # List of mount directories, i.e. *where* to mount
        # We want to mount 'myvolume' into /usr/share/nginx/html
        - name: myvolume
          mountPath: /usr/share/nginx/html/
        # We want to mount 'mysecondvolume' into /var/log
        - name: mysecondvolume
          mountPath: /var/log/
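
A minimal sketch of what those volume details could look like with one claim per mount path (the claim names nfs-html and nfs-logs are placeholders I made up; each would be its own PersistentVolumeClaim):

  volumes:
    - name: myvolume
      persistentVolumeClaim:
        claimName: nfs-html    # placeholder claim; one PVC per mount path
    - name: mysecondvolume
      persistentVolumeClaim:
        claimName: nfs-logs    # placeholder claim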