
I have created a persistent volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "C:/Users/xxx/Desktop/pv"

I want my MySQL StatefulSet pods to save their data on it, so I wrote this volumeClaimTemplates:

  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

I expected this to request the persistent storage from the only persistent volume I have. Instead, the claim never binds to that volume.

1 Answer

StatefulSets require you to use storage classes in order to bind the correct PVs to the correct PVCs.

The correct way to make a StatefulSet mount local storage is to use volumes of the local type; take a look at the procedure below.


First, you create a storage class for the local volumes. Something like the following:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

It uses no-provisioner, so it cannot provision PVs automatically; you'll need to create them manually, but that's exactly what you want for local storage.
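If you want to sanity-check the class after creating it, something like the following works (the manifest filename here is just an assumption):

kubectl apply -f local-storage-sc.yaml

# Confirm it exists; the PROVISIONER column should read kubernetes.io/no-provisioner
# and VOLUMEBINDINGMODE should read WaitForFirstConsumer
kubectl get storageclass local-storage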

Second, you create your local PV, something like the following:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: "C:/Users/xxx/Desktop/pv"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - the-node-hostname-on-which-the-storage-is-located

This definition specifies the local path on the node, and also forces the PV to be used only on a specific node (the one matching the nodeSelectorTerms).

It also links this PV to the storage class created earlier. This means that, if a StatefulSet requests storage with that storage class, it can receive this disk (provided the requested space is less than or equal to its capacity, of course).
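If you're unsure what value to put under kubernetes.io/hostname, you can read it straight off the node labels, for example:

# Show nodes with the label value the nodeAffinity matches against
kubectl get nodes -L kubernetes.io/hostname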

Third, you can now link the StatefulSet:

volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 5Gi

When the StatefulSet Pod needs to be scheduled for the first time, the following happens:

  • A PVC is created and becomes Bound to the PV you just created
  • The Pod is scheduled to run on the node to which the bound PV is restricted
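You can verify the binding with something like the following (PVCs created from a volumeClaimTemplates are named <template-name>-<statefulset-name>-<ordinal>; the StatefulSet name mysql below is just an assumption):

# Hypothetical PVC name: template "data" + StatefulSet "mysql" + ordinal 0
kubectl get pvc data-mysql-0

# The PV's STATUS column should now read Bound
kubectl get pv pv-volume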

UPDATE:

In case you want to use hostPath storage instead of local storage (for example because you are on minikube, where hostPath is supported out of the box and is therefore easier), you need to change the PV declaration a bit, to something like the following:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/pv0001/

Now, the /data directory and all its content are persisted on the host (so if minikube gets restarted, it's still there). But if you want to mount specific directories of your host, you need to use minikube mount, for example:

minikube mount <source directory>:<target directory>

For example, you could do:

minikube mount C:/Users/xxx/Desktop/pv:/host/my-special-pv

and then you could use /host/my-special-pv as the hostPath inside the PV declaration.
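To confirm the mount is actually visible inside the minikube VM, you could do something like:

# Keep the mount running in one terminal...
minikube mount C:/Users/xxx/Desktop/pv:/host/my-special-pv

# ...then, from a second terminal, list the mounted directory inside the VM
minikube ssh -- ls -la /host/my-special-pv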

More info can be read in the docs.

AndD
  • Thank you very much! Just one question, what is the-node-hostname-on-which-the-storage-is-located exactly? Could you make an example? –  Jan 25 '22 at 11:41
  • Oh, the storage is located on a specific node of the cluster, so you just need to put the Hostname of that node there. It's usually its name if you do a k get nodes – AndD Jan 25 '22 at 11:44
  • Ok thank you a lot! Maybe I should have been more specific, I am using minikube. So I put minikube in the-node-hostname since it was the only output I got from k get nodes, but kubernetes seems not to find my local path when initializing the pv: `MountVolume.NewMounter initialization failed for volume "pv-volume" : path "C:/Users/xxx/Desktop/pv" does not exist ` –  Jan 25 '22 at 12:11
  • @gijoyah I updated the answer to explain how it changes if you run on minikube. The easiest solution is of course to save your PVs directly in subpaths of /data directory because those should be persisted out of the box. If not, you need to mount your desired paths to minikube and then you can use them as hostPaths at the paths you mounted them. – AndD Jan 25 '22 at 12:25
  • I am really sorry to bother you again, but I tried saving exactly like you did (in the data/pv0001 directory) but kubernetes tells me `0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. Error: stat /data/pv0001/: no such file or directory`. Should I be able to see that directory anywhere inside my local machine? –  Jan 25 '22 at 13:11
  • I think you just need to create that directory inside the VM with a mkdir /data/pv0001. Then it should be persisted between starts and stops of minikube (but I would double check it to be sure). And, I think that it would just be saved in a .vmdk file so I don't think you'll find the directory inside the local machine, you will just have the minikube disk file, probably inside your home-dir/.minikube/machines/minikube or something similar to that. If you want to have a directory, I think you need to use minikube mount commands. – AndD Jan 25 '22 at 13:20