
I'm currently trying to implement a PersistentVolume in my YAML files. I've read a lot of documentation on the internet, and I don't understand why, when I go to the dashboard pod, I get this message:

persistentvolumeclaim "karaf-conf" not found

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
  - name: karaf
    image: xxx/karaf:ids-1.1.0
    volumeMounts:
    - name: karaf-conf-storage
      mountPath: "/apps/karaf/etc"
  volumes:
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-claim

PersistentVolumeClaimKaraf.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi

PersistentVolume.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: karaf-conf
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/apps/karaf/etc"

You will find below the result of the command kubectl get pv

NAME                             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                         STORAGECLASS   REASON    AGE
karaf-conf                    100Mi      RWO            Retain           Terminating   default/karaf-conf-claim                            17h
karaf-conf-persistentvolume   100Mi      RWO            Retain           Released      default/karaf-conf                                  1h

kubectl get pvc

NAME                  STATUS        VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
karaf-conf-claim   Terminating   karaf-conf   10Mi       RWO            manual         17h
morla
  • Can you expand the question with the output of `kubectl get pv karaf-conf-persistentvolume -o yaml`? – Const Jun 27 '18 at 10:57

4 Answers


With hostPath, you don't need PersistentVolume or PersistentVolumeClaim objects, so this might be easier depending on your needs:

# file: pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: karafpod
spec:
  containers:
  - name: karaf
    image: xxx/karaf:ids-1.1.0
    volumeMounts:
    - name: karaf-conf-storage
      mountPath: "/apps/karaf/etc"  # Path mounted in container

  # Use hostPath here
  volumes:
    - name: karaf-conf-storage
      hostPath:
        path: "/apps/karaf/etc" # Path from the host

Then delete the other two YAML files, PersistentVolumeClaimKaraf.yml and PersistentVolume.yaml.

For official documentation, see: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
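To check that the hostPath mount actually works, a minimal sketch (assuming the pod definition above is saved as pod.yaml; the `ls` check is just one way to inspect the mount, and these commands need a live cluster):

```shell
# Apply the pod definition and wait until the container starts
kubectl apply -f pod.yaml
kubectl wait --for=condition=Ready pod/karafpod --timeout=60s

# The container should now see the contents of the node's /apps/karaf/etc
kubectl exec karafpod -- ls /apps/karaf/etc
```

Note that with hostPath the pod only sees the directory of the specific node it is scheduled on.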

Edit: Noticed that spec.containers[].volumeMounts[].mountPath and spec.volumes[].hostPath.path in the original post were the same, so I added comments in the YAML to clarify the purpose of each.

TIH
  • That's much easier than what is described here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/ – eeijlar Nov 17 '20 at 12:10
  • `HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as ReadOnly.` – Ri1a Jun 19 '22 at 17:35

I think the root cause of your issue is related to the Terminating state.

As a quick fix, you should create a new PV and PVC (with different names than those stuck in the Terminating state):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: karaf-conf-new
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/apps/karaf/etc"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: karaf-conf-newclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  

Edit your Pod yaml to use new claimName:

  volumes:
    - name: karaf-conf-storage
      persistentVolumeClaim:
        claimName: karaf-conf-newclaim

When a PersistentVolumeClaim is in the Terminating state, it suggests that you deleted a PVC while it was in active use by a Pod. You are now in a deadlock: your karafpod Pod won't start up unless the referenced PVC is in the Bound state.

From your outputs, I can see that there is a karaf-conf-persistentvolume PV which was bound to PVC karaf-conf. I would guess that you have tried to delete the PVCs.

As your PersistentVolumes have their ReclaimPolicy set to Retain, PVC karaf-conf was removed without issues, since it was not used by any Pod, and due to that policy PV karaf-conf-persistentvolume was kept.

However, your Pod karafpod claimed PVC karaf-conf-claim, which was bound to PV karaf-conf. As this Pod was running, the PVC and PV couldn't be removed.

Fix, if you would like to keep all the names the same:

  1. Delete Pod karafpod; you can use --grace-period to force it: kubectl delete pod <PODNAME> --grace-period=0 --force
  2. Delete PVC karaf-conf-claim and PV karaf-conf.
  3. Check that the PV and PVC were removed: kubectl get pv,pvc

You can also check which Pods are actively using a PVC. It can be achieved using the command from this thread:

kubectl get pods --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec |  select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
  4. Create PVC karaf-conf-claim and PV karaf-conf
  5. Deploy Pod karafpod
PjoterS

pv/karaf-conf is stuck in Terminating status; try to delete it and recreate it using type: DirectoryOrCreate, so the directory is created on the node if it doesn't exist:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: karaf-conf
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/apps/karaf/etc"
    type: DirectoryOrCreate
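Recreating it might look like this sketch (assuming the PV above is saved as PersistentVolume.yaml; --wait=false just returns immediately if the old object is stuck in Terminating):

```shell
kubectl delete pv karaf-conf --wait=false
kubectl apply -f PersistentVolume.yaml
kubectl get pv karaf-conf   # should eventually show status Available
```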
jmselmi

My suggestion is to recreate the PV and PVC, and to make sure you are running the Pod on the node host where the hostPath directory is configured.

Daein Park
  • When I try to delete the PV or PVC with **kubectl delete pvc/XXX** it says persistentvolume "karaf-conf" deleted, but when I run **kubectl delete pvc/XXX** it's still here :( – morla Jun 27 '18 at 11:55