
I have the following deployment.yaml, which I am using just to test whether the hostPath is working as expected.

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - image: ubuntu
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
      volumeMounts:
        - mountPath: /home/data/
          name: shared-vol-extensions
  volumes:
    - name: shared-vol-extensions
      persistentVolumeClaim:
        claimName: shared-vol-pvc
  restartPolicy: Always
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: shared-vol-pv
  namespace: default
spec:
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/Documents/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-vol-pvc
  namespace: default
spec:
  storageClassName: manual
  volumeName: shared-vol-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
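
I apply everything from this one file, and the PVC does report Bound (see the describe output further below); roughly:

>>> kubectl apply -f deployment.yaml
>>> kubectl get pvc shared-vol-pvc        # STATUS shows Bound to shared-vol-pv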

However, once the pod is running, when I `exec -it` into it and navigate to /home/data/, `ls` does not yield any results.

Even when I `touch testFile` within the pod, the directory on my computer does not show the file being created.
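
For clarity, the test looks roughly like this (pod name and mount path taken from the manifest above):

>>> kubectl exec -it busybox -- bash
root@busybox:/# ls /home/data/                # nothing listed
root@busybox:/# touch /home/data/testFile     # file never appears in /home/Documents/ on my machine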

I am running Docker Desktop (Kubernetes version 1.25.4, Docker Desktop 4.17.0 (99724)) on Ubuntu.

Any ideas why this is the case?

Update with additional info:

>>> kubectl describe pv shared-vol-pv
Name:            shared-vol-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    manual
Status:          Bound
Claim:           default/shared-vol-pvc
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        100Mi
Node Affinity:   <none>
Message:         
Source:
    Type:          HostPath (bare host directory volume)
    Path:          /home/Documents/
    HostPathType:  
Events:            <none>
>>> kubectl describe pod busybox
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             docker-desktop/192.168.65.4
Start Time:       Fri, 24 Mar 2023 00:34:22 +0800
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.1.1.1
IPs:
  IP:  10.1.1.1
Containers:
  busybox:
    Container ID:  docker://c1441be57becbf00d028f761b35606e49694a5c944c3f3735ddac8dc3e59447a
    Image:         ubuntu
    Image ID:      docker-pullable://ubuntu@sha256:67211c14fa74f070d27cc59d69a7fa9aeff8e28ea118ef3babc295a0428a6d21
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Fri, 24 Mar 2023 21:40:56 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Fri, 24 Mar 2023 00:34:23 +0800
      Finished:     Fri, 24 Mar 2023 21:40:44 +0800
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /home/data/ from shared-vol-extensions (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4s5jp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  shared-vol-extensions:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  shared-vol-pvc
    ReadOnly:   false
  kube-api-access-4s5jp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  21h   default-scheduler  0/1 nodes are available: 1 persistentvolumeclaim "shared-vol-pvc" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         21h   default-scheduler  Successfully assigned default/busybox to docker-desktop
  Normal   Pulled            21h   kubelet            Container image "ubuntu" already present on machine
  Normal   Created           21h   kubelet            Created container busybox
  Normal   Started           21h   kubelet            Started container busybox
  Normal   SandboxChanged    4m5s  kubelet            Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled            4m4s  kubelet            Container image "ubuntu" already present on machine
  Normal   Created           4m4s  kubelet            Created container busybox
  Normal   Started           4m4s  kubelet            Started container busybox

jake wong
  • YAML looks good. Run `kubectl describe pod busybox` for the events and `kubectl get pv shared-vol-pv` for the PersistentVolume status. Edit the question with the results – Siegfred V. Mar 23 '23 at 23:20
  • In general a `hostPath` volume will get a directory on a somewhat arbitrary node; I'd almost always avoid that volume type, except around some specialized cases of DaemonSets. Can you delete the PersistentVolume altogether and let the cluster's volume provisioner create the storage for you? Do you actually want a StatefulSet and not a bare Pod? – David Maze Mar 24 '23 at 00:12
  • @SiegfredV. I've updated with the additional information. – jake wong Mar 24 '23 at 13:47
  • @DavidMaze thanks for this - I'm just trying to do some local tests before proceeding further. The tests currently include working with 2 pods and moving / rewriting data across them with a shared vol. Thus, I'm testing it out now with `hostpath` but will be using something like `glusterfs` instead when I'm successful with the tests. :) – jake wong Mar 24 '23 at 13:48
  • Check this [post](https://stackoverflow.com/a/75204156/19371698) where it is suggested to use "hostpath" as the storageClassName (roughly as in the sketch below these comments) – Siegfred V. Mar 27 '23 at 22:27
  • Tried that; it does not seem to work. – jake wong Mar 28 '23 at 16:45
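
For reference, the variant suggested in the comments would look roughly like this, assuming Docker Desktop's built-in `hostpath` StorageClass and letting it provision the volume instead of binding to the manual PV:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-vol-pvc
  namespace: default
spec:
  storageClassName: hostpath   # Docker Desktop's dynamic hostPath provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi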

1 Answer


My suggestion is to check a WSL location: when you are using Linux containers, Kubernetes and Docker run inside a Linux-based virtual environment, so the hostPath refers to that virtual machine's filesystem rather than your host's.
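
A quick way to see where the hostPath actually ends up is to open a shell on the node itself. A minimal sketch, assuming the node name `docker-desktop` from the pod description above (`kubectl debug` mounts the node's root filesystem at /host inside the debug container):

>>> kubectl debug node/docker-desktop -it --image=busybox
/ # ls /host/home/Documents/      # look for testFile here, inside the VM's filesystem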