
I am trying to use the MicroK8s storage addon, but my PVC and pod are stuck in Pending and I don't know what is wrong. I am also using the "registry" addon, which uses the same storage addon, and that one works without a problem.

FYI: I have already restarted MicroK8s multiple times and even completely removed and reinstalled it, but the problem remains.
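For completeness, this is roughly how I check why the claim is stuck (just the commands, no output; the pod name placeholder is mine). `describe` shows binding/scheduling events, and the provisioner logs show whether it ever picked up the claim:

$ kubectl describe pvc wws-registry-claim
$ kubectl describe pod <registry-pod-name>
$ kubectl -n kube-system logs deploy/hostpath-provisioner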

YAML files:

# =================== pvc.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: wws-registry-claim
  spec:
    volumeName: registry-pvc
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: microk8s-hostpath

# =================== deployment.yaml (just spec section)
spec:
  serviceName: registry
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: registry
  template:
    metadata:
      labels:
        io.kompose.service: registry
    spec:
      containers:
      - image: {{ .Values.image }}
        name: registry-master
        ports:
        - containerPort: 28015
        - containerPort: 29015
        - containerPort: 8080
        resources:
          requests:
            cpu: {{ .Values.request_cpu }}
            memory: {{ .Values.request_memory }}
          limits:
            cpu: {{ .Values.limit_cpu }}
            memory: {{ .Values.limit_memory }}
        volumeMounts:
        - mountPath: /data
          name: rdb-local-data
        env:
        - name: RUN_ENV
          value: 'kubernetes'
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      volumes:
      - name: rdb-local-data
        persistentVolumeClaim:
          claimName: wws-registry-claim

Cluster info:

$ kubectl get pvc -A
NAMESPACE            NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
container-registry   registry-claim       Bound     pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca   20Gi       RWX            microk8s-hostpath   56m
default              wws-registry-claim   Pending   registry-pvc                               0                         microk8s-hostpath   23m


$ kubectl get pv -A
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS        REASON   AGE
pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca   20Gi       RWX            Delete           Bound    container-registry/registry-claim   microk8s-hostpath            56m


$ kubectl get pods -n kube-system 
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-9b8997588-vk5vt                 1/1     Running   0          57m
hostpath-provisioner-7b9cb5cdb4-wxcp6   1/1     Running   0          57m
metrics-server-v0.2.1-598c8978c-74krr   2/2     Running   0          57m
tiller-deploy-77855d9dcf-4cvsv          1/1     Running   0          46m


$ kubectl -n kube-system logs hostpath-provisioner-7b9cb5cdb4-wxcp6 
I0322 12:31:31.231110       1 controller.go:293] Starting provisioner controller 87fc12df-8b0a-11eb-b910-ee8a00c41384!
I0322 12:31:31.231963       1 controller.go:893] scheduleOperation[lock-provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.235618       1 leaderelection.go:154] attempting to acquire leader lease...
I0322 12:31:31.237785       1 leaderelection.go:176] successfully acquired lease to provision for pvc container-registry/registry-claim
I0322 12:31:31.237841       1 controller.go:893] scheduleOperation[provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.239011       1 hostpath-provisioner.go:86] creating backing directory: /var/snap/microk8s/common/default-storage/container-registry-registry-claim-pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca
I0322 12:31:31.239102       1 controller.go:627] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" created
I0322 12:31:31.244798       1 controller.go:644] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" saved
I0322 12:31:31.244813       1 controller.go:680] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" provisioned for claim "container-registry/registry-claim"
I0322 12:31:33.243345       1 leaderelection.go:196] stopped trying to renew lease to provision for pvc container-registry/registry-claim, task succeeded


$ kubectl get sc
NAME                PROVISIONER            AGE
microk8s-hostpath   microk8s.io/hostpath   169m


$ kubectl get sc -o yaml
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"microk8s-hostpath"},"provisioner":"microk8s.io/hostpath"}
    creationTimestamp: "2021-03-22T12:31:25Z"
    name: microk8s-hostpath
    resourceVersion: "2845"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/microk8s-hostpath
    uid: e94b5653-e261-4e1f-b646-e272e0c8c493
  provisioner: microk8s.io/hostpath
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

MicroK8s inspect output:

$ microk8s.inspect 
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-flanneld is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Service snap.microk8s.daemon-etcd is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster

WARNING:  Docker is installed. 
Add the following lines to /etc/docker/daemon.json: 
{
    "insecure-registries" : ["localhost:32000"] 
}
and then restart docker with: sudo systemctl restart docker
Building the report tarball
  Report tarball is at /var/snap/microk8s/1671/inspection-report-20210322_143034.tar.gz
AVarf
  • What is the output of `kubectl get sc`? – P.... Mar 22 '21 at 15:19
  • I updated my question and added the output at the end of the "Cluster info" section. – AVarf Mar 22 '21 at 15:23
  • Probably, in your `pvc` the storage class used is `storageClassName: mk8s-sc`, but `kubectl get sc` only shows `microk8s-hostpath`. Why the mismatch? – P.... Mar 22 '21 at 15:44
  • Thanks for noticing the mismatch. I changed it, deleted everything, and relaunched, but the PVC is still Pending with no error or new log in `hostpath-provisioner`. The reason for the mismatch: at first I created my own `sc`, but later I found out that when I enable the `storage` addon, MicroK8s creates one for me. – AVarf Mar 22 '21 at 15:58
  • Did you check https://stackoverflow.com/a/60213860/6309601 and https://stackoverflow.com/a/55982035/6309601? – P.... Mar 22 '21 at 17:54
  • The storage addon was enabled from the beginning. The other link is about a PV, which, according to the MicroK8s docs, we don't need to create; the hostpath-provisioner creates it dynamically when we create a PVC. – AVarf Mar 23 '21 at 08:22

1 Answer


I found the problem. Since the hostpath-provisioner takes care of creating the PV, we should not set `volumeName` in the PVC YAML. Setting `spec.volumeName` pre-binds the claim to one specific PV, and the dynamic provisioner will not provision for such a claim; because no PV named `registry-pvc` existed, the PVC stayed Pending. When I removed that field, the provisioner created a PV, bound my PVC to it, and my pod started.

Now my PVC is:

apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: wws-registry-claim
  spec:
    # volumeName: registry-pvc
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: microk8s-hostpath
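If for some reason `volumeName` had to stay, the PV it points to would have to exist beforehand, i.e. be created manually instead of by the provisioner. A minimal sketch of such a hostPath PV, assuming an illustrative path and the same 1Gi size (MicroK8s does not create this for you):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pvc
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: microk8s-hostpath
  hostPath:
    path: /var/snap/microk8s/common/default-storage/registry-pvc  # illustrative path

With dynamic provisioning that extra object is unnecessary, which is why dropping `volumeName` is the simpler fix.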
AVarf