
I'm trying to run Postgres using KubeDB on minikube, mounting my data from a local directory on my Mac. When the pod runs I don't get the expected behaviour; two things happen: one, the mount obviously isn't there, and two, I see the error pod has unbound immediate PersistentVolumeClaims

First, here are my yaml files:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: adminvol
  namespace: demo
  labels:
    release: development
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /Users/myusername/local_docker_poc/admin/lib/postgresql/data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: demo
  name: adminpvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      release: development
---
apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: quick-postgres
  namespace: demo
spec:
  version: "10.2-v2"
  storageType: Durable
  storage:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
  volumeMounts:
    - mountPath: /busy
      name: naim
      persistentVolumeClaim:
        claimName: adminpvc
  terminationPolicy: WipeOut

According to this, which is reflected in the answer below, I've removed the storageClass from all my yaml files.

The describe pod looks like this:

Name:               quick-postgres-0
Namespace:          demo
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.0.2.15
Start Time:         Wed, 25 Sep 2019 22:18:44 +0300
Labels:             controller-revision-hash=quick-postgres-5d5bcc4698
                    kubedb.com/kind=Postgres
                    kubedb.com/name=quick-postgres
                    kubedb.com/role=primary
                    statefulset.kubernetes.io/pod-name=quick-postgres-0
Annotations:        <none>
Status:             Running
IP:                 172.17.0.7
Controlled By:      StatefulSet/quick-postgres
Containers:
  postgres:
    Container ID:  docker://6bd0946f8197ddf1faf7b52ad0da36810cceff4abb53447679649f1d0dba3c5c
    Image:         kubedb/postgres:10.2-v3
    Image ID:      docker-pullable://kubedb/postgres@sha256:9656942b2322a88d4117f5bfda26ee34d795cd631285d307b55f101c2f2cb8c8
    Port:          5432/TCP
    Host Port:     0/TCP
    Args:
      leader_election
      --enable-analytics=true
      --logtostderr=true
      --alsologtostderr=false
      --v=3
      --stderrthreshold=0
    State:          Running
      Started:      Wed, 25 Sep 2019 22:18:45 +0300
    Ready:          True
    Restart Count:  0
    Environment:
      APPSCODE_ANALYTICS_CLIENT_ID:  90b12fedfef2068a5f608219d5e7904a
      NAMESPACE:                     demo (v1:metadata.namespace)
      PRIMARY_HOST:                  quick-postgres
      POSTGRES_USER:                 <set to the key 'POSTGRES_USER' in secret 'quick-postgres-auth'>      Optional: false
      POSTGRES_PASSWORD:             <set to the key 'POSTGRES_PASSWORD' in secret 'quick-postgres-auth'>  Optional: false
      STANDBY:                       warm
      STREAMING:                     asynchronous
      LEASE_DURATION:                15
      RENEW_DEADLINE:                10
      RETRY_PERIOD:                  2
    Mounts:
      /dev/shm from shared-memory (rw)
      /var/pv from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from quick-postgres-token-48rkd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-quick-postgres-0
    ReadOnly:   false
  shared-memory:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  quick-postgres-token-48rkd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  quick-postgres-token-48rkd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  39s   default-scheduler  pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         39s   default-scheduler  Successfully assigned demo/quick-postgres-0 to minikube
  Normal   Pulled            38s   kubelet, minikube  Container image "kubedb/postgres:10.2-v3" already present on machine
  Normal   Created           38s   kubelet, minikube  Created container
  Normal   Started           38s   kubelet, minikube  Started container

I followed the official manual on how to mount a PVC here. For debugging, I used the same PV and PVC to mount a simple busybox container, and it worked fine; I can see the mount with data in it:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: demo
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - mountPath: /busy
          name: adminpvc
  volumes:
    - name: adminpvc
      persistentVolumeClaim:
        claimName: adminpvc

The only difference between my own pod and the KubeDB one (which, to my understanding, has a StatefulSet behind it) is that I kept the storageClass in the PV and PVC! If I remove the storage class, I see the mount point inside the container, but it's empty and has no data.

Naim Salameh
  • I added a guess why your setup isn't working, but I have to admit that I can't test it currently and I haven't used minikube before. Let me know how it works out. – Florian Neumann Sep 26 '19 at 09:11

3 Answers

Remove the storageClass line from the PersistentVolume.

In minikube, try something like this; here is an example for Elasticsearch:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch
spec:
  capacity:
    storage: 400Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch/"

For more details you can also check this out: pod has unbound PersistentVolumeClaims

EDIT:

Check the available StorageClasses:

kubectl get storageclass

For the PV:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgres-pv

PVC file

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pvc
  labels:
    type: local
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  volumeName: postgres-pv
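
With the `manual` class above, the KubeDB Postgres resource also needs to request that class. Since `spec.storage` follows the usual PVC-spec shape (as in the question), a sketch reusing the names from the question and the PV above could look like this (untested with KubeDB, so treat it as a starting point):

```yaml
apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: quick-postgres
  namespace: demo
spec:
  version: "10.2-v2"
  storageType: Durable
  storage:
    storageClassName: manual   # must match the PV's storageClassName
    accessModes:
      - ReadWriteOnce          # the PV above is ReadWriteOnce, so the claim must match
    resources:
      requests:
        storage: 2Gi           # must not exceed the PV's 2Gi capacity
  terminationPolicy: WipeOut
```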
Harsh Manvar
    https://stackoverflow.com/users/5525824/harsh-manvar can you review my edit, I'm still stuck with this – Naim Salameh Sep 25 '19 at 22:06
  • @NaimSalameh can you specify where the edit you made is? – Harsh Manvar Sep 26 '19 at 03:29
    https://stackoverflow.com/users/5525824/harsh-manvar I rewrote the question with new files and debug path – Naim Salameh Sep 26 '19 at 05:38
  • @NaimSalameh okay great. – Harsh Manvar Sep 26 '19 at 05:40
  • @NaimSalameh i have updated answer can you please have a look. – Harsh Manvar Sep 26 '19 at 05:43
    stackoverflow.com/users/5525824/harsh-manvar thanks for the amazing prompt response, however when I provision the PV and PVC you provided, I'm having the same issue where a busybox pod is able to mount and see data, while the KubeDB pod is not mounting anything at all - did you try to run kubeDb? anything else I might be missing? again thanks for all the help. – Naim Salameh Sep 26 '19 at 05:58
  • @NaimSalameh It's my pleasure. I have never tried KubeDB locally; I'm not using minikube, I test on GKE. Can you please check the storage class one? – Harsh Manvar Sep 26 '19 at 06:01
0

You are using the custom Postgres resource from kubedb.com/v1alpha1.

It defines a custom way to handle storage. It seems like you must set the spec.storage.storageClassName key, since a

"PVC with no storageClassName is not quite the same and is treated differently by the cluster depending on whether the DefaultStorageClass admission plugin is turned on."

Now which StorageClass to choose?

Since you're using Minikube, I recommend sticking with Minikube's minikube-hostpath provisioner. You can check whether it's available:

$ kubectl get storageclass
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   2m36s

It supports dynamic provisioning and is set as the default StorageClass.

Try setting spec.storage.storageClassName: standard (the class name shown above, backed by the minikube-hostpath provisioner) and update your volumes accordingly.
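
A quick way to verify that the class is being picked up and that the operator-created claim actually binds (namespace and names taken from the question):

```shell
# "standard" should appear, marked (default), with provisioner k8s.io/minikube-hostpath
kubectl get storageclass

# After the Postgres resource is created, the claim the operator
# generates (data-quick-postgres-0) should show STATUS "Bound"
kubectl get pvc -n demo
kubectl get pv
```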

Florian Neumann
  • https://stackoverflow.com/users/432696/florian-breisch thanks! I think I'm still missing something: when I provision the PVC I get ```pending```, and the describe PVC shows ```persistentvolume-controller waiting for first consumer to be created before binding```. Of course I provisioned the storageClass per the example and copy-pasted your PV (the nodeAffinity is a must btw, even on minikube; it errors out if I remove it) – Naim Salameh Sep 26 '19 at 15:30
  • In the storageClass, the provisioner is ```kubernetes.io/no-provisioner```; should I change that to ```kubernetes.io/hostname```, or, vice versa, change it in the PV to ```kubernetes.io/no-provisioner```? (sorry, these topics are a bit advanced for me) – Naim Salameh Sep 26 '19 at 15:33
  • I overlooked that you're using a custom resource, and I can't tell much about the kubedb.com resources. I made a guess at what you could try; let me know if it works. – Florian Neumann Sep 27 '19 at 08:57
0

It is actually not possible to do what you're trying to do.

The whole point of KubeDB is to build database clusters easily, which means dedicated volumes per instance. KubeDB's operator creates volumes (PVCs) on demand and binds them to the pods it creates.

You're defining a static volume for a dynamic CRD, so it just can't work.

Plus, volumeMounts is not passed on to the StatefulSet by the operator (for the reason above).

You would have to write the StatefulSet yourself to achieve your scenario.
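
For reference, a minimal hand-written StatefulSet that mounts the question's existing adminpvc might look like the sketch below; the name, image, labels, and mount path are illustrative assumptions, not KubeDB's actual manifest:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-postgres
  namespace: demo
spec:
  serviceName: my-postgres      # a headless Service of this name is assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: my-postgres
  template:
    metadata:
      labels:
        app: my-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.2  # plain Postgres image, not the kubedb/postgres one
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: adminpvc # the statically bound claim from the question
```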

ZedTuX