
Below is my Kubernetes file, and I need to do two things:

  1. mount a folder containing a file
  2. mount a file containing a startup script

I have both files in the /tmp/zoo folder on my local machine, but the files from my zoo folder never appear in /bitnami/zookeeper inside the pod.

Below are the updated Service, Deployment, PVC, and PV.

kubernetes.yaml

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.service.type: nodeport
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
    name: zookeeper
  spec:
    ports:
    - name: "2181"
      port: 2181
      targetPort: 2181
    selector:
      io.kompose.service: zookeeper
    type: NodePort
  status:
    loadBalancer: {}
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      kompose.service.type: nodeport
    creationTimestamp: null
    name: zookeeper
  spec:
    replicas: 1
    selector:
      matchLabels:
        io.kompose.service: zookeeper
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: zookeeper
      spec:
        containers:
        - image: bitnami/zookeeper:3
          name: zookeeper
          ports:
          - containerPort: 2181
          env:
          - name: ALLOW_ANONYMOUS_LOGIN
            value: "yes"
          resources: {}
          volumeMounts:
          - mountPath: /bitnami/zoo
            name: bitnamidockerzookeeper-zookeeper-data
        restartPolicy: Always
        volumes:
        - name: bitnamidockerzookeeper-zookeeper-data
          #hostPath:
            #path: /tmp/tmp1
          persistentVolumeClaim:
            claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}

- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    creationTimestamp: null
    labels:
      io.kompose.service: bitnamidockerzookeeper-zookeeper-data
      type: local
    name: bitnamidockerzookeeper-zookeeper-data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
  status: {}
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: foo
  spec:
    storageClassName: manual
    claimRef:
      name: bitnamidockerzookeeper-zookeeper-data
    capacity:
      storage: 100Mi
    accessModes:
      - ReadWriteMany
    hostPath:
      path: /tmp/tmp1
  status: {}
kind: List
metadata: {}

  • I'd love to reproduce this on my local cluster, but it looks like the beginning of the YAML file is truncated (it shouldn't start with `metadata` like this), can you check if you have the whole file there? Thanks! – jpetazzo Oct 28 '21 at 21:01
  • done, updated it – Madhuri Devidi Oct 28 '21 at 21:31
  • Can you double check? It doesn't look like this is the correct YAML; you have `kind: Service` but then what follows is a Pod definition, and the indentation isn't consistent. I have the feeling that something went wrong in the copy paste, maybe? – jpetazzo Oct 29 '21 at 06:37
  • Sorry, I was using vi and couldn't copy properly. I have Service, Deployment, PVC and PV, and all of them have names. – Madhuri Devidi Oct 29 '21 at 17:43

3 Answers


A Service cannot be assigned a volume. On line 4 of your YAML you specify "Service" where it should be "Pod", and every resource in Kubernetes must have a name, which you can add under metadata. That should fix the simple problem.

apiVersion: v1
items:
- apiVersion: v1
  kind: Pod  #POD
  metadata:
    name: my-pod  #A RESOURCE NEEDS A NAME
    creationTimestamp: null
    labels:
      io.kompose.service: zookeeper
  spec:
    containers:
    - image: bitnami/zookeeper:3
      name: zookeeper
      ports:
      - containerPort: 2181
      env:
      - name: ALLOW_ANONYMOUS_LOGIN
        value: "yes"
      resources: {}
      volumeMounts:
      - mountPath: /bitnami/zookeeper
        name: bitnamidockerzookeeper-zookeeper-data
    restartPolicy: Always
    volumes:
    - name: bitnamidockerzookeeper-zookeeper-data
      persistentVolumeClaim:
        claimName: bitnamidockerzookeeper-zookeeper-data
  status: {}

Now, I don't know what you're using, but hostPath works exclusively on a local cluster like Minikube. In production things change drastically. If everything is local, you need to have the directory /tmp/zoo on the node; NOTE: not on your local PC, but inside the node. For example, if you use Minikube, you run minikube ssh to enter the node and copy /tmp/zoo there. An excellent guide to this is given in the official Kubernetes documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
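
For example, with a KinD cluster the node is just a Docker container, so you can copy the folder straight into it (a rough sketch; it assumes the default node container name kind-control-plane, check docker ps on your machine):

docker ps                                        # find the node container, e.g. kind-control-plane
docker cp /tmp/zoo kind-control-plane:/tmp/zoo   # copy the folder from your laptop into the node's filesystem

On Minikube you would instead run minikube ssh and create or copy the files from inside that shell.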

  • `hostPath` volumes also work on clusters with multiple nodes, but the volumes are then tied to a specific node. That's how e.g. OpenEBS local PV or the Rancher local path provisioner work. – jpetazzo Oct 29 '21 at 06:34
  • @alonso I am using Kind and not minikube. So how do I copy? Can I ssh? – Madhuri Devidi Oct 29 '21 at 17:45
  • docker ps showed me a process running kind-control-plane and I did "docker exec -it 97e4b5fe515b /bin/sh". I was then able to see all the folders I tried. Thanks Alonso Valdivia – Madhuri Devidi Oct 29 '21 at 17:54

A little confusing, but if you want to use a file path on the node as a volume for the pod, you should do it like this:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory

but you need to make sure your pod will be scheduled to the same node that has that file path.
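
For instance, one way to do that (a sketch; my-node is a hypothetical name, use the real one from kubectl get nodes) is to pin the Pod to that node with a nodeSelector on the hostname label:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  # schedule the Pod onto the node that actually has /data
  nodeSelector:
    kubernetes.io/hostname: my-node
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
      type: Directory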

vincent pli

There are a few potential issues in your YAML.

First, the accessModes of the PersistentVolume don't match those of the PersistentVolumeClaim. One way to fix that is to list both ReadWriteMany and ReadWriteOnce in the accessModes of the PersistentVolume.

Then, the PersistentVolume doesn't specify a storageClassName. As a result, if you have a StorageClass configured to be the default StorageClass on your cluster (you can see that with kubectl get sc), it will automatically provision a PersistentVolume dynamically instead of using the PersistentVolume that you declared. So you need to specify a storageClassName. The StorageClass doesn't have to exist for real (since we're using static provisioning instead of dynamic anyway).
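
For example, you can check whether a default StorageClass is set, and then verify that the claim actually bound to the PersistentVolume you declared rather than to a dynamically provisioned one (commands only, output omitted):

kubectl get sc        # the default class, if any, is marked "(default)"
kubectl get pv,pvc    # the PVC should show STATUS "Bound" and reference your own PV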

Next, the claimRef in PersistentVolume needs to mention the Namespace of the PersistentVolumeClaim. As a reminder: PersistentVolumes are cluster resources, so they don't have a Namespace; but PersistentVolumeClaims belong to the same Namespace as the Pod that mounts them.

Another thing is that the path used by Zookeeper data in the bitnami image is /bitnami/zookeeper, not /bitnami/zoo.

You will also need to initialize permissions in that volume, because by default, only root will have write access, and Zookeeper runs as non-root here, and won't have write access to the data subdirectory.

Here is an updated YAML that addresses all these points. I also rewrote the YAML to use the YAML multi-document syntax (resources separated by ---) instead of the kind: List syntax, and I removed a lot of fields that weren't used (like the empty status: fields and the labels that weren't strictly necessary). It works on my KinD cluster, I hope it will also work in your situation.

If your cluster has only one node, this will work fine, but if you have multiple nodes, you might need to tweak things a little bit to make sure that the volume is bound to a specific node (I added a commented out nodeAffinity section in the YAML, but you might also have to change the bind mode - I only have a one-node cluster to test it out right now; but the Kubernetes documentation and blog have abundant details on this; https://stackoverflow.com/a/69517576/580281 also has details about this binding mode thing).

One last thing: in this scenario, I think it might make more sense to use a StatefulSet. It would not make a huge difference but would more clearly indicate intent (Zookeeper is a stateful service) and in the general case (beyond local hostPath volumes) it would avoid having two Zookeeper Pods accessing the volume simultaneously.

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - name: "2181"
    port: 2181
    targetPort: 2181
  selector:
    io.kompose.service: zookeeper
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: zookeeper
  template:
    metadata:
      labels:
        io.kompose.service: zookeeper
    spec:
      initContainers:
      - image: alpine
        name: chmod
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: bitnamidockerzookeeper-zookeeper-data
        command: [ sh, -c, "chmod 777 /bitnami/zookeeper" ]
      containers:
      - image: bitnami/zookeeper:3
        name: zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: bitnamidockerzookeeper-zookeeper-data
      volumes:
      - name: bitnamidockerzookeeper-zookeeper-data
        persistentVolumeClaim:
          claimName: bitnamidockerzookeeper-zookeeper-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bitnamidockerzookeeper-zookeeper-data
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tmp-tmp1
spec:
  storageClassName: manual
  claimRef:
    name: bitnamidockerzookeeper-zookeeper-data
    namespace: default
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  hostPath:
    path: /tmp/tmp1
  #nodeAffinity:
  #  required:
  #    nodeSelectorTerms:
  #      - matchExpressions:
  #        - key: kubernetes.io/hostname
  #          operator: In
  #          values:
  #          - kind-control-plane
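
For reference, here is a minimal, untested sketch of the StatefulSet variant mentioned above; the claim template name data is illustrative, the chmod init container from the Deployment above would still apply, and the PersistentVolume's claimRef would then need to point at the generated claim name (data-zookeeper-0) instead:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper            # normally a headless Service; the Service above is reused here for brevity
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: zookeeper
  template:
    metadata:
      labels:
        io.kompose.service: zookeeper
    spec:
      containers:
      - image: bitnami/zookeeper:3
        name: zookeeper
        ports:
        - containerPort: 2181
        env:
        - name: ALLOW_ANONYMOUS_LOGIN
          value: "yes"
        volumeMounts:
        - mountPath: /bitnami/zookeeper
          name: data
  volumeClaimTemplates:             # generates one PVC per replica (data-zookeeper-0, ...)
  - metadata:
      name: data
    spec:
      storageClassName: manual
      accessModes: [ ReadWriteOnce ]
      resources:
        requests:
          storage: 100Mi
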
jpetazzo
  • This worked flawlessly. Thanks a lot for your help! – Madhuri Devidi Nov 01 '21 at 19:25
  • Also this helped in mounting my local laptop files onto the kind control plane cluster's node - https://stackoverflow.com/questions/62694361/how-to-reference-a-local-volume-in-kind-kubernetes-in-docker – Madhuri Devidi Nov 01 '21 at 19:26