
My question is about PersistentVolumeClaim. I have a one-node cluster set up on AWS EC2, and I am trying to create a storage class using kubernetes.io/host-path as the provisioner.

The YAML file content for the storage class is as follows:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: my-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path

The YAML file content for the PersistentVolumeClaim is as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: my-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

When I create the storage class and PVC on minikube, it works: the volume is created on minikube under /tmp/hostpath_volume/. But when I try the same thing on the one-node cluster on AWS EC2, I get the following error:

Failed to create provisioner: Provisioning in volume plugin "kubernetes.io/host-path" is disabled

I can see this error when I run kubectl describe pvc task-pv-claim. Also, since no PV is created, the claim stays in the Pending state.

I found that kube-controller-manager has --enable-dynamic-provisioning and --enable-hostpath-provisioner among its options, but I don't know how to use them.
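For what it's worth, those flags are set on the kube-controller-manager command line. A minimal sketch, assuming a static pod manifest at /etc/kubernetes/manifests/kube-controller-manager.yaml (the path, and which other flags are present, depend on how the cluster was bootstrapped):

```yaml
# Fragment of the kube-controller-manager static pod spec (illustrative only)
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    # ...existing flags kept as-is...
    - --enable-dynamic-provisioning=true
    - --enable-hostpath-provisioner=true   # intended for single-node dev/test clusters only
```

The kubelet restarts the controller manager automatically when a static pod manifest changes.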

Yudi
  • What version of kubernetes are you running? Is the hostpath provisioner running as a pod in your cluster? – jaxxstorm Apr 06 '17 at 10:19
  • I am using kubectl version 1.5.2. I didn't get your second question. I am newbie for this k8s and all. Did you mean 'kubernetes.io/host-path' running as a pod? – Yudi Apr 06 '17 at 10:30

1 Answer


It seems you might not be running the provisioner itself, so there's nothing to actually do the work of creating the hostpath directory.

Take a look here

The way this works is that the hostpath provisioner reads from the kubernetes API, and watches for you to create a storage class (which you've done) and a persistentvolumeclaim (also done).

When those exist, the provisioner (which is running as a pod) will go and execute a mkdir to create the hostpath directory.

Run the following:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/docs/demo/hostpath-provisioner/pod.yaml

And then recreate your StorageClass and PVC.
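Once the provisioner pod is running, the flow can be checked with kubectl. A sketch, assuming the manifest file names storage-class.yaml and pvc.yaml for the definitions from the question (the names are placeholders; the actual output depends on your cluster):

```shell
# Confirm the hostpath-provisioner pod is up and Running
kubectl get pods

# Recreate the StorageClass and the claim
kubectl apply -f storage-class.yaml
kubectl apply -f pvc.yaml

# The claim should move from Pending to Bound once a PV is provisioned
kubectl get pvc task-pv-claim
kubectl get pv
```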

caligari
jaxxstorm
  • Your solution seems totally perfect. Thanks for that, sir. But I am facing an issue while following it. When I executed the command you gave, the pod got deployed but with 'ErrImagePull' status. Then I tried pulling the image from https://hub.docker.com/r/jaxxstorm/hostpath-provisioner/tags/ and deploying again, but got "F0406 19:52:27.642967 7 hostpath-provisioner.go:125] Failed to create config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined" Can you please help me out here? – Yudi Apr 06 '17 at 19:53
  • by default, kubernetes should map a token so it can talk to the API, but in your case that isn't happening. It seems your cluster isn't functioning correctly. I would open a new question, and detail how exactly you bootstrapped your cluster – jaxxstorm Apr 06 '17 at 20:04
  • @Yudi we chatted on slack, can you accept the answer – jaxxstorm Apr 08 '17 at 07:06
  • It seems the example code from the answer was moved to https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/blob/master/examples/hostpath-provisioner/pod.yaml although the documentation was not kept. The documentation can still be found here: https://github.com/kubernetes-incubator/external-storage/tree/96450ffccd05e1fe85c52f5b60a61b350a774afd/docs/demo/hostpath-provisioner – Petr Gladkikh Jan 15 '19 at 14:42