I am experimenting with OpenEBS as a storage provider for our Kubernetes cluster. OpenEBS is installed via Helm on a cluster of 5 nodes created by Rancher. It seems to work, but I don't really understand how the volume itself is provisioned.

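For reference, the install was roughly the following (a sketch using the standard OpenEBS chart repository and release name from the docs; our exact Helm values may differ):

$ # add the OpenEBS chart repository and install into its own namespace
$ helm repo add openebs https://openebs.github.io/charts
$ helm repo update
$ helm install openebs openebs/openebs --namespace openebs --create-namespace
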
Each node is created with 2 disks, with logical volumes spanning over the disks. For example:

$ lsblk
NAME                                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                       8:0    0   20G  0 disk 
├─sda1                                    8:1    0    1G  0 part /boot
└─sda2                                    8:2    0   19G  0 part 
  ├─centos_intern--rancher--node05-root 253:0    0   50G  0 lvm  /
  └─centos_intern--rancher--node05-swap 253:1    0  7,9G  0 lvm  [SWAP]
sdb                                       8:16   0   80G  0 disk 
└─sdb1                                    8:17   0   80G  0 part 
  ├─centos_intern--rancher--node05-root 253:0    0   50G  0 lvm  /
  └─centos_intern--rancher--node05-home 253:2    0 41,1G  0 lvm  /home

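Since the root logical volume spans both disks, the LVM layout can be confirmed on the node itself with standard LVM commands:

$ sudo pvs                # physical volumes backing the group (/dev/sda2 and /dev/sdb1)
$ sudo vgs                # volume groups (presumably centos_intern-rancher-node05)
$ sudo lvs -o +devices    # maps each logical volume to its underlying devices
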
The node device manager (NDM) is configured with a filter excluding loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md. So far, so good.

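The active filter can be inspected in the NDM ConfigMap (assuming the default ConfigMap name created by the Helm chart; it may differ per chart version):

$ kubectl get configmap openebs-ndm-config -n openebs -o yaml

which contains an entry along these lines (paraphrased to match our filter, not verbatim chart output):

filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    exclude: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md"
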
When we list the block device resources created by NDM, it lists 2 resources for this node (other nodes are omitted):

> kubectl get blockdevice --all-namespaces   
NAMESPACE   NAME                                           NODENAME                SIZE          CLAIMSTATE   STATUS   AGE
openebs     blockdevice-d7d2b90b000a8b2268faf07c9e0f7cc5   intern-rancher-node05   85899345920   Unclaimed    Active   18h
openebs     sparse-e4ea6423e7d139104049e67566a2b634        intern-rancher-node05   10737418240   Unclaimed    Active   18h

Exploring the created blockdevice, we see that it uses /dev/sdb as its disk:

> kubectl describe blockdevice blockdevice-d7d2b90b000a8b2268faf07c9e0f7cc5 -n openebs
Name:         blockdevice-d7d2b90b000a8b2268faf07c9e0f7cc5
  ...
  Node Attributes:
    Node Name:  intern-rancher-node05
  Partitioned:  No
  Path:         /dev/sdb
Status:
  Claim State:  Unclaimed
  State:        Active
Events:         <none>

This is where my understanding stops. Why did NDM pick /dev/sdb and not /dev/sda? What is the difference between the disks, such that one is used and the other is not? Shouldn't /dev/sdb be skipped because it is in use by the logical volumes? And if I create a persistent volume, does that limit the size of my logical volumes (/home)?

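One way to dig further is to dump the full resource, which records what NDM detected about partitions, filesystems and mount points (the exact fields depend on the NDM version):

$ kubectl get blockdevice blockdevice-d7d2b90b000a8b2268faf07c9e0f7cc5 \
    -n openebs -o yaml
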
Also, if I create a persistent volume claim (using jiva), a persistent volume is created under /var/openebs, for example /var/openebs/pvc-cdc4c5a2-89e1-41ed-b9e7-c672f27a8bed. Does this mean it doesn't use the disk at all, but stores everything in the filesystem on the logical volume?
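
For completeness, this is the kind of claim I mean (a minimal sketch, assuming the default jiva StorageClass openebs-jiva-default created by the Helm chart):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-jiva-claim
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF

After the claim binds, the space consumed on the node can be checked directly (using the pvc-... directory from above):

$ sudo du -sh /var/openebs/pvc-cdc4c5a2-89e1-41ed-b9e7-c672f27a8bed
$ df -h /var/openebs    # shows which filesystem (here the root LV) holds the data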
