
I have set up a 3-node Kubernetes cluster on 3 VPSes and installed Rook/Ceph.

when I run

kubectl exec -it rook-ceph-tools-78cdfd976c-6fdct -n rook-ceph -- bash
ceph status

I get the below result

osd: 0 osds: 0 up, 0 in

I tried

ceph device ls

and the result is

DEVICE  HOST:DEV  DAEMONS  LIFE EXPECTANCY

ceph osd status gives me no result

This is the yaml file that I used

https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml
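
For context, the storage section of that example manifest is roughly the following (paraphrased here, not copied verbatim). With these defaults, Rook only creates OSDs on disks or partitions that carry no filesystem:

storage:
  useAllNodes: true    # run an osd-prepare job on every node
  useAllDevices: true  # consume any raw (unformatted) device it finds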

When I use the below command

sudo kubectl -n rook-ceph logs rook-ceph-osd-prepare-node1-4xddh provision

results are

2021-05-10 05:45:09.440650 I | cephosd: skipping device "sda1" because it contains a filesystem "ext4"
2021-05-10 05:45:09.440653 I | cephosd: skipping device "sda2" because it contains a filesystem "ext4"
2021-05-10 05:45:09.475841 I | cephosd: configuring osd devices: {"Entries":{}}
2021-05-10 05:45:09.475875 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2021-05-10 05:45:09.476221 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list  --format json
2021-05-10 05:45:10.057411 D | cephosd: {}
2021-05-10 05:45:10.057469 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2021-05-10 05:45:10.057501 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2021-05-10 05:45:10.541968 D | cephosd: {}
2021-05-10 05:45:10.551033 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2021-05-10 05:45:10.551274 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "node1"
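
Those "skipping device" messages indicate that both partitions already carry an ext4 filesystem, so Rook will not touch them. As a quick check from a node, something like the following lists filesystem signatures (the device name sdb is only an example, not one from my setup):

lsblk -f          # devices with an empty FSTYPE column are raw
wipefs /dev/sdb   # without -a this only lists signatures; no output means a raw disk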

My disk layout:

root@node1: lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   400G  0 disk 
├─sda1   8:1    0   953M  0 part /boot
└─sda2   8:2    0 399.1G  0 part /

What am I doing wrong here?

jeril

2 Answers


I had a similar problem where the OSDs didn't appear in ceph status, after installing and tearing the cluster down multiple times for testing.

I fixed this issue by running

dd if=/dev/zero of=/dev/sdX bs=1M status=progress

to completely wipe any data from that raw block disk.
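
If gdisk and util-linux are available on the node, a similar cleanup can also be sketched like this (replace sdX with the real device; note that Rook also keeps cluster state under its dataDirHostPath, /var/lib/rook by default, which should be removed on every node after a teardown):

sgdisk --zap-all /dev/sdX    # clear GPT/MBR partition tables
wipefs -a /dev/sdX           # erase filesystem/RAID/LVM signatures
rm -rf /var/lib/rook         # leftover state from previous installs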

j3ffyang

I guess that for Rook/Ceph to work I should attach an additional raw volume to my nodes, since it does not allow using directories on the main (already formatted) disk.

It looks like this now

root@node1:~/marketing-automation-agency# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   400G  0 disk 
├─sda1   8:1    0   953M  0 part /boot
└─sda2   8:2    0 399.1G  0 part /
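
Assuming the extra disk shows up as /dev/sdb (the name is just an example), the storage section of cluster.yaml can either stay at useAllDevices: true, or point at the device explicitly. A rough sketch of the latter:

storage:
  useAllNodes: true
  useAllDevices: false
  devices:
    - name: "sdb"    # the raw disk attached to each node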
jeril
  • Yes indeed, you need a raw disk, call it sdb. Otherwise Ceph won't work at all. For example:
    NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sda             8:0    0   32G  0 disk
    |-sda1          8:1    0  600M  0 part /boot/efi
    |-sda2          8:2    0    1G  0 part /boot
    `-sda3          8:3    0 30.4G  0 part
      |-rhel-root 253:0    0 27.2G  0 lvm  /
      `-rhel-swap 253:1    0  3.2G  0 lvm
    sdb             8:16   0  100G  0 disk
    – Sanjeev Nov 04 '22 at 20:48
  • I created this issue myself; please take a look: https://github.com/rook/rook/issues/11153 – Sanjeev Nov 05 '22 at 10:24