
I deployed Rook Ceph on my local cluster and created all three storage classes. I tried object storage and I can push and pull files with s5cmd, but block and file storage don't seem to work.
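
For reference, the object-store check that does work looks roughly like this (the endpoint URL, bucket name, and credentials are placeholders for my setup):

# Credentials taken from the bucket's secret; values here are placeholders.
export AWS_ACCESS_KEY_ID=<access-key>
export AWS_SECRET_ACCESS_KEY=<secret-key>

# Push a file to the RGW endpoint and pull it back.
s5cmd --endpoint-url http://rook-ceph-rgw-my-store.rook-ceph.svc cp testfile s3://my-bucket/testfile
s5cmd --endpoint-url http://rook-ceph-rgw-my-store.rook-ceph.svc cp s3://my-bucket/testfile ./testfile.copy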

I'm using their examples from csi/cephfs and csi/rbd (pvc.yaml and pod.yaml).
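Roughly how I applied them (the paths are from the Rook examples directory and may differ per release):

kubectl create -f deploy/examples/csi/cephfs/pvc.yaml
kubectl create -f deploy/examples/csi/cephfs/pod.yaml
kubectl describe pvc cephfs-pvc

The error on the PVC is (only the CephFS case is shown here):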

Name:          cephfs-pvc
Namespace:     default
StorageClass:  rook-cephfs
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       csicephfs-demo-pod
Events:
  Type     Reason                Age                  From                                                                                                             Message
  ----     ------                ----                 ----                                                                                                             -------
  Normal   Provisioning          57s (x9 over 3m6s)   rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-f88787bbb-w48xj_8f0b3f76-8ef9-4d1c-a353-2d4dc1b7b2bf  External provisioner is provisioning volume for claim "default/cephfs-pvc"
  Warning  ProvisioningFailed    57s (x9 over 3m6s)   rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-f88787bbb-w48xj_8f0b3f76-8ef9-4d1c-a353-2d4dc1b7b2bf  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found
  Normal   ExternalProvisioning  14s (x13 over 3m6s)  persistentvolume-controller                                                                                      waiting for a volume to be created, either by external provisioner "rook-ceph.cephfs.csi.ceph.com" or manually created by system administrator

The error on the pod is:

Name:         csicephfs-demo-pod
Namespace:    default
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  web-server:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/lib/www/html from mypvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdzbb (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  mypvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cephfs-pvc
    ReadOnly:   false
  kube-api-access-mdzbb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m28s  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  3m13s  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.

Is there something I'm doing wrong, or is there more that I should do? The health looks good in ceph status:

bash-4.4$ ceph status
  cluster:
    id:     b8a962dc-caaf-4417-849a-777e49a3fc39
    health: HEALTH_WARN
            clock skew detected on mon.c
 
  services:
    mon: 3 daemons, quorum a,b,c (age 2h)
    mgr: a(active, since 10m), standbys: b
    osd: 3 osds: 3 up (since 2h), 3 in (since 2h)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    pools:   9 pools, 257 pgs
    objects: 422 objects, 698 KiB
    usage:   81 MiB used, 15 GiB / 15 GiB avail
    pgs:     257 active+clean
  • The health is not good; you should fix the clock skew between your Ceph nodes. To get more information about the rbd/cephfs problem, take a look inside the rook-ceph operator container. Did you add pools for rbd and cephfs? Can you update the post with "ceph health detail" and "ceph osd lspools"? – Hector Vido Feb 19 '23 at 00:30
  • I don't have that environment up and running anymore :-( – Astin Gengo Feb 23 '23 at 05:46
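
A sketch of the checks suggested in the comment above, assuming the default Rook manifests (the rook-ceph namespace, the rook-ceph-operator deployment, and the rook-ceph-tools toolbox):

# Look for provisioning errors reported by the operator.
kubectl -n rook-ceph logs deploy/rook-ceph-operator | grep -i error

# Run the requested Ceph commands from the toolbox pod.
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph health detail
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd lspools

# Confirm that a CephFilesystem and CephBlockPool actually exist to back the storage classes.
kubectl -n rook-ceph get cephfilesystem,cephblockpool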

0 Answers