Questions tagged [librbd]

11 questions
3
votes
2 answers

c++ undefined reference to a class member function

I encountered the following linking issue when trying to use librbd. Here is my code snippet. main.cc #include <…> #include <…> #include <…> int main(){ // Initialize and open an rbd image …
toliu
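
(A hedged note: "undefined reference" to librbd symbols at link time usually means the librbd/librados libraries were never handed to the linker. A minimal sketch assuming a standard Linux development install; package names vary by distribution.)

# Install the development headers/libraries (names are distro-dependent)
sudo apt-get install librados-dev librbd-dev
# Link explicitly against librbd and librados
g++ -std=c++17 main.cc -o rbd_demo -lrbd -lrados
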
2
votes
1 answer

Understanding snapshots in Ceph

Our team is currently deciding whether to implement snapshotting on CephFS directories, and is trying to understand the effects and performance impact of snapshots on the cluster. Our main concern is "How will the cluster be affected…
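
(For context, a minimal sketch of how CephFS directory snapshots are taken: a snapshot is simply a directory created inside the hidden .snap directory. The filesystem name and mount path below are assumptions.)

# Allow new snapshots on the filesystem (disabled by default on older releases)
ceph fs set cephfs allow_new_snaps true
# Create, list and remove a snapshot of one directory
mkdir /mnt/cephfs/mydata/.snap/before-change
ls /mnt/cephfs/mydata/.snap
rmdir /mnt/cephfs/mydata/.snap/before-change
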
2
votes
2 answers

How to set an IO limit on an RBD image (Ceph QoS settings)

From the Ceph docs: librbd supports limiting per-image IO, controlled by the following settings. Running the commands from the docs prints "unknown options qos ...". I haven't found anything on the web so far. Can anyone help me, please?
aiqency
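
(A hedged sketch of how the per-image QoS settings from the docs are usually applied. This needs a reasonably recent release, roughly Nautilus or later, which is also the usual reason for the "unknown option" error on older clients. Pool and image names are placeholders.)

# Throttle a single image to ~1000 IOPS and ~50 MB/s at the librbd level
rbd config image set mypool/myimage rbd_qos_iops_limit 1000
rbd config image set mypool/myimage rbd_qos_bps_limit 52428800
# Verify the effective per-image settings
rbd config image list mypool/myimage | grep qos
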
1
vote
0 answers

ceph rbd import hangs

My Ceph cluster is a 48 ms ping away from the Ceph client. An rbd import of an 8 GB image on the client hangs at some point during the copy and never progresses. Ctrl-C out of rbd import leaves the image locked in the cluster. When I scp the image to the…
B Abali
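
(Not a fix for the hang itself, but a hedged sketch of the commands involved, including clearing the stale lock left behind after an interrupted import. Pool, image and lock identifiers are placeholders.)

# Plain import of a local raw image into the cluster
rbd import ./disk.raw mypool/disk
# After Ctrl-C, list and remove the stale lock so the image can be reused
rbd lock list mypool/disk
rbd lock remove mypool/disk "<lock-id>" "<locker>"
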
0
votes
0 answers

How rbd clone v2 works

I am currently facing a problem: I am trying to mirror a cloned image created with clone v2. Below is the information for the image I cloned. [root@ossrccephm1 ~]# rbd info data/clone1 rbd image 'clone1': size 1 GiB in 256 objects …
gyunn35
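
(A brief, hedged sketch of the clone v2 workflow referenced here, assuming a Mimic-or-later cluster; pool and image names are placeholders. The key difference from v1 is that the parent snapshot no longer needs to be protected.)

# Clone v2 is selected automatically once clients are required to be >= mimic
ceph osd set-require-min-compat-client mimic
# Snapshot and clone without "rbd snap protect"
rbd snap create data/parent@snap1
rbd clone data/parent@snap1 data/clone1
rbd children data/parent@snap1
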
0
votes
0 answers

Too many arguments when creating rbd

I tried to create an RBD image using the command below: # rbd create kube/ceph-image –size 4096 But the result is "rbd: too many arguments". Does anyone know how to solve this issue?
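
(Most likely cause, hedged: the pasted command uses an en dash before "size" rather than the double hyphen rbd expects, so "4096" is parsed as an extra argument. The corrected form would be:)

# Note the double hyphen before "size"
rbd create kube/ceph-image --size 4096
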
0
votes
0 answers

ceph cache-flush-evict-all gets "failed to evict /rbd_header.xxx: (16) Device or resource busy" before removing cache tier

The situation is similar to [ceph-users] Cannot remove cache tier. The same as: the total size and the number of stored objects in the rbd-cache pool oscillate around 5 GB and 3K, respectively, while "rados -p rbd-cache cache-flush-evict-all" is…
Victor Lee
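
(For context, a hedged sketch of the usual writeback cache-tier removal sequence; the EBUSY on rbd_header objects typically comes from clients still watching open or mapped images. The base pool name "rbd" is an assumption, since only "rbd-cache" appears in the question.)

# Stop caching new writes, then flush and evict what is left
ceph osd tier cache-mode rbd-cache proxy   # older docs use: forward --yes-i-really-mean-it
rados -p rbd-cache cache-flush-evict-all
# See which client is keeping a header object busy
rados -p rbd-cache listwatchers rbd_header.<image-id>
# Once the cache pool is empty, detach it from the base pool
ceph osd tier remove-overlay rbd
ceph osd tier remove rbd rbd-cache
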
0
votes
1 answer

rook-ceph fio benchmark with ioengine=rbd

I have deployed the storageclass.yaml found under the rook/cluster/examples/kubernetes/ceph/csi/rbd/ directory and created a PVC. I need to run an fio benchmark with ioengine=rbd. In my fio config file I need to set the following: clientname=…
user3304297
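
(A hedged sketch of a fio job using the rbd ioengine, which talks to librbd directly and therefore needs the Ceph pool, image and client names rather than a filename. The names below are placeholders, "replicapool" merely follows the rook examples, and the image must already exist.)

# Create the target image first (pool name is an assumption)
rbd create replicapool/fio-test --size 10G

cat > rbd-bench.fio <<'EOF'
[global]
ioengine=rbd
clientname=admin
pool=replicapool
rbdname=fio-test
rw=randwrite
bs=4k
iodepth=32
time_based
runtime=60
[rbd-randwrite]
EOF

fio rbd-bench.fio
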
0
votes
0 answers

Why does RBD snap id start from 4?

I'm a newbie Ceph developer, recently reading the snapshot code. From pg_pool_t::add_unmanaged_snap, it's obvious that the first RBD snapshot id should start from 2, but in reality it starts from 4. I wonder whether there are some mechanisms in…
ghost
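
(Not an explanation of the gap, but a hedged way to watch the ids actually being handed out; they come from the pool-wide snapshot sequence rather than a per-image counter. The image name is a placeholder.)

rbd snap create rbd/img1@first
rbd snap ls rbd/img1 --format json --pretty-format   # shows the numeric snap id
ceph osd dump | grep '^pool'                         # pool lines carry snap_seq / snap_epoch
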
0
votes
1 answer

How to accelerate qemu-img convert of RBD volumes between different Ceph clusters

Is there an elegant way to copy an RBD volume to another Ceph cluster? I measured the convert time with qemu-img version 2.5 and version 6.0 by copying a volume (2.5 T capacity, only 18 G used) to another Ceph cluster. qemu-img [2.5…
Victor Lee
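
(A hedged sketch of the direct cluster-to-cluster form, assuming both clusters' conf and keyring files are readable on the converting host; pool, image and file names are placeholders. The -m/-W flags enable parallel, out-of-order copying in newer qemu-img and are the usual way to speed this up; qemu-img 2.5 predates them.)

# Destination image must exist because of -n (size is a placeholder)
rbd -c /etc/ceph/dst-cluster.conf create volumes/dst-vol --size 2560G

qemu-img convert -p -n -f raw -O raw -m 16 -W \
  'rbd:volumes/src-vol:id=admin:conf=/etc/ceph/src-cluster.conf' \
  'rbd:volumes/dst-vol:id=admin:conf=/etc/ceph/dst-cluster.conf'
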
0
votes
1 answer

Fio results show steadily increasing IOPS, not what I expected

I'm trying to test my RBD storage with random read, random write, and mixed randrw, but the output does not look correct: it is a steadily growing number. What is wrong with my steps? This is the fio file that I ran: ; fio-rand-write.job for…
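
(One hedged way to check whether the growing number is just fio's cumulative counter is to log averaged IOPS per interval instead of watching the live output. The rbd-engine parameters and names below are placeholders; ramp_time also discards the warm-up phase that can look like steadily rising IOPS while caches fill.)

# Log one averaged IOPS sample per second to a file
fio --name=rand-write --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=fio-test \
    --rw=randwrite --bs=4k --iodepth=32 \
    --time_based --runtime=120 --ramp_time=10 \
    --write_iops_log=rbd-test --log_avg_msec=1000
# rbd-test_iops.1.log then holds the per-second IOPS samples
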