
Unfortunately, I found that the ceph CLI does not allow decreasing the pg_num value of a specific pool.

ceph osd pool set .rgw.root pg_num 32

The following error is returned:
Error EEXIST: specified pg_num 32 <= current 128

The placement-groups documentation tells me what pg_num is and how to choose a good value for it, but there is hardly any tutorial on how to reduce pg_num without reinstalling Ceph or deleting the pool first, apart from ceph-reduce-the-pg-number-on-a-pool.

The existing SO thread ceph-too-many-pgs-per-osd shows how to decide the best value (a quick check is sketched below). If I have already run into this issue, how can I recover from the mess?
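
For reference, a quick way to check whether a cluster has hit this problem (a rough sketch; the ~100 PGs-per-OSD target and the power-of-two rounding are just the usual rule of thumb from the Ceph documentation, and the numbers below are made up for illustration):

# Show placement groups per OSD (PGS column) and any "too many PGs per OSD" health warning
ceph osd df
ceph -s

# Rule of thumb: total PGs across all pools ≈ (number of OSDs * 100) / replica size,
# rounded to the nearest power of two, e.g. 9 OSDs with size 3 -> 900 / 3 = 300 -> 256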

If reducing pg_num is not easy, what is the story behind that? Why doesn't Ceph expose an interface to reduce it?

Eugene

1 Answer


The Nautilus release allows pg_num changes without restrictions (and adds pg_autoscale).
If you want to increase or reduce pg_num/pgp_num values without having to create, copy and rename pools (as suggested in your link), the best option is to upgrade to Nautilus.
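
As a rough sketch of what that looks like on Nautilus (the pool name is taken from the question; pg_autoscaler and pg_autoscale_mode are Nautilus features, and whether you want the autoscaler enabled at all is a judgment call for your cluster):

# On Nautilus, decreasing pg_num is accepted and PGs are merged gradually in the background
ceph osd pool set .rgw.root pg_num 32

# Optionally let the mgr pick pg_num for you
ceph mgr module enable pg_autoscaler
ceph osd pool set .rgw.root pg_autoscale_mode on
ceph osd pool autoscale-status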

dodger
  • Checked the update just now; it looks like a significant change. Thanks for this answer. – Eugene Aug 21 '19 at 05:53