Throttle impact of backfill and recovery¶
Whenever a problem arises, such as a broken disk or a pool in degraded state, Ceph tries to recover as quickly as possible. This, however, may impact the operation of the whole cluster and the users' experience.
You can trade recovery speed for cluster responsiveness by tuning a number of parameters. For example, from a Ceph admin or server node, you can execute:
$ ceph tell osd.* injectargs '--osd-max-backfills 1'
$ ceph tell osd.* injectargs '--osd-recovery-threads 1'
$ ceph tell osd.* injectargs '--osd-recovery-op-priority 1'
$ ceph tell osd.* injectargs '--osd-client-op-priority 63'
$ ceph tell osd.* injectargs '--osd-recovery-max-active 1'
Note that settings applied this way are lost when the OSD server reboots or when the ceph-osd service for a specific disk is restarted. To make settings permanent, you have to set them in the Ceph configuration file (or, on recent releases, in the centralized configuration database via ceph config set).
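As a sketch, a permanent version of the throttling settings above could look like the following ceph.conf fragment (option names match the runtime flags, with dashes replaced by underscores); on releases that support the centralized configuration database, the equivalent is ceph config set osd osd_max_backfills 1 and so on:

```ini
# ceph.conf -- persistent throttling of recovery/backfill (example values)
[osd]
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
osd_client_op_priority = 63
```

After editing the file, the affected ceph-osd daemons must be restarted for the new values to take effect.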
You can inspect the settings stored in the cluster's configuration database (the values daemons pick up at startup) with the command:
$ ceph config dump
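To check a single option rather than scanning the whole dump, you can query a running daemon directly over its admin socket, or ask the monitors for the stored value (osd.0 below is a placeholder; the daemon command must be run on the host where that OSD lives):

# Live value of a running OSD, via the admin socket
$ ceph daemon osd.0 config get osd_max_backfills

# Value stored in the centralized config database (recent releases)
$ ceph config get osd osd_max_backfills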
A quick way to throttle the speed of backfill and recovery is to combine some of the settings previously shown with the recovery sleep parameters, which introduce a small pause between consecutive recovery operations: the longer the sleep, the nicer you will be to your cluster's users. The default value for hdd is 0.1s, while for ssd the default value is 0.0s. For example:
# full-steam for hdd quick recovery
$ ceph --cluster ceph tell osd.* injectargs --osd-recovery-sleep-hdd=0.0 --osd-max-backfills=8 --osd-recovery-max-active=8 --osd-recovery-max-single-start=4
# kind-and-gentle recovery
$ ceph --cluster ceph tell osd.* injectargs --osd-recovery-sleep-hdd=0.1 --osd-max-backfills=2 --osd-recovery-max-active=1 --osd-recovery-max-single-start=1
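Whichever profile you choose, you can observe its effect on recovery throughput with the standard status commands:

# One-shot status; the "io" section reports the current recovery rate
$ ceph -s

# Stream the status and cluster log continuously
$ ceph -w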