Ceph – distribute data evenly in a nearfull pool

Sometimes when a pool or an OSD in Ceph hits the nearfull threshold, rebalancing tends to stall and you end up with a cluster stuck in a HEALTH_WARN state. The following couple of commands help move data off of the nearfull OSDs and distribute it more evenly across the rest of the cluster.
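
Before touching anything, it is worth confirming which OSDs are actually close to the nearfull ratio. The standard health and utilisation commands are enough for that, for example:

# show why the cluster is in HEALTH_WARN, including any nearfull OSDs
ceph health detail

# per-OSD utilisation, making the most-full OSDs easy to spot
ceph osd df tree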

The following command fetches the current OSD map, along with its metadata, and writes it to a file called map_latest:

ceph osd getmap -o map_latest
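
To sanity-check the dump, osdmaptool can print the map back in human-readable form, for example:

# print the downloaded map (epoch, pools, OSD states) to verify the dump worked
osdmaptool map_latest --print | less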

The following command calculates OSD placement optimizations (pg-upmap entries) and writes them to a results.txt file. Here --upmap-pool restricts the optimization to YOUR_POOL_NAME, --upmap-max caps the number of changes at 20, --upmap-deviation allows each OSD to sit up to 3 PGs away from the mean before it is touched, and --upmap-active keeps iterating the way the active balancer module would:

osdmaptool map_latest --upmap results.txt --upmap-pool YOUR_POOL_NAME --upmap-max 20 --upmap-deviation 3 --upmap-active
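
The generated results.txt is simply a list of ceph CLI commands, typically one ceph osd pg-upmap-items line per placement group being remapped. Purely as an illustration (the PG and OSD IDs below are made up), its contents look something like:

# each line pins a PG's mapping, moving it from one OSD to another
ceph osd pg-upmap-items 4.1f 12 7
ceph osd pg-upmap-items 4.3a 3 15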

The following command reads in and applies the optimizations to the OSDs in YOUR_POOL_NAME:

source results.txt
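
Once those commands have run, the remapped PGs start backfilling onto their new OSDs and the nearfull warning should clear as the data moves. Progress can be watched with, for example:

# overall cluster state and recovery/backfill progress
ceph -s

# re-check per-OSD utilisation once backfill settles
ceph osd df tree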