Enable Amazon S3 interface for Ceph inside Proxmox

Enabling Ceph in Proxmox is a fantastic way to provide scalable and redundant storage for your VMs.

Enabling the Amazon S3 interface for Ceph opens your storage up to third-party applications that require a standard object storage interface.

One of the main concerns we had about doing this was that the hypervisors would be directly exposed via S3. Although S3 is a well-established protocol, I don't believe it is good practice to expose hypervisors in any way. With that in mind, I decided to create a VM, add the VM to the Proxmox cluster, and then expose S3 on the VM instead.

Without too much waffling, let's get started 🙂

Environment summary

Assume we have three nodes (node1, node2, node3) in a Proxmox cluster.
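
Before making any changes, it's worth confirming that the Ceph cluster is healthy and that all three nodes are in quorum:

ceph -s
pvecm status
run on node1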

Step 1

Start by creating the keyring on node1:

ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
run on node1

Step 2

Now generate the keys and add them to the keyring created above:

ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.node1 --gen-key
ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.node2 --gen-key
ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.node3 --gen-key
run on node1

Step 3

Next, add the capabilities to each of the keys:

ceph-authtool -n client.radosgw.node1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.node2 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.node3 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
run on node1
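
At this point you can sanity-check the keyring; each entry should show its generated key along with the mon and osd caps:

ceph-authtool -l /etc/ceph/ceph.client.radosgw.keyring
run on node1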

Step 4

Now add the keys to the cluster:

ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.node1 -i /etc/ceph/ceph.client.radosgw.keyring
ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.node2 -i /etc/ceph/ceph.client.radosgw.keyring
ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.node3 -i /etc/ceph/ceph.client.radosgw.keyring
run on node1
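
You can confirm the cluster accepted the keys by querying one of them back:

ceph auth get client.radosgw.node1
run on node1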

Step 5

Copy the keyring into the Proxmox cluster filesystem (pmxcfs):

cp /etc/ceph/ceph.client.radosgw.keyring /etc/pve/priv
run on node1
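
Because /etc/pve is replicated by pmxcfs, the keyring should now be visible on the other nodes too:

ls -l /etc/pve/priv/ceph.client.radosgw.keyring
run on node2 and node3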

Step 6

Edit /etc/ceph/ceph.conf and append the sections below. Be sure to change s3.yourdomain.com to the domain you want to use to access the S3 interface. On a typical Proxmox Ceph setup, /etc/ceph/ceph.conf is a symlink to /etc/pve/ceph.conf, so editing it once on node1 replicates the change to all nodes.

[client.radosgw.node1]
        host = node1
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = s3.yourdomain.com

[client.radosgw.node2]
        host = node2
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = s3.yourdomain.com

[client.radosgw.node3]
        host = node3
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = s3.yourdomain.com
run on node1
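
Note that rgw_dns_name is what enables virtual-hosted-style bucket URLs (bucketname.s3.yourdomain.com), and for those to work the bucket subdomains have to resolve as well. A minimal BIND-style sketch, using a placeholder IP, might look like this:

s3.yourdomain.com.      IN  A      192.0.2.10
*.s3.yourdomain.com.    IN  CNAME  s3.yourdomain.com.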

Step 7

Next, log in to each of the nodes and install the radosgw package:

apt install radosgw
run on node1, node2, and node3

Step 8

Next, start the gateway on each of the nodes:

systemctl start ceph-radosgw@radosgw.node1
run on node1
systemctl start ceph-radosgw@radosgw.node2
run on node2
systemctl start ceph-radosgw@radosgw.node3
run on node3
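
You'll probably also want each gateway to start on boot. By default RGW listens on port 7480, so a quick curl is an easy way to confirm each gateway is answering; an anonymous request should return a small ListAllMyBuckets XML document:

systemctl enable ceph-radosgw@radosgw.node1
curl http://localhost:7480
run on node1 (repeat with the matching service name on node2 and node3)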

Step 9

Now tag each of the RGW pools with the rgw application on node1:

ceph osd pool application enable .rgw.root rgw
ceph osd pool application enable default.rgw.control rgw
ceph osd pool application enable default.rgw.data.root rgw
ceph osd pool application enable default.rgw.gc rgw
ceph osd pool application enable default.rgw.log rgw
ceph osd pool application enable default.rgw.users.uid rgw
ceph osd pool application enable default.rgw.users.email rgw
ceph osd pool application enable default.rgw.users.keys rgw
ceph osd pool application enable default.rgw.buckets.index rgw
ceph osd pool application enable default.rgw.buckets.data rgw
ceph osd pool application enable default.rgw.lc rgw
run on node1
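
The exact pool names vary between Ceph versions, so if any of the commands above complain about a missing pool, list what radosgw actually created and adjust accordingly:

ceph osd pool ls | grep rgw
run on node1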

Step 10

Create an S3 user; the command prints a JSON blob containing the generated access and secret keys:

radosgw-admin user create --uid=testuser --display-name="Test User" --email=test.user@example.net
run on node1
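
To confirm the S3 interface works end to end, you can point any S3 client at it. A minimal sketch with the AWS CLI, assuming it is installed on a machine that can resolve s3.yourdomain.com (substitute the access and secret keys from the JSON output above):

aws configure set aws_access_key_id <ACCESS_KEY>
aws configure set aws_secret_access_key <SECRET_KEY>
aws --endpoint-url http://s3.yourdomain.com:7480 s3 mb s3://testbucket
aws --endpoint-url http://s3.yourdomain.com:7480 s3 ls
run from any machine with the AWS CLI installed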

By default, radosgw stores its data in automatically created default pools, which might not be what you want, for example if you have dedicated SSD and HDD pools. Assuming the pool you want S3 to use is called hdd_pool, with a placement target called hddgroup and an index pool called hddgroupindex, you can run the following commands to make it the default placement:

radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id hddgroup
radosgw-admin zone placement add --rgw-zone default --placement-id hddgroup --data-pool hdd_pool --index-pool hddgroupindex --data-extra-pool default.rgw.temporary.non-ec
radosgw-admin zonegroup placement default --rgw-zonegroup default --placement-id hddgroup
run on node1
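
Placement changes generally only take effect once the gateways are restarted. You can also double-check the result by dumping the zonegroup and zone configuration, where hddgroup should now appear as the default_placement:

systemctl restart ceph-radosgw@radosgw.node1
radosgw-admin zonegroup get
radosgw-admin zone get
run on node1 (restart the gateway on node2 and node3 as well)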