
Block storage

Ceph block storage clients interact directly with RADOS, so a separate daemon is not required (unlike CephFS and RGW, which depend on the MDS and RADOS Gateway daemons respectively). A Ceph block device is known as a RADOS Block Device (RBD) and is available as soon as a Ceph cluster is deployed. This also makes RBD highly available by default.
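As an illustration of this, once a cluster is up, images can be created and listed directly with the rbd CLI from any host holding admin credentials, such as a monitor node. A minimal sketch follows; the pool and image names here are placeholders for illustration only, not names used later in this guide:

# Placeholder names: 'demo-pool' and 'demo-image' are illustrative only.
sudo ceph osd pool create demo-pool
sudo rbd pool init demo-pool
sudo rbd create demo-pool/demo-image --size 1G
sudo rbd ls demo-pool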

RBD client usage

This section provides optional instructions for verifying the RBD service by setting up a simple client environment. Deploy the client using the steps provided in the Client setup appendix.
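If you have not yet worked through that appendix, note that the client in this example is simply a machine deployed with the ubuntu charm and given the ceph-common package. The following is a sketch of the idea only; the appendix remains the authoritative procedure, including how /etc/ceph/ceph.conf is populated on the client:

# Sketch only - see the Client setup appendix for the full procedure.
juju deploy ubuntu ceph-client
juju ssh ceph-client/0 -- sudo apt install -y ceph-common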

An example deployment will have a juju status output similar to the following:

Model  Controller     Cloud/Region     Version  SLA          Timestamp
ceph   my-controller  my-maas/default  3.5.2    unsupported  20:34:16Z

App           Version  Status  Scale  Charm         Channel       Rev  OS      Notes
ceph-mon      18.2.0   active      3  ceph-mon      reef/stable    93  ubuntu  
ceph-osd      18.2.0   active      3  ceph-osd      reef/stable   528  ubuntu
ceph-client   22.04    active      1  ubuntu        stable         18  ubuntu  

Unit             Workload  Agent  Machine  Public address  Ports   Message
ceph-client/0*   active    idle   3        10.0.0.240              ready
ceph-mon/0       active    idle   0/lxd/1  10.0.0.247              Unit is ready and clustered
ceph-mon/1       active    idle   1/lxd/1  10.0.0.242              Unit is ready and clustered
ceph-mon/2*      active    idle   2/lxd/1  10.0.0.249              Unit is ready and clustered
ceph-osd/0       active    idle   0        10.0.0.229              Unit is ready (2 OSD)
ceph-osd/1*      active    idle   1        10.0.0.230              Unit is ready (2 OSD)
ceph-osd/2       active    idle   2        10.0.0.252              Unit is ready (2 OSD)

The client host is represented by the ceph-client/0 unit.

Create a Ceph pool (‘libvirt-pool’) and an RBD user (‘client.libvirt’), collect the user’s keyring file, and transfer it to the client:

juju run ceph-mon/0 create-pool name=libvirt-pool app-name=rbd

juju exec --unit ceph-mon/0 -- \
   sudo ceph auth get-or-create client.libvirt \
   mon 'profile rbd' osd 'profile rbd pool=libvirt-pool' | \
   tee ceph.client.libvirt.keyring

juju scp ceph.client.libvirt.keyring ceph-client/0:
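Before moving to the client, you can optionally confirm that the pool and user now exist by querying a monitor (exact listings will vary by deployment):

juju exec --unit ceph-mon/0 -- sudo ceph osd pool ls
juju exec --unit ceph-mon/0 -- sudo ceph auth get client.libvirt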

Connect to the client:

juju ssh ceph-client/0

From the RBD client, configure Ceph using the keyring file and set its ownership and permissions:

sudo mv ~ubuntu/ceph.client.libvirt.keyring /etc/ceph
sudo chmod 600 /etc/ceph/ceph.client.libvirt.keyring
sudo chown ubuntu: /etc/ceph/ceph.client.libvirt.keyring
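As an optional check, the new user should now be able to authenticate against the cluster and list the (still empty) pool. This assumes ceph-common and /etc/ceph/ceph.conf were put in place during client setup, as per the appendix:

# Assumes ceph-common and /etc/ceph/ceph.conf are present (see Client setup appendix).
rbd --id libvirt -p libvirt-pool ls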

Install the requisite image creation software and verify that an RBD image can be created:

sudo apt install -y qemu-utils
qemu-img create -f raw rbd:libvirt-pool/image-4d:id=libvirt 4G
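If the command succeeds, the image's metadata can be read back through the same RBD protocol driver as an optional check (‘image-4d’ is the image created above):

qemu-img info rbd:libvirt-pool/image-4d:id=libvirt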

From the Juju client, RBD images and pools can be inspected by querying the cluster:

juju ssh ceph-mon/0 sudo rbd -p libvirt-pool ls
juju ssh ceph-mon/0 sudo rados df --pool libvirt-pool
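Per-image detail (size, object count, features) can also be shown for the image created earlier, for example:

juju ssh ceph-mon/0 sudo rbd -p libvirt-pool info image-4d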
