7.16. Ceph Filesystem (CephFS)

Storage pool type: cephfs

CephFS implements a POSIX-compliant filesystem, using a Ceph storage cluster to store its data. As CephFS builds upon Ceph, it shares most of its properties. This includes redundancy, scalability, self-healing, and high availability.

Tip

Proxmox VE can manage Ceph setups (see Chapter 8, Deploy Hyper-Converged Ceph Cluster), which makes configuring a CephFS storage easier. As modern hardware offers a lot of processing power and RAM, running storage services and VMs on the same node is possible without a significant performance impact.
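If the Ceph setup is managed through Proxmox VE, a CephFS and the matching storage entry can, for example, be created in one step with pveceph; the filesystem name below is just the default and can be changed:

pveceph fs create --name cephfs --add-storage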

To use the CephFS storage plugin, you must replace the stock Debian Ceph client by adding our Ceph repository (Section 3.1.5, “Ceph Pacific Repository”). Once added, run apt update, followed by apt dist-upgrade, to get the newest packages.
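On Debian Bullseye, for example, this boils down to the following steps; the repository line shown is the Ceph Pacific repository from the referenced section and may differ for other releases:

echo "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" > /etc/apt/sources.list.d/ceph.list
apt update
apt dist-upgrade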

Warning

Please ensure that there are no other Ceph repositories configured. Otherwise the installation will fail or there will be mixed package versions on the node, leading to unexpected behavior.
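One way to check for leftover Ceph repository entries before upgrading is to search the APT sources, for example:

grep -r ceph /etc/apt/sources.list /etc/apt/sources.list.d/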

This backend supports the common storage properties nodes, disable, content, as well as the following cephfs-specific properties:

monhost
        List of monitor daemon addresses. Optional, only needed if Ceph is not running on the Proxmox VE cluster.

path
        The local mount point. Optional, defaults to /mnt/pve/<STORAGE_ID>/.

username
        Ceph user ID. Optional, only needed if Ceph is not running on the Proxmox VE cluster, where it defaults to admin.

subdir
        CephFS subdirectory to mount. Optional, defaults to /.

fuse
        Access CephFS through FUSE, instead of the kernel client. Optional, defaults to 0.

Configuration example for an external Ceph cluster (/etc/pve/storage.cfg). 

cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
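
Alternatively, the same storage can be added on the command line with pvesm; the values mirror the example configuration above:

pvesm add cephfs cephfs-external --monhost "10.1.1.20 10.1.1.21 10.1.1.22" --content backup --username admin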

If you use cephx authentication, which is enabled by default, you need to copy the secret from your external Ceph cluster to a Proxmox VE host.

Create the directory /etc/pve/priv/ceph with

mkdir /etc/pve/priv/ceph

Then copy the secret

scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret

The secret must be renamed to match your <STORAGE_ID>. Copying the secret generally requires root privileges. The file must contain only the secret key itself, as opposed to the rbd backend, which also contains a [client.userid] section.
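For illustration, a cephfs secret file holds nothing but the key on a single line; the key shown here is a made-up placeholder:

AQD3VapiAAAAABAAPlaceholderKeyOnly000000==

An rbd keyring for the same user would instead wrap the key in a section:

[client.userid]
        key = AQD3VapiAAAAABAAPlaceholderKeyOnly000000==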

A secret can be retrieved from the Ceph cluster (as Ceph admin) by issuing the command below, where userid is the client ID that has been configured to access the cluster. For further information on Ceph user management, see the Ceph docs [9].

ceph auth get-key client.userid > cephfs.secret

If Ceph is installed locally on the Proxmox VE cluster, that is, if it was set up using pveceph, this is all done automatically.

The cephfs backend is a POSIX-compliant filesystem on top of a Ceph cluster. While no known bugs exist, snapshots are not yet guaranteed to be stable, as they lack sufficient testing.