Introduction
This guide provides a detailed overview of Debian 12 Bookworm and Ceph Pacific, covering the what, who, where, when, why, and how, along with the consequences and a conclusion.
Overview
What
Debian 12 Bookworm is a version of the Debian operating system that includes updated features and security enhancements. Ceph Pacific is a storage platform that provides scalable object, block, and file storage.
Who
Debian 12 is maintained by the Debian Project, a community of developers and users. Ceph Pacific is managed by the Ceph community, which includes contributors from various organizations and individuals.
Where
Debian 12 and Ceph Pacific are used worldwide, from personal computers to large-scale enterprise environments. They can be deployed on-premises or in cloud environments.
When
Debian 12 Bookworm was released in June 2023. Ceph Pacific was released in April 2021 and continues to receive updates and support.
Why
Using Debian 12 and Ceph Pacific offers various benefits and challenges, as outlined below:
| Pros | Cons |
|---|---|
| Stable and secure operating system. | Can be challenging for new users to set up. |
| Ceph provides scalable and flexible storage solutions. | Requires significant hardware resources. |
| Community-driven with extensive support and documentation. | Complex configuration and maintenance. |
How
To implement Debian 12 Bookworm and Ceph Pacific, follow these steps (a rough command-level sketch follows the table):

| Step | Action |
|---|---|
| Step 1 | Download and install Debian 12 from the official Debian website. |
| Step 2 | Set up the necessary repositories for Ceph Pacific. |
| Step 3 | Follow the installation and configuration guidelines provided by the Ceph community. |
| Step 4 | Test the setup in a staging environment before going live. |
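The following is only a rough sketch of the steps above for illustration; Debian 12's own repositories already ship Ceph 16 "Pacific" (as the rest of this guide shows), so no third-party repository is strictly required.
# Step 1: install Debian 12 from the official installer, then refresh the package index
apt update
# Steps 2-3: install Ceph from the Debian 12 repositories and follow the Ceph documentation
apt -y install ceph
# Step 4: confirm the installed release is Pacific (16.2.x), then test in a staging environment first
ceph --version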
Consequences
Implementing Debian 12 Bookworm with Ceph Pacific can lead to various outcomes. On the positive side, you gain a stable platform with scalable, fault-tolerant storage backed by an active community. On the negative side, expect a steeper learning curve, non-trivial hardware requirements, and ongoing configuration and maintenance effort, as reflected in the pros and cons above.
Conclusion
Debian 12 Bookworm and Ceph Pacific together offer a robust solution for scalable, secure storage. While they come with their own set of challenges, the benefits make them a worthwhile choice for those in need of reliable and flexible storage solutions.
Configure Cluster #1
Install the distributed storage system Ceph to configure a storage cluster. This example configures a Ceph cluster with 3 nodes as shown below. In addition, each storage node has a free block device to be used by Ceph ([/dev/sdb] in this example).
            +------------------------------+------------------------------+
            |                              |                              |
            |10.0.0.51                     |10.0.0.52                     |10.0.0.53
+-----------+-------------+    +-----------+-------------+    +-----------+-------------+
|  [node01.bizantum.lab]  |    |  [node02.bizantum.lab]  |    |  [node03.bizantum.lab]  |
|     Object Storage      +----+     Object Storage      +----+     Object Storage      |
|     Monitor Daemon      |    |                         |    |                         |
|     Manager Daemon      |    |                         |    |                         |
+-------------------------+    +-------------------------+    +-------------------------+
Step [1] Generate an SSH key-pair on the [Monitor Daemon] node (called the Admin Node here) and distribute it to each node. The key-pair is created with no passphrase for the [root] account in this example. If you use a regular (non-root) account instead, sudo also needs to be configured. If you set a passphrase on the SSH key-pair, an SSH agent is also required (see the sketch after the key-distribution commands below).
root@node01:~# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
root@node01:~# vi ~/.ssh/config
# create new (define each Node and SSH user)
Host node01
Hostname node01.bizantum.lab
User root
Host node02
Hostname node02.bizantum.lab
User root
Host node03
Hostname node03.bizantum.lab
User root
root@node01:~# chmod 600 ~/.ssh/config
# transfer public key
root@node01:~# ssh-copy-id node01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node01.bizantum.lab (10.0.0.51)' can't be established.
ED25519 key fingerprint is SHA256:gKPKEitSkM9Ya0mzG6no7NXsLVpa+SmyHkoJP8p0J6I.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node01.bizantum.lab's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'node01'"
and check to make sure that only the key(s) you wanted were added.
root@node01:~# ssh-copy-id node02
root@node01:~# ssh-copy-id node03
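If you did set a passphrase on the key-pair, a minimal ssh-agent setup on the Admin Node might look like the following sketch (the key path assumes the default [~/.ssh/id_rsa] generated above).
# start an agent for the current shell and cache the decrypted key (only needed with a passphrase)
root@node01:~# eval "$(ssh-agent -s)"
root@node01:~# ssh-add ~/.ssh/id_rsa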
Step [2] Install Ceph on each node from the Admin Node.
root@node01:~# for NODE in node01 node02 node03
do
ssh $NODE "apt update; apt -y install ceph"
done
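To confirm the packages installed cleanly, an optional check that every node reports the same Pacific (16.2.x) release can be run from the Admin Node, for example:
# optional: verify the installed Ceph release on every node
root@node01:~# for NODE in node01 node02 node03
do
    ssh $NODE "hostname; ceph --version"
done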
Step [3] Configure the [Monitor Daemon] and [Manager Daemon] on the Admin Node.
root@node01:~# uuidgen
f6eabaad-6442-481b-bfb1-0bb79de773e3
# create new config
# file name ⇒ (any Cluster Name).conf
# set Cluster Name [ceph] (default) on this example ⇒ [ceph.conf]
root@node01:~# vi /etc/ceph/ceph.conf
[global]
# specify cluster network for monitoring
cluster network = 10.0.0.0/24
# specify public network
public network = 10.0.0.0/24
# specify UUID generated above
fsid = f6eabaad-6442-481b-bfb1-0bb79de773e3
# specify IP address of Monitor Daemon
mon host = 10.0.0.51
# specify Hostname of Monitor Daemon
mon initial members = node01
osd pool default crush rule = -1
# mon.(Node name)
[mon.node01]
# specify Hostname of Monitor Daemon
host = node01
# specify IP address of Monitor Daemon
mon addr = 10.0.0.51
# allow to delete pools
mon allow pool delete = true
# generate secret key for Cluster monitoring
root@node01:~# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring
# generate secret key for Cluster admin
root@node01:~# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring
# generate key for bootstrap
root@node01:~# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
# import generated key
root@node01:~# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
root@node01:~# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
# generate monitor map
root@node01:~# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk {'print $NF'})
root@node01:~# NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk {'print $NF'})
root@node01:~# NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk {'print $NF'})
root@node01:~# monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to f6eabaad-6442-481b-bfb1-0bb79de773e3
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)
# create a directory for Monitor Daemon
# directory name ⇒ (Cluster Name)-(Node Name)
root@node01:~# mkdir /var/lib/ceph/mon/ceph-node01
# associate key and monmap to Monitor Daemon
# --cluster (Cluster Name)
root@node01:~# ceph-mon --cluster ceph --mkfs -i $NODENAME --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
root@node01:~# chown ceph:ceph /etc/ceph/ceph.*
root@node01:~# chown -R ceph:ceph /var/lib/ceph/mon/ceph-node01 /var/lib/ceph/bootstrap-osd
root@node01:~# systemctl enable --now ceph-mon@$NODENAME
# enable Messenger v2 Protocol
root@node01:~# ceph mon enable-msgr2
root@node01:~# ceph config set mon auth_allow_insecure_global_id_reclaim false
# enable Placement Groups auto scale module
root@node01:~# ceph mgr module enable pg_autoscaler
# create a directory for Manager Daemon
# directory name ⇒ (Cluster Name)-(Node Name)
root@node01:~# mkdir /var/lib/ceph/mgr/ceph-node01
# create auth key
root@node01:~# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.node01]
key = AQCr1I9ki5vjHRAA6v8l+njtODvbRdDlp+1ypw==
root@node01:~# ceph auth get-or-create mgr.node01 | tee /etc/ceph/ceph.mgr.admin.keyring
root@node01:~# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-node01/keyring
root@node01:~# chown ceph:ceph /etc/ceph/ceph.mgr.admin.keyring
root@node01:~# chown -R ceph:ceph /var/lib/ceph/mgr/ceph-node01
root@node01:~# systemctl enable --now ceph-mgr@$NODENAME
Step [4] Confirm the cluster status. It's OK if the [Monitor Daemon] and [Manager Daemon] are enabled as shown below. OSDs (Object Storage Devices) are configured in the next section, so [HEALTH_WARN] is no problem at this point.
root@node01:~# ceph -s
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum node01 (age 2m)
mgr: node01(active, since 39s)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
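To see the reason behind the warning, the health detail can also be checked from the Admin Node, for example:
# show the detail behind [HEALTH_WARN] (expected to report only the missing OSDs at this stage)
root@node01:~# ceph health detail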
Configure Cluster #2
This section continues configuring the Ceph storage cluster with the 3 nodes shown below. Each storage node has a free block device to be used by Ceph ([/dev/sdb] in this example).
            +------------------------------+------------------------------+
            |                              |                              |
            |10.0.0.51                     |10.0.0.52                     |10.0.0.53
+-----------+-------------+    +-----------+-------------+    +-----------+-------------+
|  [node01.bizantum.lab]  |    |  [node02.bizantum.lab]  |    |  [node03.bizantum.lab]  |
|     Object Storage      +----+     Object Storage      +----+     Object Storage      |
|     Monitor Daemon      |    |                         |    |                         |
|     Manager Daemon      |    |                         |    |                         |
+-------------------------+    +-------------------------+    +-------------------------+
Step [1] Configure the [Monitor Daemon] and [Manager Daemon] first, as described in the previous section.
Step [2] Configure OSDs (Object Storage Devices) on each node from the Admin Node.
# configure settings for OSD to each Node
root@node01:~# for NODE in node01 node02 node03
do
if [ ! ${NODE} = "node01" ]
then
scp /etc/ceph/ceph.conf ${NODE}:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.client.admin.keyring ${NODE}:/etc/ceph
scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${NODE}:/var/lib/ceph/bootstrap-osd
fi
ssh $NODE \
"chown ceph:ceph /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*; \
parted --script /dev/sdb 'mklabel gpt'; \
parted --script /dev/sdb 'mkpart primary 0% 100%'; \
ceph-volume lvm create --data /dev/sdb1"
done
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b462202e-109e-4957-bc97-c71b8e74b062
Running command: vgcreate --force --yes ceph-ac45b627-274f-4b8b-baf4-8a81769a1d14 /dev/sdb1
stdout: Physical volume "/dev/sdb1" successfully created.
stdout: Volume group "ceph-ac45b627-274f-4b8b-baf4-8a81769a1d14" successfully created
Running command: lvcreate --yes -l 40959 -n osd-block-b462202e-109e-4957-bc97-c71b8e74b062 ceph-ac45b627-274f-4b8b-baf4-8a81769a1d14
stdout: Logical volume "osd-block-b462202e-109e-4957-bc97-c71b8e74b062" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-ac45b627-274f-4b8b-baf4-8a81769a1d14/osd-block-b462202e-109e-4957-bc97-c71b8e74b062
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/ln -s /dev/ceph-ac45b627-274f-4b8b-baf4-8a81769a1d14/osd-block-b462202e-109e-4957-bc97-c71b8e74b062 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
stderr: got monmap epoch 2
--> Creating keyring file for osd.0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b462202e-109e-4957-bc97-c71b8e74b062 --setuser ceph --setgroup ceph
stderr: 2023-06-18T23:14:19.555-0500 7f33c082a040 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
--> ceph-volume lvm prepare successful for: /dev/sdb1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ac45b627-274f-4b8b-baf4-8a81769a1d14/osd-block-b462202e-109e-4957-bc97-c71b8e74b062 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-ac45b627-274f-4b8b-baf4-8a81769a1d14/osd-block-b462202e-109e-4957-bc97-c71b8e74b062 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-b462202e-109e-4957-bc97-c71b8e74b062
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-b462202e-109e-4957-bc97-c71b8e74b062.service → /lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
--> ceph-volume lvm activate successful for osd ID: 0
--> ceph-volume lvm create successful for: /dev/sdb1
ceph.conf 100% 273 359.7KB/s 00:00
ceph.client.admin.keyring 100% 151 753.7KB/s 00:00
ceph.keyring 100% 129 609.2KB/s 00:00
.....
.....
# confirm cluster status
# that's OK if [HEALTH_OK]
root@node01:~# ceph -s
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_OK
services:
mon: 1 daemons, quorum node01 (age 9m)
mgr: node01(active, since 7m)
osd: 3 osds: 3 up (since 2m), 3 in (since 2m)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 15 MiB used, 480 GiB / 480 GiB avail
pgs: 1 active+clean
# confirm OSD tree
root@node01:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.46857 root default
-3 0.15619 host node01
0 hdd 0.15619 osd.0 up 1.00000 1.00000
-5 0.15619 host node02
1 hdd 0.15619 osd.1 up 1.00000 1.00000
-7 0.15619 host node03
2 hdd 0.15619 osd.2 up 1.00000 1.00000
root@node01:~# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 480 GiB 480 GiB 15 MiB 15 MiB 0
TOTAL 480 GiB 480 GiB 15 MiB 15 MiB 0
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 152 GiB
root@node01:~# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.15619 1.00000 160 GiB 5.0 MiB 152 KiB 0 B 4.9 MiB 160 GiB 0.00 1.01 1 up
1 hdd 0.15619 1.00000 160 GiB 5.0 MiB 152 KiB 0 B 4.8 MiB 160 GiB 0.00 1.00 1 up
2 hdd 0.15619 1.00000 160 GiB 5.0 MiB 152 KiB 0 B 4.8 MiB 160 GiB 0.00 1.00 1 up
TOTAL 480 GiB 15 MiB 456 KiB 0 B 14 MiB 480 GiB 0.00
MIN/MAX VAR: 1.00/1.01 STDDEV: 0
Use Block Device
Configure a client host [dlp] to use the Ceph storage as follows.
            +----------------------+
            |  [dlp.bizantum.lab]  |10.0.0.30
            |      Ceph Client     |
            +-----------+----------+
                        |
            +-----------+------------------+------------------------------+
            |                              |                              |
            |10.0.0.51                     |10.0.0.52                     |10.0.0.53
+-----------+-------------+    +-----------+-------------+    +-----------+-------------+
|  [node01.bizantum.lab]  |    |  [node02.bizantum.lab]  |    |  [node03.bizantum.lab]  |
|     Object Storage      +----+     Object Storage      +----+     Object Storage      |
|     Monitor Daemon      |    |                         |    |                         |
|     Manager Daemon      |    |                         |    |                         |
+-------------------------+    +-------------------------+    +-------------------------+
For example, Create a block device and mount it on a Client Host.
Step [1]Transfer SSH public key to Client Host and Configure it from Admin Node.
# transfer public key
root@node01:~# ssh-copy-id dlp
# install required packages
root@node01:~# ssh dlp "apt -y install ceph-common"
# transfer required files to Client Host
root@node01:~# scp /etc/ceph/ceph.conf dlp:/etc/ceph/
ceph.conf 100% 273 343.7KB/s 00:00
root@node01:~# scp /etc/ceph/ceph.client.admin.keyring dlp:/etc/ceph/
ceph.client.admin.keyring 100% 151 191.1KB/s 00:00
root@node01:~# ssh dlp "chown ceph:ceph /etc/ceph/ceph.*"
Step [2]Create a Block device and mount it on a Client Host.
# create default RBD pool [rbd]
root@dlp:~# ceph osd pool create rbd 32
pool 'rbd' created
# enable Placement Groups auto scale mode
root@dlp:~# ceph osd pool set rbd pg_autoscale_mode on
set pool 3 pg_autoscale_mode to on
# initialize the pool
root@dlp:~# rbd pool init rbd
root@dlp:~# ceph osd pool autoscale-status
POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE BULK
device_health_metrics 0 3.0 479.9G 0.0000 1.0 1 on False
rbd 19 3.0 479.9G 0.0000 1.0 32 on False
# create a block device with 10G
root@dlp:~# rbd create --size 10G --pool rbd rbd01
# confirm
root@dlp:~# rbd ls -l
NAME SIZE PARENT FMT PROT LOCK
rbd01 10 GiB 2
# map the block device
root@dlp:~# rbd map rbd01
/dev/rbd0
# confirm
root@dlp:~# rbd showmapped
id pool namespace image snap device
0 rbd rbd01 - /dev/rbd0
# format with EXT4
root@dlp:~# mkfs.ext4 /dev/rbd0
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: b826aec2-064e-4bae-9d30-b37a3ec5ee15
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
root@dlp:~# mount /dev/rbd0 /mnt
root@dlp:~# df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs tmpfs 392M 572K 391M 1% /run
/dev/mapper/debian--vg-root ext4 28G 1.4G 26G 6% /
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/vda1 ext2 455M 58M 373M 14% /boot
tmpfs tmpfs 392M 0 392M 0% /run/user/0
/dev/rbd0 ext4 9.8G 24K 9.3G 1% /mnt
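If the mapping and mount should survive a reboot, the [rbdmap] helper shipped with [ceph-common] can be used. The following is only a sketch; the rbdmap entry format, mount options, and fstab line are assumptions, not part of the setup above.
# register the image so that rbdmap.service maps it at boot
root@dlp:~# echo "rbd/rbd01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
root@dlp:~# systemctl enable rbdmap.service
# mount it via fstab using the persistent device path [/dev/rbd/rbd/rbd01]
root@dlp:~# echo "/dev/rbd/rbd/rbd01 /mnt ext4 defaults,noatime,_netdev 0 0" >> /etc/fstab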
Step [3] To delete block devices or pools you created, run commands like the following. To delete a pool, [mon allow pool delete = true] must be set in the [Monitor Daemon] configuration.
# unmap
root@dlp:~# rbd unmap /dev/rbd/rbd/rbd01
# delete a block device
root@dlp:~# rbd rm rbd01 -p rbd
Removing image: 100% complete...done.
# delete a pool
# ceph osd pool delete [Pool Name] [Pool Name] ***
root@dlp:~# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
pool 'rbd' removed
Use File System
Configure a client host [dlp] to use the Ceph storage as follows.
            +----------------------+
            |  [dlp.bizantum.lab]  |10.0.0.30
            |      Ceph Client     |
            +-----------+----------+
                        |
            +-----------+------------------+------------------------------+
            |                              |                              |
            |10.0.0.51                     |10.0.0.52                     |10.0.0.53
+-----------+-------------+    +-----------+-------------+    +-----------+-------------+
|  [node01.bizantum.lab]  |    |  [node02.bizantum.lab]  |    |  [node03.bizantum.lab]  |
|     Object Storage      +----+     Object Storage      +----+     Object Storage      |
|     Monitor Daemon      |    |                         |    |                         |
|     Manager Daemon      |    |                         |    |                         |
+-------------------------+    +-------------------------+    +-------------------------+
For example, mount it as a filesystem on a client host.
Step [1]Transfer SSH public key to Client Host and Configure it from Admin Node.
# transfer public key
root@node01:~# ssh-copy-id dlp
# install required packages
root@node01:~# ssh dlp "apt -y install ceph-fuse"
# transfer required files to Client Host
root@node01:~# scp /etc/ceph/ceph.conf dlp:/etc/ceph/
ceph.conf 100% 273 277.9KB/s 00:00
root@node01:~# scp /etc/ceph/ceph.client.admin.keyring dlp:/etc/ceph/
ceph.client.admin.keyring 100% 151 199.8KB/s 00:00
root@node01:~# ssh dlp "chown ceph:ceph /etc/ceph/ceph.*"
Step [2] Configure MDS (MetaData Server) on a node. It is configured on the [node01] node in this example.
# create directory
# directory name ⇒ (Cluster Name)-(Node Name)
root@node01:~# mkdir -p /var/lib/ceph/mds/ceph-node01
root@node01:~# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
creating /var/lib/ceph/mds/ceph-node01/keyring
root@node01:~# chown -R ceph:ceph /var/lib/ceph/mds/ceph-node01
root@node01:~# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
added key for mds.node01
root@node01:~# systemctl enable --now ceph-mds@node01
Step [3] Create 2 RADOS pools for data and metadata on the MDS node. Refer to the official documentation to choose the placement-group count (32 in the example below) ⇒ http://docs.ceph.com/docs/master/rados/operations/placement-groups/
root@node01:~# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
root@node01:~# ceph osd pool create cephfs_metadata 32
pool 'cephfs_metadata' created
root@node01:~# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 5 and data pool 4
root@node01:~# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
root@node01:~# ceph mds stat
cephfs:1 {0=node01=up:active}
root@node01:~# ceph fs status cephfs
cephfs - 0 clients
======
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active node01 Reqs: 0 /s 10 13 12 0
POOL TYPE USED AVAIL
cephfs_metadata metadata 96.0k 151G
cephfs_data data 0 151G
MDS version: ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)
Step [4]Mount CephFS on a Client Host.
# Base64 encode client key
root@dlp:~# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
root@dlp:~# chmod 600 admin.key
root@dlp:~# mount -t ceph node01.bizantum.lab:6789:/ /mnt -o name=admin,secretfile=admin.key
root@dlp:~# df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs tmpfs 392M 568K 391M 1% /run
/dev/mapper/debian--vg-root ext4 28G 1.4G 26G 6% /
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/vda1 ext2 455M 58M 373M 14% /boot
tmpfs tmpfs 392M 0 392M 0% /run/user/0
10.0.0.51:6789:/ ceph 152G 0 152G 0% /mnt
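To make the CephFS mount persistent across reboots, an fstab entry along these lines could be added (a sketch; the secret file is the [admin.key] created above, copied to a permanent location).
# keep the key in a stable path and reference it from fstab
root@dlp:~# cp admin.key /etc/ceph/admin.key
root@dlp:~# chmod 600 /etc/ceph/admin.key
root@dlp:~# echo "node01.bizantum.lab:6789:/ /mnt ceph name=admin,secretfile=/etc/ceph/admin.key,noatime,_netdev 0 0" >> /etc/fstab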
Ceph Object Gateway
Enable the Ceph Object Gateway (RADOSGW) to access the Ceph cluster storage via an Amazon S3 or OpenStack Swift compatible API. This example is based on the following environment.
+----------------------+           |           +------------------------+
|  [dlp.bizantum.lab]  |10.0.0.30  |  10.0.0.31|   [www.bizantum.lab]   |
|      Ceph Client     +-----------+-----------+         RADOSGW        |
|                      |           |           |                        |
+----------------------+           |           +------------------------+
            +----------------------+-------+------------------------------+
            |                              |                              |
            |10.0.0.51                     |10.0.0.52                     |10.0.0.53
+-----------+-------------+    +-----------+-------------+    +-----------+-------------+
|  [node01.bizantum.lab]  |    |  [node02.bizantum.lab]  |    |  [node03.bizantum.lab]  |
|     Object Storage      +----+     Object Storage      +----+     Object Storage      |
|     Monitor Daemon      |    |                         |    |                         |
|     Manager Daemon      |    |                         |    |                         |
+-------------------------+    +-------------------------+    +-------------------------+
Step [1]Transfer required files to RADOSGW Node and Configure it from Admin Node.
# transfer public key
root@node01:~# ssh-copy-id www
# install required packages
root@node01:~# ssh www "apt -y install radosgw"
root@node01:~# vi /etc/ceph/ceph.conf
# add to the end
# client.rgw.(Node Name)
[client.rgw.www]
# IP address of the Node
host = 10.0.0.31
# DNS name
rgw dns name = www.bizantum.lab
keyring = /var/lib/ceph/radosgw/ceph-rgw.www/keyring
log file = /var/log/ceph/radosgw.gateway.log
# transfer files
root@node01:~# scp /etc/ceph/ceph.conf www:/etc/ceph/
ceph.conf 100% 435 179.5KB/s 00:00
root@node01:~# scp /etc/ceph/ceph.client.admin.keyring www:/etc/ceph/
ceph.client.admin.keyring 100% 151 84.2KB/s 00:00
# configure RADOSGW
root@node01:~# ssh www \
"mkdir -p /var/lib/ceph/radosgw/ceph-rgw.www; \
ceph auth get-or-create client.rgw.www osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.www/keyring; \
chown ceph:ceph /etc/ceph/ceph.*; \
chown -R ceph:ceph /var/lib/ceph/radosgw; \
systemctl enable --now ceph-radosgw@rgw.www"
# verify status
# that's OK if a response like the following is returned
root@node01:~# curl www.bizantum.lab:7480
< ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
Step [2] On the Object Gateway node, create an S3 compatible user who can authenticate to the Object Gateway.
# for example, create [serverworld] user
root@www:~# radosgw-admin user create --uid=serverworld --display-name="Server World" --email=admin@bizantum.lab
{
"user_id": "serverworld",
"display_name": "Server World",
"email": "admin@bizantum.lab",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "serverworld",
"access_key": "SBYPMGYVJUT3E9TBVF7Y",
"secret_key": "IDhLDjwcvqL6jz2dc5MkF1ylTeLQnHsyMixfNucm"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
# show user list
root@www:~# radosgw-admin user list
[
"serverworld"
]
root@www:~# radosgw-admin user info --uid=serverworld
{
"user_id": "serverworld",
"display_name": "Server World",
"email": "admin@bizantum.lab",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "serverworld",
"access_key": "SBYPMGYVJUT3E9TBVF7Y",
"secret_key": "IDhLDjwcvqL6jz2dc5MkF1ylTeLQnHsyMixfNucm"
}
.....
.....
Step [3] Verify access via the S3 interface by creating a Python test script on a client computer.
root@dlp:~# apt -y install python3-boto3
root@dlp:~# vi s3_test.py
import sys
import boto3
from botocore.config import Config
# user's access-key and secret-key you added on [2] section
session = boto3.session.Session(
aws_access_key_id = 'SBYPMGYVJUT3E9TBVF7Y',
aws_secret_access_key = 'IDhLDjwcvqL6jz2dc5MkF1ylTeLQnHsyMixfNucm'
)
# Object Gateway URL
s3client = session.client(
's3',
endpoint_url = 'http://10.0.0.31:7480',
config = Config()
)
# create [my-new-bucket]
bucket = s3client.create_bucket(Bucket = 'my-new-bucket')
# list Buckets
print(s3client.list_buckets())
# remove [my-new-bucket]
s3client.delete_bucket(Bucket = 'my-new-bucket')
root@dlp:~# python3 s3_test.py
{'ResponseMetadata': {'RequestId': 'tx000005fc1872fb6200cb1-00648fdf64-3793-default', 'HostId': '', 'HTTPStatusCode': 200, 'HTTPHeaders': {'transfer-encoding': 'chunked', 'x-amz-request-id': 'tx000005fc1872fb6200cb1-00648fdf64-3793-default', 'content-type': 'application/xml', 'date': 'Mon, 19 Jun 2023 04:53:56 GMT', 'connection': 'Keep-Alive'}, 'RetryAttempts': 0}, 'Buckets': [{'Name': 'my-new-bucket', 'CreationDate': datetime.datetime(2023, 6, 19, 4, 53, 53, 877000, tzinfo=tzutc())}], 'Owner': {'DisplayName': 'Server World', 'ID': 'serverworld'}}
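As an alternative cross-check from the shell, an S3 client such as the AWS CLI can talk to the same endpoint. This is only a sketch under the assumption that [awscli] is installed on the client; it is not part of the setup above. The keys and endpoint are the ones created in step [2].
# hypothetical check with awscli against the RADOSGW endpoint
root@dlp:~# apt -y install awscli
root@dlp:~# export AWS_ACCESS_KEY_ID=SBYPMGYVJUT3E9TBVF7Y
root@dlp:~# export AWS_SECRET_ACCESS_KEY=IDhLDjwcvqL6jz2dc5MkF1ylTeLQnHsyMixfNucm
root@dlp:~# aws --endpoint-url http://10.0.0.31:7480 s3 mb s3://my-new-bucket
root@dlp:~# aws --endpoint-url http://10.0.0.31:7480 s3 ls
root@dlp:~# aws --endpoint-url http://10.0.0.31:7480 s3 rb s3://my-new-bucket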
CephFS + NFS-Ganesha
Install NFS-Ganesha to mount the Ceph File System via the NFS protocol. For example, configure an NFS export of the CephFS set up earlier.
Step [1]Install and Configure NFS-Ganesha on CephFS Node.
root@node01:~# apt -y install nfs-ganesha-ceph
root@node01:~# mv /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.org
root@node01:~# vi /etc/ganesha/ganesha.conf
# create new
NFS_CORE_PARAM {
# disable NLM
Enable_NLM = false;
# disable RQUOTA (not supported on CephFS)
Enable_RQUOTA = false;
# NFS protocol
Protocols = 4;
}
EXPORT_DEFAULTS {
# default access mode
Access_Type = RW;
}
EXPORT {
# unique ID
Export_Id = 101;
# mount path of CephFS
Path = "/";
FSAL {
name = CEPH;
# hostname or IP address of this Node
hostname="10.0.0.51";
}
# setting for root Squash
Squash="No_root_squash";
# NFSv4 Pseudo path
Pseudo="/vfs_ceph";
# allowed security options
SecType = "sys";
}
LOG {
# default log level
Default_Log_Level = WARN;
}
root@node01:~# systemctl restart nfs-ganesha
Step [2]Verify NFS mounting on a Client Host.
root@client:~# apt -y install nfs-common
# specify Pseudo path set on [Pseudo=***] in ganesha.conf
root@client:~# mount -t nfs4 node01.bizantum.lab:/vfs_ceph /mnt
root@client:~# df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs tmpfs 392M 588K 391M 1% /run
/dev/mapper/debian--vg-root ext4 28G 1.4G 26G 6% /
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/vda1 ext2 455M 58M 373M 14% /boot
tmpfs tmpfs 392M 0 392M 0% /run/user/0
node01.bizantum.lab:/vfs_ceph nfs4 152G 0 152G 0% /mnt
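If the NFS mount should come up automatically at boot, an fstab entry such as the following could be added on the client (a sketch; the mount options are an assumption).
# persistent NFSv4 mount of the Ganesha pseudo path
root@client:~# echo "node01.bizantum.lab:/vfs_ceph /mnt nfs4 _netdev,noatime 0 0" >> /etc/fstab
# verify that the new entry parses and mounts
root@client:~# mount -a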
Enable Dashboard
Enable the Ceph Dashboard to manage the Ceph cluster from a web console. This example is based on the following environment.
+----------------------+           |           +------------------------+
|  [dlp.bizantum.lab]  |10.0.0.30  |  10.0.0.31|   [www.bizantum.lab]   |
|      Ceph Client     +-----------+-----------+         RADOSGW        |
|                      |           |           |                        |
+----------------------+           |           +------------------------+
            +----------------------+-------+------------------------------+
            |                              |                              |
            |10.0.0.51                     |10.0.0.52                     |10.0.0.53
+-----------+-------------+    +-----------+-------------+    +-----------+-------------+
|  [node01.bizantum.lab]  |    |  [node02.bizantum.lab]  |    |  [node03.bizantum.lab]  |
|     Object Storage      +----+     Object Storage      +----+     Object Storage      |
|     Monitor Daemon      |    |                         |    |                         |
|     Manager Daemon      |    |                         |    |                         |
+-------------------------+    +-------------------------+    +-------------------------+
Step [1] Enable the Dashboard module on the [Manager Daemon] node. The Dashboard requires SSL/TLS; a self-signed certificate is created in this example.
root@node01:~# apt -y install ceph-mgr-dashboard
root@node01:~# ceph mgr module enable dashboard
# create self-signed certificate
root@node01:~# ceph dashboard create-self-signed-cert
Self-signed certificate created
# create a user for Dashboard
# [ceph dashboard ac-user-create (username) -i (password file) administrator]
root@node01:~# echo "password" > pass.txt
root@node01:~# ceph dashboard ac-user-create serverworld -i pass.txt administrator
{"username": "serverworld", "password": "$2b$12$3A6Ls7qAuqVjrxaO2qx7LOJV0TxNWQ4bpfdxtB3aYiIRKJYETKgTe", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1687150991, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": false}
# confirm Dashboard URL
root@node01:~# ceph mgr services
{
"dashboard": "https://10.0.0.51:8443/"
}
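If the default bind address or port needs to change, the Dashboard settings live under the [mgr] configuration. The following is a hedged sketch; the values below are assumptions for this environment, not required by the setup above.
# optional: bind the Dashboard to a specific address/port, then reload the module
root@node01:~# ceph config set mgr mgr/dashboard/server_addr 10.0.0.51
root@node01:~# ceph config set mgr mgr/dashboard/ssl_server_port 8443
root@node01:~# ceph mgr module disable dashboard
root@node01:~# ceph mgr module enable dashboard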
Step [2] Access the Dashboard URL from a client computer with a web browser, and the Ceph Dashboard login form is shown. Log in as the user you added in step [1]. After login, you can see the various statuses of the Ceph cluster.
Add or Remove OSDs
This is how to add or remove OSDs in an existing cluster.
+----------------------+           |           +------------------------+
|  [dlp.bizantum.lab]  |10.0.0.30  |  10.0.0.31|   [www.bizantum.lab]   |
|      Ceph Client     +-----------+-----------+         RADOSGW        |
|                      |           |           |                        |
+----------------------+           |           +------------------------+
            +----------------------+-------+------------------------------+
            |                              |                              |
            |10.0.0.51                     |10.0.0.52                     |10.0.0.53
+-----------+-------------+    +-----------+-------------+    +-----------+-------------+
|  [node01.bizantum.lab]  |    |  [node02.bizantum.lab]  |    |  [node03.bizantum.lab]  |
|     Object Storage      +----+     Object Storage      +----+     Object Storage      |
|     Monitor Daemon      |    |                         |    |                         |
|     Manager Daemon      |    |                         |    |                         |
+-------------------------+    +-------------------------+    +-------------------------+
Step [1] For example, add a [node04] node as an OSD from the Admin Node. The new [node04] node uses [/dev/sdb] as its block device in this example.
# transfer public key
root@node01:~# ssh-copy-id node04
# install required packages
root@node01:~# ssh node04 "apt update; apt -y install ceph"
# transfer required files
root@node01:~# scp /etc/ceph/ceph.conf node04:/etc/ceph/ceph.conf
root@node01:~# scp /etc/ceph/ceph.client.admin.keyring node04:/etc/ceph
root@node01:~# scp /var/lib/ceph/bootstrap-osd/ceph.keyring node04:/var/lib/ceph/bootstrap-osd
# configure OSD
root@node01:~# ssh node04 \
"chown ceph:ceph /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*; \
parted --script /dev/sdb 'mklabel gpt'; \
parted --script /dev/sdb 'mkpart primary 0% 100%'; \
ceph-volume lvm create --data /dev/sdb1"
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 80de1be1-a7bd-456a-bd31-9e3c6d561659
Running command: vgcreate --force --yes ceph-d8acd8eb-5414-4fb3-b462-e8ac22dd7c63 /dev/sdb1
stdout: Physical volume "/dev/sdb1" successfully created.
stdout: Volume group "ceph-d8acd8eb-5414-4fb3-b462-e8ac22dd7c63" successfully created
Running command: lvcreate --yes -l 40959 -n osd-block-80de1be1-a7bd-456a-bd31-9e3c6d561659 ceph-d8acd8eb-5414-4fb3-b462-e8ac22dd7c63
stdout: Logical volume "osd-block-80de1be1-a7bd-456a-bd31-9e3c6d561659" created.
.....
.....
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-80de1be1-a7bd-456a-bd31-9e3c6d561659.service → /lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@3
stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service → /lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@3
--> ceph-volume lvm activate successful for osd ID: 3
--> ceph-volume lvm create successful for: /dev/sdb1
# after a few minutes, it's OK if [HEALTH_OK]
root@node01:~# ceph -s
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_OK
services:
mon: 1 daemons, quorum node01 (age 5m)
mgr: node01(active, since 16s)
mds: 1/1 daemons up
osd: 4 osds: 4 up (since 4m), 4 in (since 11m)
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 8 pools, 225 pgs
objects: 247 objects, 14 KiB
usage: 67 MiB used, 640 GiB / 640 GiB avail
pgs: 225 active+clean
Step [2] To remove an OSD node from an existing cluster, run commands like the following. For example, remove the [node04] node.
root@node01:~# ceph -s
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_OK
services:
mon: 1 daemons, quorum node01 (age 5m)
mgr: node01(active, since 16s)
mds: 1/1 daemons up
osd: 4 osds: 4 up (since 4m), 4 in (since 11m)
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 8 pools, 225 pgs
objects: 247 objects, 14 KiB
usage: 67 MiB used, 640 GiB / 640 GiB avail
pgs: 225 active+clean
root@node01:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.62476 root default
-3 0.15619 host node01
0 hdd 0.15619 osd.0 up 1.00000 1.00000
-5 0.15619 host node02
1 hdd 0.15619 osd.1 up 1.00000 1.00000
-7 0.15619 host node03
2 hdd 0.15619 osd.2 up 1.00000 1.00000
-9 0.15619 host node04
3 hdd 0.15619 osd.3 up 1.00000 1.00000
# specify OSD ID of a node you'd like to remove
root@node01:~# ceph osd out 3
marked out osd.3.
# live watch cluster status
# after running [ceph osd out ***], rebalancing is executed automatically
# to quit live watch, push [Ctrl + c]
root@node01:~# ceph -w
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_WARN
Degraded data redundancy: 127/741 objects degraded (17.139%), 26 pgs degraded
services:
mon: 1 daemons, quorum node01 (age 6m)
mgr: node01(active, since 89s)
mds: 1/1 daemons up
osd: 4 osds: 4 up (since 5m), 3 in (since 16s); 2 remapped pgs
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 8 pools, 225 pgs
objects: 247 objects, 14 KiB
usage: 58 MiB used, 480 GiB / 480 GiB avail
pgs: 127/741 objects degraded (17.139%)
2/741 objects misplaced (0.270%)
194 active+clean
24 active+recovery_wait+degraded
4 active+recovery_wait
2 active+recovery_wait+undersized+degraded+remapped
1 active+recovering
io:
recovery: 119 B/s, 0 keys/s, 5 objects/s
progress:
2023-06-19T00:39:12.810651-0500 mon.node01 [WRN] Health check update: Degraded data redundancy: 61/741 objects degraded (8.232%), 14 pgs degraded (PG_DEGRADED)
.....
.....
# after status turns to [HEALTH_OK], disable OSD service on the target node
root@node01:~# ssh node04 "systemctl disable --now ceph-osd@3.service"
# remove the node by specifying the target OSD ID
root@node01:~# ceph osd purge 3 --yes-i-really-mean-it
purged osd.3
root@node01:~# ceph -s
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_OK
services:
mon: 1 daemons, quorum node01 (age 61s)
mgr: node01(active, since 33s)
mds: 1/1 daemons up
osd: 3 osds: 3 up (since 24s), 3 in (since 5m)
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 8 pools, 225 pgs
objects: 247 objects, 15 KiB
usage: 67 MiB used, 480 GiB / 480 GiB avail
pgs: 225 active+clean
io:
client: 3.9 KiB/s rd, 716 B/s wr, 3 op/s rd, 0 op/s wr
Add or Remove Monitor Nodes
This is how to add or remove Monitor Daemons in an existing cluster.
+----------------------+           |           +------------------------+
|  [dlp.bizantum.lab]  |10.0.0.30  |  10.0.0.31|   [www.bizantum.lab]   |
|      Ceph Client     +-----------+-----------+         RADOSGW        |
|                      |           |           |                        |
+----------------------+           |           +------------------------+
            +----------------------+-------+------------------------------+
            |                              |                              |
            |10.0.0.51                     |10.0.0.52                     |10.0.0.53
+-----------+-------------+    +-----------+-------------+    +-----------+-------------+
|  [node01.bizantum.lab]  |    |  [node02.bizantum.lab]  |    |  [node03.bizantum.lab]  |
|     Object Storage      +----+     Object Storage      +----+     Object Storage      |
|     Monitor Daemon      |    |                         |    |                         |
|     Manager Daemon      |    |                         |    |                         |
+-------------------------+    +-------------------------+    +-------------------------+
Step [1] For example, add a [node04] node as a Monitor Daemon from the Admin Node.
# transfer public key
root@node01:~# ssh-copy-id node04
# install required packages
root@node01:~# ssh node04 "apt update; apt -y install ceph"
# configure monitor map
root@node01:~# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk {'print $NF'})
root@node01:~# NODENAME="node04"
root@node01:~# NODEIP="10.0.0.54"
root@node01:~# monmaptool --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to f6eabaad-6442-481b-bfb1-0bb79de773e3
monmaptool: writing epoch 0 to /etc/ceph/monmap (2 monitors)
# configure Monitor Daemon
root@node01:~# scp /etc/ceph/ceph.conf node04:/etc/ceph/ceph.conf
root@node01:~# scp /etc/ceph/ceph.mon.keyring node04:/etc/ceph
root@node01:~# scp /etc/ceph/monmap node04:/etc/ceph
root@node01:~# ssh node04 "ceph-mon --cluster ceph --mkfs -i node04 --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring"
root@node01:~# ssh node04 "chown -R ceph:ceph /etc/ceph /var/lib/ceph/mon"
root@node01:~# ssh node04 "ceph auth get mon. -o /etc/ceph/ceph.mon.keyring"
root@node01:~# ssh node04 "systemctl enable --now ceph-mon@node04"
root@node01:~# ssh node04 "ceph mon enable-msgr2"
root@node01:~# ceph -s
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_OK
services:
mon: 2 daemons, quorum node01,node04 (age 7s)
mgr: node01(active, since 4m)
mds: 1/1 daemons up
osd: 3 osds: 3 up (since 4m), 3 in (since 9m)
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 8 pools, 225 pgs
objects: 247 objects, 16 KiB
usage: 72 MiB used, 480 GiB / 480 GiB avail
pgs: 225 active+clean
Step [2] To remove a Monitor Daemon from an existing cluster, run commands like the following. For example, remove the [node04] node.
root@node01:~# ceph -s
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_OK
services:
mon: 2 daemons, quorum node01,node04 (age 7s)
mgr: node01(active, since 4m)
mds: 1/1 daemons up
osd: 3 osds: 3 up (since 4m), 3 in (since 9m)
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 8 pools, 225 pgs
objects: 247 objects, 16 KiB
usage: 72 MiB used, 480 GiB / 480 GiB avail
pgs: 225 active+clean
# remove Monitor Daemon
root@node01:~# ceph mon remove node04
# disable monitor daemon
root@node01:~# ssh node04 "systemctl disable --now ceph-mon@node04.service"
root@node01:~# ceph -s
cluster:
id: f6eabaad-6442-481b-bfb1-0bb79de773e3
health: HEALTH_OK
services:
mon: 1 daemons, quorum node01 (age 20s)
mgr: node01(active, since 6m)
mds: 1/1 daemons up
osd: 3 osds: 3 up (since 5m), 3 in (since 11m)
rgw: 1 daemon active (1 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 8 pools, 225 pgs
objects: 247 objects, 16 KiB
usage: 72 MiB used, 480 GiB / 480 GiB avail
pgs: 225 active+clean
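After the daemon has been removed from the monitor map, the now-unused data directory on [node04] can also be cleaned up. This is an optional sketch; the path follows the (Cluster Name)-(Node Name) convention used above.
# optional cleanup on the removed monitor node
root@node01:~# ssh node04 "rm -rf /var/lib/ceph/mon/ceph-node04"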