Introduction
Fedora 40 brings an exciting release, featuring GlusterFS 11, an advanced distributed file system known for its scalability and performance. GlusterFS 11 allows for seamless file storage across multiple servers, making it an excellent choice for large-scale data management and storage solutions.
Overview
What
Fedora 40 introduces GlusterFS 11, a robust distributed file system designed for handling large amounts of data by distributing it across multiple servers. This system is perfect for environments that require scalable, flexible, and efficient storage solutions.
Who
GlusterFS 11 is developed and maintained by the Gluster community, an open-source project sponsored by Red Hat and packaged for Fedora by the Fedora Project. It is primarily used by organizations and enterprises that require scalable storage solutions.
Where
GlusterFS 11 can be implemented in data centers, cloud environments, and on-premises infrastructures where there is a need for high-availability and high-performance storage systems. It is particularly useful in large-scale data environments such as enterprises, research institutions, and hosting providers.
When
Fedora 40, featuring GlusterFS 11, was released in April 2024 as part of Fedora's regular six-month release cycle. This ensures users have access to the latest technologies and improvements in the field of distributed storage.
Why
GlusterFS 11 offers several advantages for users needing a reliable distributed storage solution:
| Pros | Cons |
|---|---|
| High scalability and flexibility | Complex setup and management |
| Enhanced performance for large data sets | Potential latency issues in certain configurations |
| Open-source and community-supported | Requires regular updates and maintenance |
| Cost-effective as it utilizes commodity hardware | Relatively poor performance with many small files |
How
Implementing GlusterFS 11 involves several steps, including installation, configuration, and ongoing management:
| Phase | Tasks |
|---|---|
| Installation | Install GlusterFS packages on all nodes, configure firewall settings, and set up trusted storage pools. |
| Configuration | Create and configure volumes, replicate data across nodes, and optimize performance settings. |
| Management | Monitor performance, handle volume expansions, and perform regular updates and maintenance. |
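For orientation, the core of this workflow is only a handful of commands. The sketch below uses the node names and brick path assumed throughout this guide (node01, node02, and a brick directory under [/glusterfs]); each step is covered in detail in the sections that follow.
# on every node: install the server and start the management daemon
[root@node01 ~]# dnf -y install glusterfs-server
[root@node01 ~]# systemctl enable --now glusterd
# on one node: build the trusted storage pool, then create and start a volume
[root@node01 ~]# gluster peer probe node02
[root@node01 ~]# gluster volume create vol_distributed transport tcp \
node01:/glusterfs/distributed node02:/glusterfs/distributed
[root@node01 ~]# gluster volume start vol_distributed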
Consequences
Implementing GlusterFS 11 in Fedora 40 can lead to several consequences. On the positive side, it delivers scalable, cost-effective storage on commodity hardware with strong community support; on the negative side, it adds operational complexity and ongoing maintenance overhead, and some workloads may require careful performance tuning.
Conclusion
Fedora 40's inclusion of GlusterFS 11 highlights the ongoing commitment to providing advanced and scalable storage solutions. This release is ideal for organizations that need efficient and cost-effective ways to manage large volumes of data. With its combination of flexibility, performance, and community support, GlusterFS 11 is poised to meet the evolving needs of modern data environments.
Install GlusterFS 11
Install GlusterFS to configure a storage cluster. It is strongly recommended to use partitions for GlusterFS volumes that are separate from the / partition. In this example, every node has a dedicated partition [sdb1] mounted at [/glusterfs].
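GlusterFS does not prepare the brick file system itself. The commands below are a minimal sketch of formatting and mounting such a partition, assuming a spare disk [sdb] with a single partition [sdb1] and an XFS file system; adjust the device name to your environment.
# format the brick partition and mount it at [/glusterfs], also at boot
[root@node01 ~]# mkfs.xfs /dev/sdb1
[root@node01 ~]# mkdir -p /glusterfs
[root@node01 ~]# mount /dev/sdb1 /glusterfs
[root@node01 ~]# echo '/dev/sdb1 /glusterfs xfs defaults 0 0' >> /etc/fstab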
Step [1] Install the GlusterFS server on all nodes in the cluster.
[root@node01 ~]# dnf -y install glusterfs-server
[root@node01 ~]# systemctl enable --now glusterd
[root@node01 ~]# gluster --version
glusterfs 11.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
Step [2] If Firewalld is running, allow the GlusterFS service on all nodes.
[root@node01 ~]# firewall-cmd --add-service=glusterfs
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
That's all for the installation. Refer to the next section to configure the cluster.
Distributed Configuration
Configure storage clustering with GlusterFS. For example, create a distributed volume with 2 nodes. This example uses 2 nodes, but it is possible to use 3 or more.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|  node01.bizantum.lab +----------+----------+  node02.bizantum.lab |
|                      |                     |                      |
+----------------------+                     +----------------------+
          ⇑                                             ⇑
   file1, file3 ...                              file2, file4 ...
It is strongly recommended to use partitions for GlusterFS volumes that are separate from the / partition. In this example, every node has a dedicated partition [sdb1] mounted at [/glusterfs].
Step [1] Install the GlusterFS server on all nodes, as described in the [Install GlusterFS 11] section above.
Step [2] Create a directory for the GlusterFS volume on all nodes.
[root@node01 ~]# mkdir -p /glusterfs/distributed
Step [3] Configure clustering as follows on one node. (Any node is fine.)
# probe nodes
[root@node01 ~]# gluster peer probe node02
peer probe: success.
# confirm status
[root@node01 ~]# gluster peer status
Number of Peers: 1
Hostname: node02
Uuid: 16ab2132-e7d2-47aa-96b6-f210b1bc74f0
State: Peer in Cluster (Connected)
# create volume
[root@node01 ~]# gluster volume create vol_distributed transport tcp \
node01:/glusterfs/distributed \
node02:/glusterfs/distributed
volume create: vol_distributed: success: please start the volume to access data
# start volume
[root@node01 ~]# gluster volume start vol_distributed
volume start: vol_distributed: success
# confirm volume info
[root@node01 ~]# gluster volume info
Volume Name: vol_distributed
Type: Distribute
Volume ID: 8adaf69f-eb06-432a-a053-6219f8cbc1e7
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
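Optionally, the state of the brick processes can also be checked; the command below is standard GlusterFS tooling, and its output (omitted here) lists each brick with its port and PID.
[root@node01 ~]# gluster volume status vol_distributed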
Step [4] To mount the GlusterFS volume on client hosts, refer to the [GlusterFS Client] section below.
GlusterFS Client
Configure GlusterFS Client to mount GlusterFS volumes.
Step [1] Install the GlusterFS client packages and mount the volume.
[root@client ~]# dnf -y install glusterfs glusterfs-fuse
# any node in the cluster can be specified as the mount target
[root@client ~]# mount -t glusterfs node01.bizantum.lab:/vol_distributed /mnt
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/fedora-root xfs 15G 1.9G 14G 13% /
devtmpfs devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 782M 1.1M 781M 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /tmp
/dev/vda2 xfs 960M 344M 617M 36% /boot
tmpfs tmpfs 391M 4.0K 391M 1% /run/user/0
node01.bizantum.lab:/vol_distributed fuse.glusterfs 30G 4.1G 26G 14% /mnt
# verify reading and writing
[root@client ~]# echo "Gluster write test" > /mnt/testfile.txt
[root@client ~]# cat /mnt/testfile.txt
Gluster write test
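To remount the volume automatically at boot, an [/etc/fstab] entry like the sketch below can be added on the client; the [_netdev] option makes the mount wait for the network, matching the fstab line the CTDB hook script generates later in this guide.
node01.bizantum.lab:/vol_distributed /mnt glusterfs defaults,_netdev 0 0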
GlusterFS + NFS-Ganesha
Install NFS-Ganesha and integrate it with GlusterFS to mount a Gluster volume over the NFS protocol. NFS-Ganesha supports NFS v3, v4.0, v4.1, and pNFS. For example, configure an NFS export for the Gluster volume [vol_distributed] created in the Distributed Configuration section above.
Step [1] If the kernel NFS server is running, stop and disable it.
[root@node01 ~]# systemctl disable --now nfs-server
Step [2] Install and configure NFS-Ganesha on a node in the GlusterFS cluster.
[root@node01 ~]# dnf -y install nfs-ganesha-gluster
[root@node01 ~]# mv /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.org
[root@node01 ~]# vi /etc/ganesha/ganesha.conf
# create new
NFS_CORE_PARAM {
# allow mounting the NFSv4 Pseudo path with NFSv3 as well
mount_path_pseudo = true;
# NFS protocol
Protocols = 3,4;
}
EXPORT_DEFAULTS {
# default access mode
Access_Type = RW;
}
EXPORT {
# unique ID
Export_Id = 101;
# mount path of Gluster Volume
Path = "/vol_distributed";
FSAL {
# FSAL name (GLUSTER for the Gluster backend)
name = GLUSTER;
# hostname or IP address of this Node
hostname="10.0.0.51";
# Gluster volume name
volume="vol_distributed";
}
# config for root Squash
Squash="No_root_squash";
# NFSv4 Pseudo path
Pseudo="/vfs_distributed";
# allowed security options
SecType = "sys";
}
LOG {
# default log level
Default_Log_Level = WARN;
}
[root@node01 ~]# systemctl enable --now nfs-ganesha
# verify mount
[root@node01 ~]# showmount -e localhost
Export list for localhost:
/vfs_distributed (everyone)
Step [3] If Firewalld is running, allow the NFS service on the NFS-Ganesha node.
[root@node01 ~]# firewall-cmd --add-service=nfs
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
Step [4] Verify NFS mounting on a client host.
[root@client ~]# dnf -y install nfs-utils
# specify Pseudo path set on [Pseudo=***] in ganesha.conf
[root@client ~]# mount -t nfs4 node01.bizantum.lab:/vfs_distributed /mnt
[root@client ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/fedora-root xfs 15G 1.9G 14G 13% /
devtmpfs devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 782M 1.1M 781M 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /tmp
/dev/vda2 xfs 960M 344M 617M 36% /boot
tmpfs tmpfs 391M 4.0K 391M 1% /run/user/0
node01.bizantum.lab:/vfs_distributed nfs4 30G 4.1G 26G 14% /mnt
# verify reading and writing
[root@client ~]# echo "Gluster NFS write test" > /mnt/testfile.txt
[root@client ~]# cat /mnt/testfile.txt
Gluster NFS write test
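As with the native client, the NFS mount can be made persistent with an [/etc/fstab] entry such as the sketch below; the path is the [Pseudo] path defined in ganesha.conf.
node01.bizantum.lab:/vfs_distributed /mnt nfs4 defaults,_netdev 0 0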
GlusterFS + SMB
Configure a GlusterFS volume to enable the SMB protocol. For example, configure SMB settings for the Gluster volume [vol_distributed] created in the Distributed Configuration section above.
Step [1] Configure GlusterFS to enable the SMB settings on a node in the GlusterFS cluster.
[root@node01 ~]# dnf -y install samba ctdb samba-vfs-glusterfs
# stop the target Gluster volume and change settings
[root@node01 ~]# gluster volume stop vol_distributed
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol_distributed: success
[root@node01 ~]# gluster volume set vol_distributed user.smb enable
volume set: success
[root@node01 ~]# gluster volume set vol_distributed performance.write-behind off
volume set: success
[root@node01 ~]# gluster volume set vol_distributed group samba
volume set: success
[root@node01 ~]# vi /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
# line 25 : change to the target Gluster volume name
META="vol_distributed"
[root@node01 ~]# vi /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
# line 13 : change to the target Gluster volume name
META="vol_distributed"
# start Gluster volume
[root@node01 ~]# gluster volume start vol_distributed
volume start: vol_distributed: success
# with the settings above, the following mount is created automatically
[root@node01 ~]# df -h /gluster/lock
Filesystem Size Used Avail Use% Mounted on
node01.bizantum.lab:/vol_distributed.tcp 30G 4.1G 26G 14% /gluster/lock
[root@node01 ~]# tail -1 /etc/fstab
node01.bizantum.lab:/vol_distributed /gluster/lock glusterfs _netdev,transport=tcp,xlator-option=*client*.ping-timeout=10 0 0
[root@node01 ~]# vi /etc/ctdb/nodes
# create new
# list all nodes that make up the target Gluster volume
10.0.0.51
10.0.0.52
[root@node01 ~]# vi /etc/ctdb/public_addresses
# create new
# set a virtual IP address for SMB access
# [enp1s0] is the network interface name ⇒ replace it to match your environment
10.0.0.59/24 enp1s0
[root@node01 ~]# systemctl enable --now ctdb
# confirm status
[root@node01 ~]# ctdb status
Number of nodes:2
pnn:0 10.0.0.51 OK (THIS NODE)
pnn:1 10.0.0.52 DISCONNECTED|UNHEALTHY|INACTIVE
Generation:1656904014
Size:1
hash:0 lmaster:0
Recovery mode:NORMAL (0)
Leader:0
[root@node01 ~]# ctdb ip
Public IPs on node 0
10.0.0.59 0
Step [2] Configure Samba. For example, create a shared folder [smbshare] that only users in the [smbgroup] group can access, with user authentication required.
# mount Gluster volume with GlusterFS Native and create a shared folder for SMB access
[root@node01 ~]# mount -t glusterfs node01.bizantum.lab:/vol_distributed /mnt
[root@node01 ~]# mkdir /mnt/smbshare
[root@node01 ~]# groupadd smbgroup
[root@node01 ~]# chgrp smbgroup /mnt/smbshare
[root@node01 ~]# chmod 770 /mnt/smbshare
[root@node01 ~]# umount /mnt
[root@node01 ~]# vi /etc/samba/smb.conf
[global]
workgroup = SAMBA
security = user
passdb backend = tdbsam
printing = cups
printcap name = cups
load printers = yes
cups options = raw
# add the following
clustering = yes
kernel share modes = no
kernel oplocks = no
map archive = no
map hidden = no
map read only = no
map system = no
store dos attributes = yes
# following 9 lines are configured automatically
[gluster-vol_distributed]
comment = For samba share of volume vol_distributed
vfs objects = glusterfs
glusterfs:volume = vol_distributed
glusterfs:logfile = /var/log/samba/glusterfs-vol_distributed.%M.log
glusterfs:loglevel = 7
path = /
read only = no
kernel share modes = no
# add the following
writable = yes
valid users = @smbgroup
force group = smbgroup
force create mode = 770
force directory mode = 770
inherit permissions = yes
[root@node01 ~]# systemctl enable --now smb
# add Samba user
[root@node01 ~]# useradd fedora
[root@node01 ~]# smbpasswd -a fedora
New SMB password: # set any SMB password
Retype new SMB password:
Added user fedora.
[root@node01 ~]# usermod -aG smbgroup fedora
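Before connecting from a client, the Samba configuration syntax can be validated with [testparm], which is included in the samba package; it parses smb.conf and reports any errors (output omitted here).
[root@node01 ~]# testparm -s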
Step [3] If SELinux is enabled, adjust the policy.
[root@node01 ~]# setsebool -P use_fusefs_home_dirs on
[root@node01 ~]# setsebool -P samba_load_libgfapi on
[root@node01 ~]# setsebool -P domain_kernel_load_modules on
Step [4] If Firewalld is running, allow the required services.
[root@node01 ~]# firewall-cmd --add-service={samba,ctdb}
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
Step [5] Verify that the target share can be accessed over SMB from a client computer. The example below uses a Linux client, but Windows clients can connect in the usual way.
# verify with [smbclient]
[root@client ~]# smbclient //node01.bizantum.lab/gluster-vol_distributed -U fedora
Password for [SAMBA\fedora]:
Try "help" to get a list of possible commands.
# move to the shared folder and verify that it is writable
smb: \> cd smbshare
smb: \smbshare\> mkdir testdir
smb: \smbshare\> ls
. D 0 Thu Apr 25 16:01:44 2024
.. D 0 Thu Apr 25 15:59:02 2024
testdir D 0 Thu Apr 25 16:01:44 2024
31326208 blocks of size 1024. 26952828 blocks available
smb: \smbshare\> exit
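Besides [smbclient], a Linux client can also mount the share, for example over the CTDB virtual IP address (10.0.0.59) configured earlier. The sketch below assumes the [cifs-utils] package provides mount.cifs and uses the [fedora] SMB user created above.
[root@client ~]# dnf -y install cifs-utils
# enter the SMB password set above when prompted
[root@client ~]# mount -t cifs //10.0.0.59/gluster-vol_distributed /mnt -o username=fedora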
Add Nodes (Bricks)
Add nodes (bricks) to an existing cluster. For example, add a node [node03] to the existing cluster as follows.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|  node01.bizantum.lab +----------+----------+  node02.bizantum.lab |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
          ⇑                       |                     ⇑
   file1, file3 ...               |              file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|  node03.bizantum.lab +----------+
|                      |
+----------------------+
Step [1] Install GlusterFS on the new node as described in the [Install GlusterFS 11] section above, and then create a directory for the GlusterFS volume at the same path as on the other nodes.
Step [2] Add the new node to the existing cluster from any existing node.
# probe new node
[root@node01 ~]# gluster peer probe node03
peer probe: success.
# confirm status
[root@node01 ~]# gluster peer status
Number of Peers: 2
Hostname: node02
Uuid: 16ab2132-e7d2-47aa-96b6-f210b1bc74f0
State: Peer in Cluster (Connected)
Hostname: node03
Uuid: e0808df8-113d-44fc-97df-713209874cbf
State: Peer in Cluster (Connected)
# confirm existing volume
[root@node01 ~]# gluster volume info
Volume Name: vol_distributed
Type: Distribute
Volume ID: c64af7b5-edc1-42ce-9502-0aa3bbff445b
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# add the brick on the new node to the volume
[root@node01 ~]# gluster volume add-brick vol_distributed node03:/glusterfs/distributed
volume add-brick: success
# confirm volume info
[root@node01 ~]# gluster volume info
Volume Name: vol_distributed
Type: Distribute
Volume ID: c64af7b5-edc1-42ce-9502-0aa3bbff445b
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/distributed
Brick2: node02:/glusterfs/distributed
Brick3: node03:/glusterfs/distributed
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# after adding the new brick, rebalance the volume
[root@node01 ~]# gluster volume rebalance vol_distributed fix-layout start
volume rebalance: vol_distributed: success: Rebalance on vol_distributed has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: e4d823f6-7397-4a95-9af8-296c6af9f52d
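As the output above suggests, the progress of the rebalance can be checked with the status subcommand (output omitted here); a full data rebalance, rather than only a layout fix, can be started by omitting [fix-layout].
[root@node01 ~]# gluster volume rebalance vol_distributed status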
Remove Nodes (Bricks)
Remove nodes (bricks) from an existing cluster. For example, remove the node [node03] from the existing cluster as follows.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|  node01.bizantum.lab +----------+----------+  node02.bizantum.lab |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
          ⇑                       |                     ⇑
   file1, file3 ...               |              file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|  node03.bizantum.lab +----------+
|                      |
+----------------------+
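The steps below are a minimal sketch of the usual workflow for shrinking a distributed volume: data is first migrated off the brick being removed, the removal is then committed, and finally the node is detached from the trusted storage pool. Command output varies by environment and is omitted here.
Step [1] Remove the brick from the volume on any existing node. The removal must be started, monitored until data migration completes, and then committed.
# start migrating data off the brick that will be removed
[root@node01 ~]# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed start
# check migration progress and wait until the status shows completed
[root@node01 ~]# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed status
# commit the removal once migration has completed
[root@node01 ~]# gluster volume remove-brick vol_distributed node03:/glusterfs/distributed commit
Step [2] Detach the removed node from the trusted storage pool and confirm the result.
[root@node01 ~]# gluster peer detach node03
# confirm the remaining peers and bricks
[root@node01 ~]# gluster peer status
[root@node01 ~]# gluster volume info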
Replication Configuration
Configure storage clustering with GlusterFS. For example, create a replicated volume with 3 nodes. It is possible to create a replicated volume with 2 nodes, but this is not recommended because split-brain can occur on a [replica 2] volume. As a countermeasure against split-brain, create the volume with 3 or more nodes or use an [arbiter] volume, as sketched at the end of this section.
+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|  node01.bizantum.lab +----------+----------+  node02.bizantum.lab |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|  node03.bizantum.lab +----------+
|                      |
+----------------------+
It is strongly recommended to use partitions for GlusterFS volumes that are separate from the / partition. In this example, every node has a dedicated partition [sdb1] mounted at [/glusterfs].
Step [1] Install the GlusterFS server on all nodes, as described in the [Install GlusterFS 11] section above.
Step [2] Create a directory for the GlusterFS volume on all nodes.
[root@node01 ~]# mkdir -p /glusterfs/replica
Step [3] Configure clustering as follows on one node. (Any node is fine.)
# probe the nodes
[root@node01 ~]# gluster peer probe node02
peer probe: success.
[root@node01 ~]# gluster peer probe node03
peer probe: success.
# confirm status
[root@node01 ~]# gluster peer status
Number of Peers: 2
Hostname: node02
Uuid: 16ab2132-e7d2-47aa-96b6-f210b1bc74f0
State: Peer in Cluster (Connected)
Hostname: node03
Uuid: e0808df8-113d-44fc-97df-713209874cbf
State: Peer in Cluster (Connected)
# create volume
[root@node01 ~]# gluster volume create vol_replica replica 3 transport tcp \
node01:/glusterfs/replica \
node02:/glusterfs/replica \
node03:/glusterfs/replica
volume create: vol_replica: success: please start the volume to access data
# start volume
[root@node01 ~]# gluster volume start vol_replica
volume start: vol_replica: success
# confirm volume info
[root@node01 ~]# gluster volume info
Volume Name: vol_replica
Type: Replicate
Volume ID: 4d6a977f-4e29-4a42-a0bf-763380305b18
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/glusterfs/replica
Brick2: node02:/glusterfs/replica
Brick3: node03:/glusterfs/replica
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Step [4] To mount the GlusterFS volume on client hosts, refer to the [GlusterFS Client] section above.
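As noted at the start of this section, an arbiter brick is an alternative to a full three-way replica: the third brick stores only metadata, which is enough to prevent split-brain while using far less space. The commands below are a minimal sketch using a hypothetical [/glusterfs/arbiter] directory on every node; the last brick listed becomes the arbiter.
# create the brick directory on all nodes
[root@node01 ~]# mkdir -p /glusterfs/arbiter
[root@node01 ~]# gluster volume create vol_arbiter replica 3 arbiter 1 transport tcp \
node01:/glusterfs/arbiter \
node02:/glusterfs/arbiter \
node03:/glusterfs/arbiter
[root@node01 ~]# gluster volume start vol_arbiter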