Build a GlusterFS Cluster on CentOS 7
Lab requirements:
- Install GlusterFS on four machines to form a cluster
- The client stores a Docker registry on the GlusterFS file system.
- The disk space of the four nodes is not pooled into one large volume; instead, each node keeps a full copy of the data to ensure data safety (a 4-way replicated volume).
Environment Planning
Server
Node1: 192.168.0.165 Host Name: glusterfs1
Node2: 192.168.0.157 Host Name: glusterfs2
Node3: 192.168.0.166 Host Name: glusterfs3
Node4: 192.168.0.150 Host Name: glusterfs4
Client
192.168.0.164 Host Name: master3
Prerequisites
- Disable the firewall and SELinux on all hosts (see the commands after the hosts entries below)
- Add the following entries to /etc/hosts on every machine so the hosts can resolve one another:
192.168.0.165 glusterfs1
192.168.0.157 glusterfs2
192.168.0.166 glusterfs3
192.168.0.150 glusterfs4
192.168.0.164 master3
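A minimal sketch of the firewall/SELinux prerequisite on CentOS 7 (run on every machine; the sed edit assumes the stock /etc/selinux/config):
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config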
Install
Server
1. Install the GlusterFS packages on nodes glusterfs{1-4}
# wget -P /etc/yum.repos.d/
# yum install -y glusterfs-server glusterfs-fuse
# service glusterd start
# chkconfig glusterd on
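CentOS 7 uses systemd, so the service/chkconfig calls above are forwarded to systemd; the native equivalents are:
# systemctl start glusterd
# systemctl enable glusterd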
2. On the glusterfs1 node, configure the GlusterFS cluster by adding each node to the trusted storage pool.
[root@glusterfs1 ~]# gluster peer probe glusterfs1
peer probe: success: on localhost not needed
[root@glusterfs1 ~]# gluster peer probe glusterfs2
peer probe: success
[root@glusterfs1 ~]# gluster peer probe glusterfs3
peer probe: success
[root@glusterfs1 ~]# gluster peer probe glusterfs4
peer probe: success
3. View node status
[root@glusterfs1 ~]# gluster peer status
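As an additional check, gluster pool list prints the whole trusted pool, including localhost; all four nodes should show as Connected:
# gluster pool list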
4. Create a data storage directory on glusterfs{1-4}
# mkdir -p /usr/local/share/models
5. Create a GlusterFS volume on glusterfs1
Note:
replica 4 means each of the four nodes stores a full copy of the data: four replicas in total, one per node.
Without replica 4, the disk space of the four nodes would instead be pooled into one large distributed volume.
[root@glusterfs1 ~]# gluster volume create models replica 4 glusterfs1:/usr/local/share/models glusterfs2:/usr/local/share/models glusterfs3:/usr/local/share/models glusterfs4:/usr/local/share/models force
volume create: models: success: please start the volume to access data
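For contrast with the note above, a plain distributed volume that pools all four bricks would simply omit replica 4; a sketch with a hypothetical volume name and brick path (models-dist, not part of this lab):
# gluster volume create models-dist glusterfs1:/usr/local/share/models-dist glusterfs2:/usr/local/share/models-dist glusterfs3:/usr/local/share/models-dist glusterfs4:/usr/local/share/models-dist force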
6. Start the volume
[root@glusterfs1 ~]# gluster volume start models
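To verify the volume type, replica count, and brick list before mounting:
# gluster volume info models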
Client
1. Deploy the GlusterFS client and mount the GlusterFS file system
[root@master3 ~]# wget -P /etc/yum.repos.d/
[root@master3 ~]# yum install -y glusterfs-fuse
[root@master3 ~]# mkdir -p /mnt/models
[root@master3 ~]# mount -t glusterfs -o ro glusterfs1:models /mnt/models/
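To make the mount persistent across reboots, an /etc/fstab entry along these lines can be used (a sketch; _netdev delays the mount until the network is up, and ro matches the read-only mount above):
glusterfs1:models /mnt/models glusterfs defaults,_netdev,ro 0 0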
2. View the results
[root@master3 ~]# df -h
Filesystem         Size  Used  Avail Use% Mounted on
/dev/vda3          289G  5.6G  284G    2% /
devtmpfs           3.9G     0  3.9G    0% /dev
tmpfs              3.9G   80K  3.9G    1% /dev/shm
tmpfs              3.9G  169M  3.7G    5% /run
tmpfs              3.9G     0  3.9G    0% /sys/fs/cgroup
/dev/vda1         1014M  128M  887M   13% /boot
glusterfs1:models  189G  3.5G  186G    2% /mnt/models
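The lab requirement calls for storing a Docker registry on this file system. A minimal sketch of that final step (assumption: the volume is remounted read-write, since the registry must write to it; the registry:2 image keeps its data under /var/lib/registry):
# umount /mnt/models
# mount -t glusterfs glusterfs1:models /mnt/models/
# docker run -d -p 5000:5000 --name registry -v /mnt/models:/var/lib/registry registry:2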
Other operation commands
Delete a GlusterFS volume
# gluster volume stop models     (stop the volume first)
# gluster volume delete models   (then delete it)
Detach a GlusterFS node
# gluster peer detach glusterfs4
ACL Access Control
# gluster volume set models auth.allow 10.60.1.*,10.70.1.*
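To clear the restriction and return to the default (allow all clients), the option can be reset with the standard volume reset subcommand:
# gluster volume reset models auth.allow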
Add a GlusterFS node
(The sc2-log* host names in the examples below come from a different environment than the glusterfs{1-4} lab above.)
# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
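Adding bricks does not move existing data by itself; on a distributed volume, a rebalance spreads existing files across the new bricks:
# gluster volume rebalance models start
# gluster volume rebalance models status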
Migrate GlusterFS data
Run start, wait until status reports the data migration as completed, then commit:
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit
Repair GlusterFS data (when node sc2-log1 is down)
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full
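To monitor self-heal progress afterwards:
# gluster volume heal models info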