Build GlusterFS on CentOS 7
Lab requirements:
· Four machines with GlusterFS installed form a cluster
· The client stores a Docker registry on the GlusterFS file system
· The disk space of the four nodes is not pooled into one large volume; instead, each node keeps a full copy of the data to ensure data safety.
Environment Planning
Server
Node1: 192.168.0.165 Host Name: glusterfs1
Node2: 192.168.0.157 Host Name: glusterfs2
Node3: 192.168.0.166 Host Name: glusterfs3
Node4: 192.168.0.150 Host Name: glusterfs4
Client
192.168.0.164 Host Name: master3
Prerequisites
· Disable the firewall and SELinux on all hosts (example commands follow the hosts entries below)
· Add the following entries to /etc/hosts on every machine so the hosts can resolve each other:
192.168.0.165 glusterfs1
192.168.0.157 glusterfs2
192.168.0.166 glusterfs3
192.168.0.150 glusterfs4
192.168.0.164 master3
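For example, on CentOS 7 one way to do this on every host (setenforce 0 applies the SELinux change immediately; the sed line makes it permanent across reboots):
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config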
Install
Server
1. Install the GlusterFS packages on the glusterfs{1-4} nodes
# wget -P /etc/yum.repos.d
# yum install -y glusterfs glusterfs-server glusterfs-fuse
# service glusterd start
# chkconfig glusterd on
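To verify the installation and confirm the daemon is up, the standard checks are:
# glusterfs --version
# systemctl status glusterd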
2. Configure the entire GlusterFS cluster on the glusterfs1 node and add each node to the cluster.
[root@glusterfs1 ~]# gluster peer probe glusterfs1
peer probe: success: on localhost not needed
[root@glusterfs1 ~]# gluster peer probe glusterfs2
peer probe: success
[root@glusterfs1 ~]# gluster peer probe glusterfs3
peer probe: success
[root@glusterfs1 ~]# gluster peer probe glusterfs4
peer probe: success
3. View node status
[root@glusterfs1 ~]# gluster peer status
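With all probes successful, the output should look roughly like this (one block per peer; glusterfs3 and glusterfs4 appear the same way, and <peer-uuid> stands in for the real UUID):

Number of Peers: 3

Hostname: glusterfs2
Uuid: <peer-uuid>
State: Peer in Cluster (Connected)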
4. Create a data storage directory on glusterfs{1-4}
# mkdir -p /usr/local/share/models
5. Create a GlusterFS volume on glusterfs1
Note:
replica 4 means each of the four nodes stores a full copy of the data: four replicas in total, one per node.
Without replica 4, the volume would be distributed instead: the disk space of the four nodes would be pooled into one large volume, with only a single copy of each file.
[root@glusterfs1 ~]# gluster volume create models replica 4 glusterfs1:/usr/local/share/models glusterfs2:/usr/local/share/models glusterfs3:/usr/local/share/models glusterfs4:/usr/local/share/models force
volume create: models: success: please start the volume to access data
6. Start the volume
[root@glusterfs1 ~]# gluster volume start models
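To confirm the volume is replicated across all four bricks and running, check gluster volume info; the key fields (output trimmed here) should look like this:
[root@glusterfs1 ~]# gluster volume info models
Volume Name: models
Type: Replicate
Status: Started
Number of Bricks: 1 x 4 = 4
Bricks:
Brick1: glusterfs1:/usr/local/share/models
Brick2: glusterfs2:/usr/local/share/models
Brick3: glusterfs3:/usr/local/share/models
Brick4: glusterfs4:/usr/local/share/models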
Client
1. Deploy the GlusterFS client and mount the GlusterFS file system (see the note after the df output below for making the mount persistent)
[root@master3 ~]# wget -P /etc/yum.repos.d
[root@master3 ~]# yum install -y glusterfs glusterfs-fuse
[root@master3 ~]# mkdir -p /mnt/models
[root@master3 ~]# mount -t glusterfs -o ro glusterfs1:models /mnt/models/
2. View the result
[root@master3 ~]# df -h
Filesystem          Size  Used  Avail Use%  Mounted on
/dev/vda3           289G  5.6G  284G    2%  /
devtmpfs            3.9G     0  3.9G    0%  /dev
tmpfs               3.9G   80K  3.9G    1%  /dev/shm
tmpfs               3.9G  169M  3.7G    5%  /run
tmpfs               3.9G     0  3.9G    0%  /sys/fs/cgroup
/dev/vda1          1014M  128M  887M   13%  /boot
glusterfs1:models   189G  3.5G  186G    2%  /mnt/models
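The mount above does not survive a reboot. To make it persistent, an /etc/fstab entry along these lines can be used (_netdev delays the mount until the network is up; ro mirrors the read-only option used above):
glusterfs1:models  /mnt/models  glusterfs  defaults,ro,_netdev  0 0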
Other useful commands
Delete a GlusterFS volume
# gluster volume stop models    # stop the volume first
# gluster volume delete models  # then delete it
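Note that deleting the volume does not erase the data already written to the bricks. To reclaim the space, or to reuse the same path for a new volume (GlusterFS typically refuses to reuse a brick directory that still carries old volume metadata), remove the brick directory on each node:
# rm -rf /usr/local/share/models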
Detach a GlusterFS node
# gluster peer detach glusterfs4
Note: a node can only be detached after its bricks have been removed from every volume.
ACL Access Control
# gluster volume set models auth.allow 10.60.1.*,10.70.1.*
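To remove the restriction later, reset the option to its default:
# gluster volume reset models auth.allow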
Add a GlusterFS Node
# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
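On a distributed volume, files already stored are not spread onto the new bricks automatically; a rebalance takes care of that:
# gluster volume rebalance models start
# gluster volume rebalance models status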
Migrate GlusterFS data
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status    # wait until the status shows "completed"
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit
Repair GlusterFS data (when node sc2-log1 is down)
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full
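The heal runs in the background; its progress and any still-unsynced entries can be watched with:
# gluster volume heal models info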