Environment Description:
Four machines with GlusterFS installed form a distributed replicated volume cluster.
Servers:
10.64.42.96
10.64.42.113
10.64.42.115
10.64.42.117
Client:
10.64.42.98
1. Preparatory work
Disable iptables and SELinux on all nodes.
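A minimal sketch of this step, assuming firewalld is the active firewall on CentOS 7 (on older systems, stop the iptables service instead):
systemctl stop firewalld
systemctl disable firewalld
setenforce 0                                                          # disable SELinux for the current session
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots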
2. Installing the GlusterFS server
Install GlusterFS on all 4 servers:
yum install centos-release-gluster
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
3. Start the service
systemctl start glusterd
Set it to start on boot:
systemctl enable glusterd
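Optionally, confirm the daemon is running on each node:
systemctl status glusterd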
4. Join the trusted storage pool
Execute gluster peer probe on any one node; this article runs it on 10.64.42.113:
gluster peer probe 10.64.42.115
gluster peer probe 10.64.42.117
gluster peer probe 10.64.42.96
View node information:
gluster peer status
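The pool membership, including the local node, can also be listed with:
gluster pool list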
5. Create a data store directory
Create the directory on all nodes:
mkdir -p /gluster/data
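In this walkthrough the brick directory sits on the root partition, which triggers the warning shown in the Attention section below. For production, the GlusterFS documentation recommends a dedicated XFS-formatted partition per brick; a minimal sketch, assuming a spare block device /dev/sdb1 (hypothetical device name):
mkfs.xfs -i size=512 /dev/sdb1   # XFS with 512-byte inodes, per the GlusterFS quick-start guide
mkdir -p /gluster
mount /dev/sdb1 /gluster
mkdir -p /gluster/data           # the brick directory now lives on the dedicated filesystem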
6. Create distributed replicated volumes
gluster volume create file-service replica 2 transport tcp 10.64.42.113:/gluster/data 10.64.42.115:/gluster/data 10.64.42.117:/gluster/data 10.64.42.96:/gluster/data
This command creates a replicated volume named file-service from 4 bricks.
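With replica 2, bricks are grouped into replica sets in the order they are listed on the command line, so this volume contains two mirrored pairs:
replica set 1: 10.64.42.113:/gluster/data and 10.64.42.115:/gluster/data
replica set 2: 10.64.42.117:/gluster/data and 10.64.42.96:/gluster/data
Files are distributed across the two sets and mirrored within each set.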
Start the volume:
gluster volume start file-service
View volume information:
gluster volume info
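To check that the brick processes are actually online, the status subcommand is also useful:
gluster volume status file-service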
7. Installing the client
Execute on the client:
yum install -y glusterfs glusterfs-fuse
Create a directory:
mkdir -p /gluster/data
Mount the file-service volume from the server onto the local /gluster/data:
mount -t glusterfs 10.64.42.113:/file-service /gluster/data
Verify the mount:
df -h
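To make the mount survive a reboot, an entry can be added to /etc/fstab; a sketch, assuming the same server and mount point as above:
10.64.42.113:/file-service /gluster/data glusterfs defaults,_netdev 0 0
The _netdev option delays the mount until the network is available.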
8. Testing
Create a few files in the client's mounted directory /gluster/data to test whether GlusterFS is working correctly:
cd /gluster/data
touch file1 file2 file3
Because this is a distributed replicated volume, each file written on the client will appear either on both 10.64.42.113:/gluster/data and 10.64.42.115:/gluster/data, or on both 10.64.42.117:/gluster/data and 10.64.42.96:/gluster/data.
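A quick way to confirm this is to list the brick directories on the servers themselves; each test file should show up on exactly one replica pair:
ls /gluster/data   # run on each of the four servers and compare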
Attention
volume create: file-service: failed: The brick 10.64.42.113:/gluster/data is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
This error appears because the brick was created on the system disk, which Gluster does not allow by default. If this is really what you want, append force to the command.
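For example, the volume-create command from step 6 with force appended:
gluster volume create file-service replica 2 transport tcp 10.64.42.113:/gluster/data 10.64.42.115:/gluster/data 10.64.42.117:/gluster/data 10.64.42.96:/gluster/data force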
Description of the GlusterFS volume modes:
1. Distributed volumes, the default mode (DHT)
gluster volume create test-volume server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
2. Replicated volumes, mirror mode (AFR)
gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
To avoid split-brain, add an arbiter; the arbiter brick holds only file names and metadata, providing quorum without a full third copy of the data:
gluster volume create <VOLNAME> replica 3 arbiter 1 host1:brick1 host2:brick2 host3:brick3
3. Striped volumes
gluster volume create test-volume stripe 2 transport tcp server1:/exp1 server2:/exp2
4. Distributed striped volumes, minimum 4 servers required
gluster volume create test-volume stripe 4 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
5. Distributed replicated volumes, minimum 4 servers required
gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
6. Distributed striped replicated volumes
gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4 server5:/exp5 server6:/exp6 server7:/exp7 server8:/exp8
7. Striped replicated volumes
gluster volume create test-volume stripe 2 replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4