- Several GlusterFS volume implementations
- 1. Striped volume
- (Figure: striped volume) As the figure shows, each file in a striped volume is split into four parts and stored across the bricks.
- 2. Replicated volume
- (Figure: replicated volume) As the figure shows, each file is saved as a complete copy on each of several servers.
- 3. Distributed volume
- (Figure: distributed volume) As the figure shows, whole files are distributed across the nodes, each file stored on one node.
- 4. DHT
- (Figure: DHT) Distributed placement is driven by a distributed hash table (DHT): the brick that holds a file is chosen by hashing the file's name.
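The volume types above map directly to options of `gluster volume create`. A minimal sketch (the volume names `sv0`/`rv0`/`dv0` are illustrative, and the commands assume the brick directories already exist and the peers have been probed, both of which are done later in this article):

```shell
# Sketch only: each command assumes brick directories exist on node1..node3.

# Striped volume: each file is split across the 3 bricks.
gluster volume create sv0 stripe 3 node1:/data/brick/sv0 node2:/data/brick/sv0 node3:/data/brick/sv0

# Replicated volume: each file is copied in full to all 3 bricks.
gluster volume create rv0 replica 3 node1:/data/brick/rv0 node2:/data/brick/rv0 node3:/data/brick/rv0

# Distributed volume (the default when no type is given):
# each file lands in full on exactly one brick, chosen by DHT.
gluster volume create dv0 node1:/data/brick/dv0 node2:/data/brick/dv0 node3:/data/brick/dv0
```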
- Installing GlusterFS
- The installation environment consists of three nodes:
- 172.16.24.100 node1.libincla.com
- 172.16.24.101 node2.libincla.com
- 172.16.24.102 node3.libincla.com
- The RPM packages required by GlusterFS have already been prepared for this environment.
- They were fetched as follows:
lftp 172.16.0.1:/pub/sources/6.x86_64/glusterfs> cd ..
lftp 172.16.0.1:/pub/sources/6.x86_64> mirror glusterfs
Total: 2 directories, 20 files, 0 symlinks
New: 20 files, 0 symlinks
10241416 bytes transferred in 2 seconds (4.07M/s)
lftp 172.16.0.1:/pub/sources/6.x86_64>
- Because this directory contains repodata, you can create a new yum repo file pointing at it.
- For example: [root@node1 glusterfs]# vim /etc/yum.repos.d/glusterfs.repo
[glfs]
name=glfs
baseurl=file:///root/glusterfs/
gpgcheck=0
- Run yum repolist to list the repositories and confirm that the new repo takes effect.
- (Figure: yum repolist output; the glfs repo is visible.)
- [root@node1 glusterfs]# yum install glusterfs-server   # install the GlusterFS server package
- (Figure: the dependency packages yum pulls in.)
- After the installation is complete, scp the package directory to the other two nodes.
- For example:
[root@node1 ~]# scp -r glusterfs/ node2:/root
The authenticity of host 'node2 (172.16.24.101)' can't be established.
RSA key fingerprint is a4:8b:0f:37:a4:a0:31:59:6d:bb:56:52:23:f1:fc:4c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,172.16.24.101' (RSA) to the list of known hosts.
root@node2's password:
glusterfs-libs-3.5.2-1.el6.x86_64.rpm              100%  250KB 250.2KB/s
glusterfs-extra-xlators-3.5.2-1.el6.x86_64.rpm     100%   46KB  45.6KB/s
glusterfs-devel-3.5.2-1.el6.x86_64.rpm             100%  122KB 122.4KB/s
glusterfs-server-3.5.2-1.el6.x86_64.rpm            100%  563KB 563.3KB/s
glusterfs-fuse-3.5.2-1.el6.x86_64.rpm              100%   92KB  92.3KB/s
glusterfs-3.5.2-1.el6.x86_64.rpm                   100% 1241KB   1.2MB/s
be1304a9b433e6427e2bbb7e1749c5005d7454e8f7a6731da1217e2f45a2d1da-primary.sqlite.bz2 100%   12KB  11.7KB/s
Listen                                             100%  3785   3.7KB/s
repomd.xml                                         100%  2985   2.9KB/s
da5ee38acf32e0f9445a11d057b587634b874ce5a6dc3b900512f6afc133351f-filelists.xml.gz 100%   11KB  10.6KB/s
Listen                                             100%  4417   4.3KB/s
Listen                                             100%  1475   1.4KB/s
f9d4a1d6bfb573f34706f9cd84767959ec15599856f85b9d07a9310fda96b140-filelists.sqlite.bz2 100%   18KB  17.6KB/s
glusterfs-rdma-3.5.2-1.el6.x86_64.rpm              100%   49KB  49.5KB/s
glusterfs-geo-replication-3.5.2-1.el6.x86_64.rpm   100%  155KB 155.0KB/s
glusterfs-regression-tests-3.5.2-1.el6.x86_64.rpm  100%  114KB 114.1KB/s
glusterfs-api-3.5.2-1.el6.x86_64.rpm               100%   69KB  69.2KB/s
glusterfs-api-devel-3.5.2-1.el6.x86_64.rpm         100%   29KB  29.3KB/s
glusterfs-cli-3.5.2-1.el6.x86_64.rpm               100%  122KB 122.5KB/s
glusterfs-debuginfo-3.5.2-1.el6.x86_64.rpm         100% 7095KB   6.9MB/s
- Create the required directory on each node. Here gdata stands for the GlusterFS data directory:
- mkdir -pv /gdata/brick
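Rather than logging in to each node in turn, the directory can be created from node1 over SSH. A small sketch, assuming the node names resolve as listed at the top of this article:

```shell
#!/bin/sh
# Create the brick directory on every node over SSH.
# You will be prompted for each node's password unless key-based login is set up.
for node in node1 node2 node3; do
    ssh "$node" 'mkdir -pv /gdata/brick'
done
```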
- Then start the service on each node:
- service glusterd start
- Then use ss -tnlp to check which port glusterd is listening on:
[root@node1 ~]# ss -tnlp | grep glusterd
LISTEN 0 128 *:24007 *:* users:(("glusterd",35476,9))
[root@node1 ~]#
- glusterd listens on TCP port 24007.
- You can use gluster peer probe <yourservername> to add the other nodes to the trusted pool.
- [root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer probe node3
peer probe: success.
[root@node1 ~]#
- Both probes report success.
- You can also run gluster peer status to view the pool's state:
[root@node1 ~]# gluster peer status
Number of Peers: 2

Hostname: node2
Uuid: 01f4c256-21a7-4b9b-a9ef-735182618bc6
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 70cd5f4e-e3db-401b-8849-ced76ac3f6c8
State: Peer in Cluster (Connected)
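For scripted setups, a quick sanity check is to count how many peers report the connected state before creating any volumes. The `count_connected` helper below is a hypothetical sketch; on this 3-node pool, piping `gluster peer status` into it should print 2.

```shell
# count_connected: count peers in the connected state from
# `gluster peer status` output read on stdin.
count_connected() {
    grep -c 'Peer in Cluster (Connected)'
}

# On a live node you would run:
#   gluster peer status | count_connected
```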
- The preparation is complete; now create a GlusterFS volume.
- Create the gv0 brick directory on each node:
- mkdir -pv /gdata/brick/gv0
- Then run the create command on any one node:
[root@node1 ~]# gluster volume create gv0 replica 3 node1:/data/brick/gv0/ node2:/data/brick/gv0/ node3:/data/brick/gv0/ force
volume create: gv0: success: please start the volume to access data
- I added force here because Gluster otherwise refuses these bricks; the warning is a reminder to prepare a dedicated, idle disk for GlusterFS in advance.
- Start the volume with gluster volume start gv0:
[root@node1 ~]# gluster volume start gv0
volume start: gv0: success
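Before mounting, it is worth confirming that the volume really is started and the bricks are online. A small sketch of such a check, parsing the "Status:" line that `gluster volume info` prints:

```shell
# Show per-brick status for gv0; each brick should report Online: Y.
gluster volume status gv0

# Scripted check: exit non-zero if gv0 is not in the "Started" state.
gluster volume info gv0 | grep -q 'Status: Started' || exit 1
```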
- Mount the volume into a local directory to check the effect:
[root@node1 ~]# mount -t glusterfs node2:/gv0 /mnt/gluster/
[root@node1 ~]#
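To make the mount survive reboots, an entry can also be added to /etc/fstab. A sketch (the _netdev option defers mounting until the network is up):

```
node2:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0 0
```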
Try saving a file to the mount point.
(Figures: results of saving a file to the volume.)
Run the following command to view gv0's information:
[root@node1 gluster]# gluster volume info gv0

Volume Name: gv0
Type: Replicate
Volume ID: 44a87154-fc05-449b-bc98-266bc7673b23
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/brick/gv0
Brick2: node2:/data/brick/gv0
Brick3: node3:/data/brick/gv0
[root@node1 gluster]#
- All three nodes now hold a replica of this volume: data written through any node appears on the others as well.
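The replication can be verified from the command line as well. A sketch, assuming the volume is mounted at /mnt/gluster as above and `testfile` is just an illustrative name:

```shell
#!/bin/sh
# Write a file through the mount point, then check that every brick
# holds a full copy of it.
echo "hello gluster" > /mnt/gluster/testfile

for node in node1 node2 node3; do
    ssh "$node" 'ls -l /data/brick/gv0/testfile'
done
```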
Implementation and configuration of GlusterFS on CentOS 6.5