- Gluster volume types
- 1. Striped volume
- ![image](http://img1.51cto.com/attachment/201410/4/6249823_1412426738fiRW.png) As the figure shows, each file in a striped volume is split into 4 stripes spread across the bricks; this is called striped storage.
- 2. Replicated volume
- ![image](http://img1.51cto.com/attachment/201410/4/6249823_1412426741wY2J.png) As the figure shows, each file in a replicated volume is stored once on every server.
- 3. Distributed volume
- ![image](http://img1.51cto.com/attachment/201410/4/6249823_1412426744dbUU.png) As the figure shows, the files in a distributed volume are spread across the nodes, each file stored whole on one node.
- 4. DHT
- ![image](http://img1.51cto.com/attachment/201410/4/6249823_1412426746viZf.png) GlusterFS needs no metadata server: it hashes the file name (a distributed hash table) to decide which brick holds each file.
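The DHT idea can be sketched with a toy script: hash the file name and let the hash pick the brick, so any client can locate a file without asking a metadata server. Everything here (cksum as the hash, the brick names) is an illustration only, not GlusterFS's actual elastic-hashing algorithm:

```shell
# Toy illustration of hash-driven placement (not GlusterFS's real
# elastic hashing; cksum and the brick names are stand-ins):
# hash the file name, then pick a brick modulo the brick count.
place() {
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  case $(( h % 3 )) in
    0) echo node1:/data/brick ;;
    1) echo node2:/data/brick ;;
    2) echo node3:/data/brick ;;
  esac
}
place file-a.txt
place file-b.txt
```

Because the mapping depends only on the name, every client computes the same brick for the same file.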
- Installing Gluster
- The environment has three nodes:
- 172.16.24.100 node1.libincla.com
- 172.16.24.101 node2.libincla.com
- 172.16.24.102 node3.libincla.com
- The RPM packages Gluster needs have already been prepared in this environment.
- They were fetched as follows:
lftp 172.16.0.1:/pub/Sources/6.x86_64/glusterfs> cd ..
lftp 172.16.0.1:/pub/Sources/6.x86_64> mirror glusterfs
Total: 2 directories, 20 files, 0 symlinks
New: 20 files, 0 symlinks
10241416 bytes transferred in 2 seconds (4.07M/s)
lftp 172.16.0.1:/pub/Sources/6.x86_64>
- Because this directory contains repodata, a new yum repo file can point straight at it.
- e.g.: [[email protected] glusterfs]# vim /etc/yum.repos.d/glusterfs.repo
[glfs]
name=glfs
baseurl=file:///root/glusterfs/
gpgcheck=0
- Run yum repolist to check that the repo is active:
- ![image](http://img1.51cto.com/attachment/201410/4/6249823_1412426748tEQw.png) The repo is indeed readable.
- [[email protected] glusterfs]# yum install glusterfs-server   # install the Gluster server package
- ![image](http://img1.51cto.com/attachment/201410/4/6249823_1412426751DeM2.png) These are the dependency packages it pulls in.
- After installation, scp the directory to the other two nodes (only node2 is shown; node3 is done the same way), e.g.:
[[email protected] ~]# scp -r glusterfs/ node2:/root
The authenticity of host 'node2 (172.16.24.101)' can't be established.
RSA key fingerprint is a4:8b:0f:37:a4:a0:31:59:6d:bb:56:52:23:f1:fc:4c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,172.16.24.101' (RSA) to the list of known hosts.
[email protected]'s password:
glusterfs-libs-3.5.2-1.el6.x86_64.rpm 100% 250KB 250.2KB/s 00:00
glusterfs-extra-xlators-3.5.2-1.el6.x86_64.rpm 100% 46KB 45.6KB/s 00:00
glusterfs-devel-3.5.2-1.el6.x86_64.rpm 100% 122KB 122.4KB/s 00:00
glusterfs-server-3.5.2-1.el6.x86_64.rpm 100% 563KB 563.3KB/s 00:00
glusterfs-fuse-3.5.2-1.el6.x86_64.rpm 100% 92KB 92.3KB/s 00:00
glusterfs-3.5.2-1.el6.x86_64.rpm 100% 1241KB 1.2MB/s 00:00
be1304a9b433e6427e2bbb7e1749c5005d7454e8f7a6731da1217e2f45a2d1da-primary.sqlite.bz2 100% 12KB 11.7KB/s 00:00
40b9dab11d26eb746c553eb57e6aa60d0ba9d1b66a935cb04c782815c05bf7c3-primary.xml.gz 100% 3785 3.7KB/s 00:00
repomd.xml 100% 2985 2.9KB/s 00:00
da5ee38acf32e0f9445a11d057b587634b874ce5a6dc3b900512f6afc133351f-filelists.xml.gz 100% 11KB 10.6KB/s 00:00
4e933e2701d32eac68670eeb53104b107bc8eac9879a199a1050852d7cf5994e-other.sqlite.bz2 100% 4417 4.3KB/s 00:00
5a4290490160367cedd0d349fb8719fb7485d6cd1c7fb03e0a9caace98a5610c-other.xml.gz 100% 1475 1.4KB/s 00:00
f9d4a1d6bfb573f34706f9cd84767959ec15599856f85b9d07a9310fda96b140-filelists.sqlite.bz2 100% 18KB 17.6KB/s 00:00
glusterfs-rdma-3.5.2-1.el6.x86_64.rpm 100% 49KB 49.5KB/s 00:00
glusterfs-geo-replication-3.5.2-1.el6.x86_64.rpm 100% 155KB 155.0KB/s 00:00
glusterfs-regression-tests-3.5.2-1.el6.x86_64.rpm 100% 114KB 114.1KB/s 00:00
glusterfs-api-3.5.2-1.el6.x86_64.rpm 100% 69KB 69.0KB/s 00:00
glusterfs-api-devel-3.5.2-1.el6.x86_64.rpm 100% 29KB 29.3KB/s 00:00
glusterfs-cli-3.5.2-1.el6.x86_64.rpm 100% 122KB 122.5KB/s 00:00
glusterfs-debuginfo-3.5.2-1.el6.x86_64.rpm 100% 7095KB 6.9MB/s 00:01
- Create the required brick directory on each node; the bricks in this setup live under /data/brick.
- mkdir -pv /data/brick
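With passwordless ssh in place, the same mkdir can be issued to every node from one shell loop. This dry-run sketch only prints the commands (drop the echo to execute them; passwordless root ssh is an assumption, not something set up in this article):

```shell
# Print the command that would be run on each node; remove the
# echo to actually execute it over ssh.
for n in node1 node2 node3; do
  echo "ssh $n mkdir -pv /data/brick"
done
```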
- Then start the service on each node:
- service glusterd start
- Then check which port is listening with ss -tnlp:
[[email protected] ~]# ss -ntlp | grep glusterd
LISTEN 0 128 *:24007 *:* users:(("glusterd",35476,9))
[[email protected] ~]#
- Port 24007 is listening.
- Use gluster peer probe <yourservername> to probe the other nodes:
- [[email protected] ~]# gluster peer probe node2
peer probe: success.
[[email protected] ~]# gluster peer probe node3
peer probe: success.
[[email protected] ~]#
- Both probes report success.
- You can also check the peers' state with gluster peer status:
[[email protected] ~]# gluster peer status
Number of Peers: 2
Hostname: node2
Uuid: 01f4c256-21a7-4b9b-a9ef-735182618bc6
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: 70cd5f4e-e3db-401b-8849-ced76ac3f6c8
State: Peer in Cluster (Connected)
- The preparation is done; now create a GlusterFS volume.
- Create the gv0 brick directory on each node:
- mkdir -pv /data/brick/gv0
- Then, on any one node, run:
[[email protected] ~]# gluster volume create gv0 replica 3 node1:/data/brick/gv0/ node2:/data/brick/gv0/ node3:/data/brick/gv0/ force
volume create: gv0: success: please start the volume to access data
- I added force here because the bricks sit on the root filesystem; if you want to run GlusterFS for real, it is best to prepare a dedicated spare disk on each node in advance.
- Now start the volume:
[[email protected] ~]# gluster volume start gv0
volume start: gv0: success
- Now pick any host and mount the volume onto a directory to see the effect:
[[email protected] ~]# mount -t glusterfs node2:/gv0 /mnt/gluster/
[[email protected] ~]#
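To make this mount survive a reboot, it can go into /etc/fstab; a sketch for the same server and mount point (the _netdev option defers the mount until the network is up):

```
node2:/gv0  /mnt/gluster  glusterfs  defaults,_netdev  0 0
```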
Try storing a file:
![image](http://img1.51cto.com/attachment/201410/4/6249823_1412426753d1TG.png)
![image](http://img1.51cto.com/attachment/201410/4/6249823_1412426755GDNv.png)
![image](http://img1.51cto.com/attachment/201410/4/6249823_14124267575xJt.png)
The following command shows gv0's info:
[[email protected] gluster]# gluster volume info gv0
Volume Name: gv0
Type: Replicate
Volume ID: 44a87154-fc05-449b-bc98-266bc7673b23
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1:/data/brick/gv0
Brick2: node2:/data/brick/gv0
Brick3: node3:/data/brick/gv0
[[email protected] gluster]#
- Now all three nodes hold an equivalent replica of this volume: data stored through one node should appear on the others as well.
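On the real cluster, the way to verify this is to write a file through the mount and compare checksums of the copy under /data/brick/gv0 on every node. The comparison step can be sketched locally with stand-in brick directories (the /tmp paths are made up for the demo):

```shell
# Stand-ins for the three bricks; on the cluster these would be
# /data/brick/gv0 on node1, node2, and node3.
mkdir -p /tmp/brick1 /tmp/brick2 /tmp/brick3
echo "hello gluster" | tee /tmp/brick1/f /tmp/brick2/f /tmp/brick3/f >/dev/null
# Replication is healthy when every copy hashes identically,
# i.e. sort -u leaves exactly one unique checksum:
md5sum /tmp/brick1/f /tmp/brick2/f /tmp/brick3/f | awk '{print $1}' | sort -u | wc -l
```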
Implementing and configuring GlusterFS on CentOS 6.5