Installing and Configuring GlusterFS on CentOS 6.4

Background:
The project currently uses rsync for file synchronization. While looking for a distributed file system to replace it, we tried MooseFS, but the results were barely satisfactory. After learning about GlusterFS we decided to give it a try: compared with MooseFS it looked simpler to deploy, and since it has no metadata server there is no single point of failure, which is very appealing.

Environment:
OS: CentOS 6.4 x86_64 Minimal
Servers: sc2-log1, sc2-log2, sc2-log3, sc2-log4
Client: sc2-ads15

Steps:
1. Install the GlusterFS packages on sc2-log{1-4}:
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum install -y glusterfs-3.4.2-1.el6 glusterfs-server-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6

# /etc/init.d/glusterd start
# chkconfig glusterd on
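
To confirm that the packages installed cleanly and that the management daemon is actually running, a quick check such as the following should work (the version string will vary with the build):

# glusterfs --version
# service glusterd status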

2. Configure the GlusterFS cluster on sc2-log1:
[root@sc2-log1 ~]# gluster peer probe sc2-log1

peer probe: success: on localhost not needed

[root@sc2-log1 ~]# gluster peer probe sc2-log2

peer probe: success

[root@sc2-log1 ~]# gluster peer probe sc2-log3

peer probe: success

[root@sc2-log1 ~]# gluster peer probe sc2-log4

peer probe: success

[root@sc2-log1 ~]# gluster peer status

Number of Peers: 3

Hostname: sc2-log2
Port: 24007
Uuid: 399973af-bae9-4326-9cbd-b5b05e5d2927
State: Peer in Cluster (Connected)

Hostname: sc2-log3
Port: 24007
Uuid: 833a7b8d-e3b3-4099-baf9-416ee7213337
State: Peer in Cluster (Connected)

Hostname: sc2-log4
Port: 24007
Uuid: 54bf115a-0119-4021-af80-7a6bca137fd9
State: Peer in Cluster (Connected)
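
The trusted pool is symmetric, so it is worth spot-checking from a second node as well; running the same command on, say, sc2-log2 should likewise report three peers:

[root@sc2-log2 ~]# gluster peer status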

3. Create the data directories on sc2-log{1-4}:
# mkdir -p /usr/local/share/{models,geoip,wurfl}
# ls -l /usr/local/share/

total 24
drwxr-xr-x   2 root root 4096 Apr  1 12:19 geoip
drwxr-xr-x   2 root root 4096 Apr  1 12:19 models
drwxr-xr-x   2 root root 4096 Apr  1 12:19 wurfl
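
Note that here the bricks live on the root filesystem. For production use the usual recommendation is to give each brick its own XFS filesystem; a minimal sketch, assuming a spare device /dev/sdb1 (the device name is hypothetical):

# mkfs.xfs -i size=512 /dev/sdb1    # 512-byte inodes leave room for GlusterFS extended attributes
# mount /dev/sdb1 /usr/local/share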

4. Create the GlusterFS volumes on sc2-log1:
[root@sc2-log1 ~]# gluster volume create models replica 4 sc2-log1:/usr/local/share/models sc2-log2:/usr/local/share/models sc2-log3:/usr/local/share/models sc2-log4:/usr/local/share/models force

volume create: models: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume create geoip replica 4 sc2-log1:/usr/local/share/geoip sc2-log2:/usr/local/share/geoip sc2-log3:/usr/local/share/geoip sc2-log4:/usr/local/share/geoip force

volume create: geoip: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume create wurfl replica 4 sc2-log1:/usr/local/share/wurfl sc2-log2:/usr/local/share/wurfl sc2-log3:/usr/local/share/wurfl sc2-log4:/usr/local/share/wurfl force

volume create: wurfl: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume start models

volume start: models: success

[root@sc2-log1 ~]# gluster volume start geoip

volume start: geoip: success

[root@sc2-log1 ~]# gluster volume start wurfl

volume start: wurfl: success

[root@sc2-log1 ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: b29b22bd-6d8c-45c0-b199-91fa5a76801f
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/models
Brick2: sc2-log2:/usr/local/share/models
Brick3: sc2-log3:/usr/local/share/models
Brick4: sc2-log4:/usr/local/share/models

Volume Name: geoip
Type: Replicate
Volume ID: 69b0caa8-7c23-4712-beae-6f536b1cffa3
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/geoip
Brick2: sc2-log2:/usr/local/share/geoip
Brick3: sc2-log3:/usr/local/share/geoip
Brick4: sc2-log4:/usr/local/share/geoip

Volume Name: wurfl
Type: Replicate
Volume ID: c723a99d-eeab-4865-819a-c0926cf7b88a
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/wurfl
Brick2: sc2-log2:/usr/local/share/wurfl
Brick3: sc2-log3:/usr/local/share/wurfl
Brick4: sc2-log4:/usr/local/share/wurfl
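
The volume info output above only reflects the configuration; to confirm that each brick process is actually up and listening, the status subcommand can be used as well, for example:

[root@sc2-log1 ~]# gluster volume status models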

5. Deploy the client on sc2-ads15 and mount the GlusterFS volumes:
[sc2-ads15][root@sc2-ads15 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[sc2-ads15][root@sc2-ads15 ~]# yum install -y glusterfs-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6
[sc2-ads15][root@sc2-ads15 ~]# mkdir -p /mnt/{models,geoip,wurfl}
[sc2-ads15][root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:models /mnt/models/
[sc2-ads15][root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:geoip /mnt/geoip/
[sc2-ads15][root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:wurfl /mnt/wurfl/
[sc2-ads15][root@sc2-ads15 ~]# df -h

Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg_t-lv_root    59G  7.7G   48G  14% /
tmpfs                      3.9G     0  3.9G   0% /dev/shm
/dev/xvda1                 485M   33M  428M   8% /boot
sc2-log3:models             98G  8.6G   85G  10% /mnt/models
sc2-log3:geoip              98G  8.6G   85G  10% /mnt/geoip
sc2-log3:wurfl              98G  8.6G   85G  10% /mnt/wurfl
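
These mounts will not survive a reboot. To make them persistent, entries along the following lines can be added to /etc/fstab on the client (the _netdev option defers mounting until the network is up):

sc2-log3:models  /mnt/models  glusterfs  ro,_netdev  0 0
sc2-log3:geoip   /mnt/geoip   glusterfs  ro,_netdev  0 0
sc2-log3:wurfl   /mnt/wurfl   glusterfs  ro,_netdev  0 0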

6. Read/write availability tests:
Write data at the mount point on sc2-ads15:
[sc2-ads15][root@sc2-ads15 ~]# umount /mnt/models
[sc2-ads15][root@sc2-ads15 ~]# mount -t glusterfs sc2-log3:models /mnt/models/
[sc2-ads15][root@sc2-ads15 ~]# echo "This is sc2-ads15" > /mnt/models/hello.txt
[sc2-ads15][root@sc2-ads15 ~]# mkdir /mnt/models/testdir
Check the data directory on sc2-log1:
[root@sc2-log1 ~]# ls /usr/local/share/models/

hello.txt testdir

Result: the write succeeded.

Write data directly into the data directory on sc2-log1:
[root@sc2-log1 ~]# echo "This is sc2-log1" > /usr/local/share/models/hello.2.txt
[root@sc2-log1 ~]# mkdir /usr/local/share/models/test2
Check from the mount point on sc2-ads15:
[sc2-ads15][root@sc2-ads15 ~]# ls /mnt/models
[sc2-ads15][root@sc2-ads15 ~]# ls -l /mnt/models

hello.txt testdir

Result: the write failed (the files written directly to the brick do not show up at the mount point).

Write data at a mount point on sc2-log1:
[root@sc2-log1 ~]# mount -t glusterfs sc2-log1:models /mnt/models/
[root@sc2-log1 ~]# echo "This is sc2-log1" > /mnt/models/hello.3.txt
[root@sc2-log1 ~]# mkdir /mnt/models/test3
Check from the mount point on sc2-ads15:
[sc2-ads15][root@sc2-ads15 models]# ls /mnt/models

hello.2.txt  hello.3.txt  hello.txt  test2  test3  testdir

Result: the write succeeded, and the data whose earlier write had failed to propagate now shows up as well.

Final conclusion:
Writing data directly into a brick's data directory bypasses GlusterFS, so the other nodes are never notified and the data fails to synchronize.
The correct approach is to perform all reads and writes through a mount point.
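
If files have already been written into a brick directly, as in the failed test above, the self-heal machinery can be asked which entries still need repair; a quick check with the GlusterFS 3.4 heal command:

[root@sc2-log1 ~]# gluster volume heal models info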

7. Other operational notes:
Delete a GlusterFS volume:
# gluster volume stop models
# gluster volume delete models
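
Deleting a volume does not remove the data or the GlusterFS extended attributes from the brick directories, and reusing such a directory in a new volume fails with an "already part of a volume" error. A sketch of the usual cleanup (this destroys the brick's contents):

# setfattr -x trusted.glusterfs.volume-id /usr/local/share/models
# setfattr -x trusted.gfid /usr/local/share/models
# rm -rf /usr/local/share/models/.glusterfs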

Detach a GlusterFS node from the trusted pool:
# gluster peer detach sc2-log4

Access control (restrict which clients may mount):
# gluster volume set models auth.allow 10.60.1.*
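
Several addresses or networks can be given as a comma-separated list, and the effective value then appears under Options Reconfigured in the volume info; for example:

# gluster volume set models auth.allow 10.60.1.*,10.70.1.*
# gluster volume info models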

Add GlusterFS nodes and bricks (see the note after these commands):
# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
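
Note that for the replica-4 volumes created above, bricks can only be added in multiples of the replica count, so the two-brick example applies to a plain distributed volume. Expanding models itself would need four new bricks at once, followed by a rebalance; a sketch (sc2-log5 through sc2-log8 are hypothetical hosts):

# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster sc2-log7:/data/gluster sc2-log8:/data/gluster
# gluster volume rebalance models start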

Migrate GlusterFS volume data:
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit

Repair GlusterFS volume data (for example, after sc2-log1 goes down):
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full  
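
Healing runs in the background, so after kicking it off it is useful to watch its progress; in 3.4 the heal info subcommands cover this:

# gluster volume heal models info healed
# gluster volume heal models info split-brain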
