How to Install and Configure GlusterFS on CentOS 6.4

Environment:
OS: CentOS 6.4 x86_64 Minimal
Servers: sc2-log1,sc2-log2,sc2-log3,sc2-log4
Client: sc2-ads15

Steps:
1. Install the GlusterFS packages on sc2-log{1-4}:

The code is as follows:

# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum install -y glusterfs-3.4.2-1.el6 glusterfs-server-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6

# /etc/init.d/glusterd start
# chkconfig glusterd on
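
To confirm the installation before moving on, the package version and daemon status can be checked (a quick sanity check, not part of the original steps):

# glusterfs --version
# service glusterd status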

2. Configure the whole GlusterFS cluster from sc2-log1:

The code is as follows:

[root@sc2-log1 ~]# gluster peer probe sc2-log1
peer probe: success: on localhost not needed

[root@sc2-log1 ~]# gluster peer probe sc2-log2
peer probe: success

[root@sc2-log1 ~]# gluster peer probe sc2-log3
peer probe: success

[root@sc2-log1 ~]# gluster peer probe sc2-log4
peer probe: success

[root@sc2-log1 ~]# gluster peer status
Number of Peers: 3

Hostname: sc2-log2
Port: 24007
Uuid: 399973af-bae9-4326-9cbd-b5b05e5d2927
State: Peer in Cluster (Connected)

Hostname: sc2-log3
Port: 24007
Uuid: 833a7b8d-e3b3-4099-baf9-416ee7213337
State: Peer in Cluster (Connected)

Hostname: sc2-log4
Port: 24007
Uuid: 54bf115a-0119-4021-af80-7a6bca137fd9
State: Peer in Cluster (Connected)
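
Peer probing works by hostname, so every node must be able to resolve every other node's name. If DNS is not available, /etc/hosts entries along the following lines can be added on each server and on the client (the IP addresses below are placeholders, not from the original article):

# cat >> /etc/hosts <<'EOF'
10.60.1.11 sc2-log1
10.60.1.12 sc2-log2
10.60.1.13 sc2-log3
10.60.1.14 sc2-log4
EOF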

3. Create the data directories on sc2-log{1-4}:

The code is as follows:
# mkdir -p /usr/local/share/{models,geoip,wurfl}
# ls -l /usr/local/share/
total 24
drwxr-xr-x   2 root root 4096 Apr  1 12:19 geoip
drwxr-xr-x   2 root root 4096 Apr  1 12:19 models
drwxr-xr-x   2 root root 4096 Apr  1 12:19 wurfl

4. Create the GlusterFS volumes on sc2-log1 (force is required here because the bricks sit on the root filesystem, which GlusterFS refuses by default):

The code is as follows:

[root@sc2-log1 ~]# gluster volume create models replica 4 sc2-log1:/usr/local/share/models sc2-log2:/usr/local/share/models sc2-log3:/usr/local/share/models sc2-log4:/usr/local/share/models force
volume create: models: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume create geoip replica 4 sc2-log1:/usr/local/share/geoip sc2-log2:/usr/local/share/geoip sc2-log3:/usr/local/share/geoip sc2-log4:/usr/local/share/geoip force
volume create: geoip: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume create wurfl replica 4 sc2-log1:/usr/local/share/wurfl sc2-log2:/usr/local/share/wurfl sc2-log3:/usr/local/share/wurfl sc2-log4:/usr/local/share/wurfl force
volume create: wurfl: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume start models
volume start: models: success

[root@sc2-log1 ~]# gluster volume start geoip
volume start: geoip: success

[root@sc2-log1 ~]# gluster volume start wurfl
volume start: wurfl: success

[root@sc2-log1 ~]# gluster volume info
Volume Name: models
Type: Replicate
Volume ID: b29b22bd-6d8c-45c0-b199-91fa5a76801f
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/models
Brick2: sc2-log2:/usr/local/share/models
Brick3: sc2-log3:/usr/local/share/models
Brick4: sc2-log4:/usr/local/share/models

Volume Name: geoip
Type: Replicate
Volume ID: 69b0caa8-7c23-4712-beae-6f536b1cffa3
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/geoip
Brick2: sc2-log2:/usr/local/share/geoip
Brick3: sc2-log3:/usr/local/share/geoip
Brick4: sc2-log4:/usr/local/share/geoip

Volume Name: wurfl
Type: Replicate
Volume ID: c723a99d-eeab-4865-819a-c0926cf7b88a
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/wurfl
Brick2: sc2-log2:/usr/local/share/wurfl
Brick3: sc2-log3:/usr/local/share/wurfl
Brick4: sc2-log4:/usr/local/share/wurfl
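
With the volumes started, it is worth verifying that every brick process is actually online; gluster volume status lists the port and PID of each brick (output omitted here):

[root@sc2-log1 ~]# gluster volume status models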

5. Deploy the client on sc2-ads15 and mount the GlusterFS file systems:

The code is as follows:
[root@sc2-ads15 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[root@sc2-ads15 ~]# yum install -y glusterfs-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6
[root@sc2-ads15 ~]# mkdir -p /mnt/{models,geoip,wurfl}
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:models /mnt/models/
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:geoip /mnt/geoip/
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:wurfl /mnt/wurfl/
[root@sc2-ads15 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_t-lv_root
                       59G  7.7G   48G  14% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/xvda1            485M   33M  428M   8% /boot
sc2-log3:models        98G  8.6G   85G  10% /mnt/models
sc2-log3:geoip         98G  8.6G   85G  10% /mnt/geoip
sc2-log3:wurfl         98G  8.6G   85G  10% /mnt/wurfl
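
To make the mounts survive a reboot, /etc/fstab entries along these lines can be added on the client (a sketch; _netdev defers mounting until the network is up):

sc2-log3:models  /mnt/models  glusterfs  ro,_netdev  0 0
sc2-log3:geoip   /mnt/geoip   glusterfs  ro,_netdev  0 0
sc2-log3:wurfl   /mnt/wurfl   glusterfs  ro,_netdev  0 0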

6. Read/write availability tests:
Write data through the mount point on sc2-ads15 (remount read-write first, since the volume was mounted read-only above):

The code is as follows:
[root@sc2-ads15 ~]# umount /mnt/models
[root@sc2-ads15 ~]# mount -t glusterfs sc2-log3:models /mnt/models/
[root@sc2-ads15 ~]# echo "This is sc2-ads15" > /mnt/models/hello.txt
[root@sc2-ads15 ~]# mkdir /mnt/models/testdir
Check the brick directory on sc2-log1:
[root@sc2-log1 ~]# ls /usr/local/share/models/
hello.txt  testdir

Result: the write succeeded.
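
Since models is a four-way replica, the same entries should also appear in the brick directory on every other node; a quick spot check on any of them confirms replication:

[root@sc2-log2 ~]# ls /usr/local/share/models/
hello.txt  testdir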

Write data directly into the brick directory on sc2-log1:

The code is as follows:
[root@sc2-log1 ~]# echo "This is sc2-log1" > /usr/local/share/models/hello.2.txt
[root@sc2-log1 ~]# mkdir /usr/local/share/models/test2
Check from the mount point on sc2-ads15:
[root@sc2-ads15 ~]# ls /mnt/models
hello.txt  testdir

Result: the write failed; the files written directly into the brick do not appear through the mount point.

Write data through a mount point on sc2-log1:

The code is as follows:
[root@sc2-log1 ~]# mount -t glusterfs sc2-log1:models /mnt/models/
[root@sc2-log1 ~]# echo "This is sc2-log1" > /mnt/models/hello.3.txt
[root@sc2-log1 ~]# mkdir /mnt/models/test3
Check from the mount point on sc2-ads15:
[root@sc2-ads15 models]# ls /mnt/models
hello.2.txt  hello.3.txt  hello.txt  test2  test3  testdir

Result: the write succeeded, and the files that previously failed to appear were also picked up.

Conclusion:
Writing directly into a brick's data directory bypasses GlusterFS, so the other nodes are never notified and the data fails to synchronize.
The correct practice is to perform all reads and writes through a mount point.
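
If files have already slipped into a brick directory directly, accessing them through a mounted client triggers self-heal; pending entries can be inspected at any time (a diagnostic aid, not part of the original test):

[root@sc2-log1 ~]# gluster volume heal models info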

7. Other operational notes:
Delete a GlusterFS volume:

The code is as follows:
# gluster volume stop models
# gluster volume delete models
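
Note that deleting a volume does not erase the data in its brick directories. Before reusing a brick in a new volume, the extended attributes and metadata GlusterFS left behind must be cleared on every node, roughly as follows (verify the path carefully before running the rm):

# setfattr -x trusted.glusterfs.volume-id /usr/local/share/models
# setfattr -x trusted.gfid /usr/local/share/models
# rm -rf /usr/local/share/models/.glusterfs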

Detach a node from the trusted pool:

The code is as follows:
# gluster peer detach sc2-log4

IP-based access control:

The code is as follows:
# gluster volume set models auth.allow 10.60.1.*,10.70.1.*
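
auth.allow takes a comma-separated list of IP patterns. To revert to the default (all clients allowed), the option can be reset, assuming the stock volume reset behavior:

# gluster volume reset models auth.allow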

Add GlusterFS nodes:

The code is as follows:
# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
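
Note that models was created with replica 4, so bricks must be added in multiples of the replica count (or the replica count must be changed by passing an explicit replica N); the two-brick example above is only a template. After a successful add-brick, existing data can be spread onto the new bricks with a rebalance:

# gluster volume rebalance models start
# gluster volume rebalance models status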

Migrate data off GlusterFS bricks (remove-brick):

The code is as follows:
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit

Repair GlusterFS volume data (for example, after sc2-log1 goes down):

The code is as follows:
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full