How to install GlusterFS on CentOS 6.4



Environment introduction:
OS: CentOS 6.4 x86_64 Minimal
Servers: sc2-log1, sc2-log2, sc2-log3, sc2-log4
Client: sc2-ads15



Specific steps:
1. Install the GlusterFS packages on sc2-log{1-4}:


The code is as follows

# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum install -y glusterfs-3.4.2-1.el6 glusterfs-server-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6

# /etc/init.d/glusterd start
# chkconfig glusterfsd on
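
Before moving on, it is worth a quick sanity check that the packages installed cleanly and that the management daemon is running; enabling glusterd at boot is also common practice. The checks below are a suggested addition, not part of the original procedure.

The code is as follows
# glusterfs --version
# /etc/init.d/glusterd status
# chkconfig glusterd on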


2. Configure the whole GlusterFS cluster on sc2-log1:


The code is as follows


[root@sc2-log1 ~]# gluster peer probe sc2-log1
peer probe: success: on localhost not needed

[root@sc2-log1 ~]# gluster peer probe sc2-log2
peer probe: success

[root@sc2-log1 ~]# gluster peer probe sc2-log3
peer probe: success

[root@sc2-log1 ~]# gluster peer probe sc2-log4
peer probe: success

[root@sc2-log1 ~]# gluster peer status
Number of Peers: 3

Hostname: sc2-log2
Port: 24007
Uuid: 399973af-bae9-4326-9cbd-b5b05e5d2927
State: Peer in Cluster (Connected)

Hostname: sc2-log3
Port: 24007
Uuid: 833a7b8d-e3b3-4099-baf9-416ee7213337
State: Peer in Cluster (Connected)

Hostname: sc2-log4
Port: 24007
Uuid: 54bf115a-0119-4021-af80-7a6bca137fd9
State: Peer in Cluster (Connected)
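
The trusted pool can also be verified from any other node; for example, running the same status command on sc2-log2 should list sc2-log1, sc2-log3 and sc2-log4 as connected peers.

The code is as follows
[root@sc2-log2 ~]# gluster peer status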



3. Create the data storage directories on sc2-log{1-4}:


The code is as follows
# mkdir -p /usr/local/share/{models,geoip,wurfl}
# ls -l /usr/local/share/
total 24
drwxr-xr-x 2 root root 4096 Apr  1 12:19 geoip
drwxr-xr-x 2 root root 4096 Apr  1 12:19 models
drwxr-xr-x 2 root root 4096 Apr  1 12:19 wurfl
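
If password-less SSH between the servers is available, a one-line loop such as the sketch below (an optional convenience, assuming root SSH access between the nodes) creates the directories on all four servers at once; otherwise simply run the mkdir on each server by hand.

The code is as follows
# for h in sc2-log1 sc2-log2 sc2-log3 sc2-log4; do ssh $h 'mkdir -p /usr/local/share/{models,geoip,wurfl}'; done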


4. Create the GlusterFS volumes on sc2-log1:


The code is as follows


[root@sc2-log1 ~]# gluster volume create models replica 4 sc2-log1:/usr/local/share/models sc2-log2:/usr/local/share/models sc2-log3:/usr/local/share/models sc2-log4:/usr/local/share/models force
volume create: models: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume create geoip replica 4 sc2-log1:/usr/local/share/geoip sc2-log2:/usr/local/share/geoip sc2-log3:/usr/local/share/geoip sc2-log4:/usr/local/share/geoip force
volume create: geoip: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume create wurfl replica 4 sc2-log1:/usr/local/share/wurfl sc2-log2:/usr/local/share/wurfl sc2-log3:/usr/local/share/wurfl sc2-log4:/usr/local/share/wurfl force
volume create: wurfl: success: please start the volume to access data

[root@sc2-log1 ~]# gluster volume start models
volume start: models: success

[root@sc2-log1 ~]# gluster volume start geoip
volume start: geoip: success

[root@sc2-log1 ~]# gluster volume start wurfl
volume start: wurfl: success


[root@sc2-log1 ~]# gluster volume info

Volume Name: models
Type: Replicate
Volume ID: b29b22bd-6d8c-45c0-b199-91fa5a76801f
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/models
Brick2: sc2-log2:/usr/local/share/models
Brick3: sc2-log3:/usr/local/share/models
Brick4: sc2-log4:/usr/local/share/models

Volume Name: geoip
Type: Replicate
Volume ID: 69b0caa8-7c23-4712-beae-6f536b1cffa3
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/geoip
Brick2: sc2-log2:/usr/local/share/geoip
Brick3: sc2-log3:/usr/local/share/geoip
Brick4: sc2-log4:/usr/local/share/geoip

Volume Name: wurfl
Type: Replicate
Volume ID: c723a99d-eeab-4865-819a-c0926cf7b88a
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/wurfl
Brick2: sc2-log2:/usr/local/share/wurfl
Brick3: sc2-log3:/usr/local/share/wurfl
Brick4: sc2-log4:/usr/local/share/wurfl
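
In addition to gluster volume info, GlusterFS 3.4 provides gluster volume status, which reports whether each brick process and the NFS/self-heal daemons are actually online, for example:

The code is as follows
[root@sc2-log1 ~]# gluster volume status models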

5. Deploy the client on sc2-ads15 and mount the GlusterFS file systems:


The code is as follows
[root@sc2-ads15 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[root@sc2-ads15 ~]# yum install -y glusterfs-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6
[root@sc2-ads15 ~]# mkdir -p /mnt/{models,geoip,wurfl}
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:models /mnt/models/
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:geoip /mnt/geoip/
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:wurfl /mnt/wurfl/
[root@sc2-ads15 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_t-lv_root
                       59G  7.7G   48G  14% /
tmpfs                 3.9G     0  3.9G   0% /dev/shm
/dev/xvda1            485M   33M  428M   8% /boot
sc2-log3:models        98G  8.6G   85G  10% /mnt/models
sc2-log3:geoip         98G  8.6G   85G  10% /mnt/geoip
sc2-log3:wurfl         98G  8.6G   85G  10% /mnt/wurfl
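
To have the client remount these volumes automatically after a reboot, the mounts can also be listed in /etc/fstab; the entries below are a sketch that mirrors the read-only mounts above (_netdev defers mounting until the network is up, which the netfs service handles on CentOS 6, so make sure it is enabled with chkconfig netfs on).

The code is as follows
sc2-log3:models  /mnt/models  glusterfs  ro,_netdev  0 0
sc2-log3:geoip   /mnt/geoip   glusterfs  ro,_netdev  0 0
sc2-log3:wurfl   /mnt/wurfl   glusterfs  ro,_netdev  0 0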


6. Read and write availability tests:
Write data at the sc2-ads15 mount point:


The code is as follows
[root@sc2-ads15 ~]# umount /mnt/models
[root@sc2-ads15 ~]# mount -t glusterfs sc2-log3:models /mnt/models/
[root@sc2-ads15 ~]# echo 'This is sc2-ads15' > /mnt/models/hello.txt
[root@sc2-ads15 ~]# mkdir /mnt/models/testdir
View in the sc2-log1 data directory:
[root@sc2-log1 ~]# ls /usr/local/share/models/
hello.txt  testdir


Result: the data was written successfully.



Write data directly in the sc2-log1 data directory (the brick):


The code is as follows
[root@sc2-log1 ~]# echo 'This is sc2-log1' > /usr/local/share/models/hello.2.txt
[root@sc2-log1 ~]# mkdir /usr/local/share/models/test2
View at the sc2-ads15 mount point:
[root@sc2-ads15 ~]# ls /mnt/models
[root@sc2-ads15 ~]# ls -l /mnt/models
hello.txt  testdir


Result: the write did not propagate; the files created directly in the brick do not show up at the mount point.



Write data at an sc2-log1 mount point:


The code is as follows
[root@sc2-log1 ~]# mount -t glusterfs sc2-log1:models /mnt/models/
[root@sc2-log1 ~]# echo 'This is sc2-log1' > /mnt/models/hello.3.txt
[root@sc2-log1 ~]# mkdir /mnt/models/test3
View at the sc2-ads15 mount point:
[root@sc2-ads15 models]# ls /mnt/models
hello.2.txt  hello.3.txt  hello.txt  test2  test3  testdir


Result: the data was written successfully, and the files that had previously failed to propagate were also picked up.



Final conclusion:
Writing data directly into a brick's data directory bypasses GlusterFS, so the other nodes are never notified and the data is not replicated.
The correct approach is to perform all read and write operations through a GlusterFS mount point.
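
If files do end up being written into a brick directory by mistake, the self-heal state of a replicated volume can be inspected and a full heal triggered from any of the servers, for example:

The code is as follows
# gluster volume heal models info
# gluster volume heal models full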



7. Other operation notes:
Delete a GlusterFS volume:


The code is as follows
# gluster volume stop models
# gluster volume delete models
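
Note that deleting a volume does not delete the data in the brick directories, and GlusterFS leaves extended attributes behind on them. If the same directories are to be reused for a new volume later, something along the following lines is usually needed on every node (a sketch; adjust the path to the brick in question):

The code is as follows
# setfattr -x trusted.glusterfs.volume-id /usr/local/share/models
# setfattr -x trusted.gfid /usr/local/share/models
# rm -rf /usr/local/share/models/.glusterfs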


Remove a node from the GlusterFS cluster:


The code is as follows
# gluster peer detach sc2-log4


Access control (restrict which client subnets may mount the volume):


The code is as follows
# gluster volume set models auth.allow 10.60.1.*,10.70.1.*
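
The option takes effect immediately and appears under "Options Reconfigured" in the volume info output; it can be checked or cleared again as follows:

The code is as follows
# gluster volume info models
# gluster volume reset models auth.allow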


Add GlusterFS nodes:


The code is as follows
# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
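
Keep in mind that for a replicated volume the number of bricks added must be a multiple of the replica count, so for the replica-4 volumes built in this article bricks would have to be added four at a time (or the replica count changed explicitly). After expanding a volume, a rebalance is normally started so that existing data is spread across the new bricks:

The code is as follows
# gluster volume rebalance models start
# gluster volume rebalance models status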


Migrate GlusterFS volume data (move a brick's data to a new brick):


The code is as follows
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit


Repair GlusterFS volume data (for example, when sc2-log1 goes down):


The code is as follows
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full
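
The progress of the repair can then be followed with the heal info subcommands, for example:

The code is as follows
# gluster volume heal models info
# gluster volume heal models info split-brain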