Background Introduction:
The project currently synchronizes files with rsync. In an attempt to replace that with a distributed file system we tried MooseFS, but the results were not satisfactory. After looking into GlusterFS we decided to give it a try: compared with MooseFS it seems simpler to deploy, and because it has no metadata server there is no single point of failure, which is very appealing.
Environment Introduction:
OS: CentOS 6.4 x86_64 Minimal
Servers: sc2-log1, sc2-log2, sc2-log3, sc2-log4
Client: sc2-ads15
Specific steps:
1. Install the GlusterFS packages on sc2-log{1-4}:
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum install -y glusterfs-3.4.2-1.el6 glusterfs-server-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6
# /etc/init.d/glusterd start
# chkconfig glusterd on
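Before moving on it is worth a quick sanity check that the management daemon is running and that the expected version was installed (a minimal sketch; exact output differs per system):
# /etc/init.d/glusterd status
# glusterfs --version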
2. Configure the GlusterFS cluster on sc2-log1:
[root@sc2-log1 ~]# gluster peer probe sc2-log1
peer probe: success: on localhost not needed
[root@sc2-log1 ~]# gluster peer probe sc2-log2
peer probe: success
[root@sc2-log1 ~]# gluster peer probe sc2-log3
peer probe: success
[root@sc2-log1 ~]# gluster peer probe sc2-log4
peer probe: success
[root@sc2-log1 ~]# gluster peer status
Number of Peers: 3
Hostname: sc2-log2
Port: 24007
Uuid: 399973af-bae9-4326-9cbd-b5b05e5d2927
State: Peer in Cluster (Connected)

Hostname: sc2-log3
Port: 24007
Uuid: 833a7b8d-e3b3-4099-baf9-416ee7213337
State: Peer in Cluster (Connected)

Hostname: sc2-log4
Port: 24007
Uuid: 54bf115a-0119-4021-af80-7a6bca137fd9
State: Peer in Cluster (Connected)
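Peer probing only works if every node can resolve the other nodes' hostnames. If DNS is not set up, a minimal /etc/hosts entry on each server and client would look roughly like this (the IP addresses below are placeholders, not taken from the original environment):
# the addresses are illustrative - adjust to the real network
# cat >> /etc/hosts <<'EOF'
10.60.1.11  sc2-log1
10.60.1.12  sc2-log2
10.60.1.13  sc2-log3
10.60.1.14  sc2-log4
EOF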
3. Create the data storage directories on sc2-log{1-4}:
# mkdir -p /usr/local/share/{models,geoip,wurfl}
# ls -l /usr/local/share/
total 24
drwxr-xr-x 2 root root 4096 Apr 1 12:19 geoip
drwxr-xr-x 2 root root 4096 Apr 1 12:19 models
drwxr-xr-x 2 root root 4096 Apr 1 12:19 wurfl
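These directories sit on the root partition, which is why the volume create commands in the next step end with force: GlusterFS 3.4 refuses to place bricks on the root filesystem unless overridden. In production a dedicated filesystem per brick is preferable; a rough sketch assuming a spare disk /dev/xvdb (the device name, filesystem type, and mount point are assumptions, not part of the original setup):
# /dev/xvdb is an assumed spare disk
# mkfs.ext4 /dev/xvdb
# mkdir -p /data/gluster
# mount /dev/xvdb /data/gluster
# echo "/dev/xvdb /data/gluster ext4 defaults 0 0" >> /etc/fstab
# mkdir -p /data/gluster/{models,geoip,wurfl}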
4. Create the GlusterFS volumes on sc2-log1:
[root@sc2-log1 ~]# gluster volume create models replica 4 sc2-log1:/usr/local/share/models sc2-log2:/usr/local/share/models sc2-log3:/usr/local/share/models sc2-log4:/usr/local/share/models force
volume create: models: success: please start the volume to access data
[root@sc2-log1 ~]# gluster volume create geoip replica 4 sc2-log1:/usr/local/share/geoip sc2-log2:/usr/local/share/geoip sc2-log3:/usr/local/share/geoip sc2-log4:/usr/local/share/geoip force
volume create: geoip: success: please start the volume to access data
[root@sc2-log1 ~]# gluster volume create wurfl replica 4 sc2-log1:/usr/local/share/wurfl sc2-log2:/usr/local/share/wurfl sc2-log3:/usr/local/share/wurfl sc2-log4:/usr/local/share/wurfl force
volume create: wurfl: success: please start the volume to access data
[root@sc2-log1 ~]# gluster volume start models
volume start: models: success
[root@sc2-log1 ~]# gluster volume start geoip
volume start: geoip: success
[root@sc2-log1 ~]# gluster volume start wurfl
volume start: wurfl: success
[root@sc2-log1 ~]# gluster volume info
Volume Name: models
Type: Replicate
Volume ID: b29b22bd-6d8c-45c0-b199-91fa5a76801f
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/models
Brick2: sc2-log2:/usr/local/share/models
Brick3: sc2-log3:/usr/local/share/models
Brick4: sc2-log4:/usr/local/share/models

Volume Name: geoip
Type: Replicate
Volume ID: 69b0caa8-7c23-4712-beae-6f536b1cffa3
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/geoip
Brick2: sc2-log2:/usr/local/share/geoip
Brick3: sc2-log3:/usr/local/share/geoip
Brick4: sc2-log4:/usr/local/share/geoip

Volume Name: wurfl
Type: Replicate
Volume ID: c723a99d-eeab-4865-819a-c0926cf7b88a
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: sc2-log1:/usr/local/share/wurfl
Brick2: sc2-log2:/usr/local/share/wurfl
Brick3: sc2-log3:/usr/local/share/wurfl
Brick4: sc2-log4:/usr/local/share/wurfl
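Besides gluster volume info, gluster volume status shows whether every brick process, NFS server, and self-heal daemon is actually online, which is worth checking before pointing clients at the cluster:
# gluster volume status models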
5. Deploy the client on sc2-ads15 and mount the GlusterFS file systems:
[root@sc2-ads15 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[root@sc2-ads15 ~]# yum install -y glusterfs-3.4.2-1.el6 glusterfs-fuse-3.4.2-1.el6
[root@sc2-ads15 ~]# mkdir -p /mnt/{models,geoip,wurfl}
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:models /mnt/models/
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:geoip /mnt/geoip/
[root@sc2-ads15 ~]# mount -t glusterfs -o ro sc2-log3:wurfl /mnt/wurfl/
[root@sc2-ads15 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_t-lv_root 59G 7.7G 48G 14% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/xvda1 485M 33M 428M 8% /boot
sc2-log3:models 98G 8.6G 85G 10% /mnt/models
sc2-log3:geoip 98G 8.6G 85G 10% /mnt/geoip
sc2-log3:wurfl 98G 8.6G 85G 10% /mnt/wurfl
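To make the mounts persistent across reboots they can also be declared in /etc/fstab; a sketch for the same read-only mounts (the _netdev option defers mounting until the network is up):
# cat >> /etc/fstab <<'EOF'
sc2-log3:/models  /mnt/models  glusterfs  defaults,ro,_netdev  0 0
sc2-log3:/geoip   /mnt/geoip   glusterfs  defaults,ro,_netdev  0 0
sc2-log3:/wurfl   /mnt/wurfl   glusterfs  defaults,ro,_netdev  0 0
EOF
On CentOS 6 the netfs service mounts _netdev entries at boot, so it should be enabled with chkconfig netfs on.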
6. Read/write usability tests:
Write data through the sc2-ads15 mount point (first remount read-write, since it was mounted read-only above):
[root@sc2-ads15 ~]# umount /mnt/models
[root@sc2-ads15 ~]# mount -t glusterfs sc2-log3:models /mnt/models/
[root@sc2-ads15 ~]# echo "This is sc2-ads15" > /mnt/models/hello.txt
[root@sc2-ads15 ~]# mkdir /mnt/models/testdir
Check the data directory on sc2-log1:
[root@sc2-log1 ~]# ls /usr/local/share/models/
hello.txt  testdir
Result: the write succeeded.
Write data directly into the sc2-log1 data directory (the brick itself, bypassing GlusterFS):
[root@sc2-log1 ~]# echo "This is sc2-log1" > /usr/local/share/models/hello.2.txt
[root@sc2-log1 ~]# mkdir /usr/local/share/models/test2
Check on the sc2-ads15 mount point:
[root@sc2-ads15 ~]# ls /mnt/models
[root@sc2-ads15 ~]# ls -l /mnt/models
hello.txt  testdir
Result: the write failed; the new files are not visible on the client.
Write data through a mount point on sc2-log1:
[root@sc2-log1 ~]# mount -t glusterfs sc2-log1:models /mnt/models/
[root@sc2-log1 ~]# echo "This is sc2-log1" > /mnt/models/hello.3.txt
[root@sc2-log1 ~]# mkdir /mnt/models/test3
Check on the sc2-ads15 mount point:
[root@sc2-ads15 models]# ls /mnt/models
hello.2.txt  hello.3.txt  hello.txt  test2  test3  testdir
Result: the write succeeded, and the files whose direct write previously failed to show up are now visible as well.
Final conclusion:
Writing data directly into a brick's data directory bypasses GlusterFS, so the other nodes are never notified and the data is not replicated.
The correct approach is to perform all reads and writes through a GlusterFS mount point.
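When bricks have gotten out of sync like this, the replication state can be inspected from any server node; a quick check (available in GlusterFS 3.3 and later):
# gluster volume heal models info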
7. Other operation notes:
Delete a GlusterFS volume:
# gluster volume stop models
# gluster volume delete models
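Deleting a volume does not delete the data already on the bricks, and the brick directories keep GlusterFS extended attributes that block their reuse in a new volume. If the data is no longer needed, a cleanup sketch to run on each brick (attribute names as used by GlusterFS 3.x bricks; setfattr comes from the attr package):
# setfattr -x trusted.glusterfs.volume-id /usr/local/share/models
# setfattr -x trusted.gfid /usr/local/share/models
# rm -rf /usr/local/share/models/.glusterfs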
Remove a node from the cluster:
# gluster peer detach sc2-log4
ACL access control (restrict which clients may mount the volume):
# gluster volume set models auth.allow 10.60.1.*
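auth.allow accepts a comma-separated list, so several ranges or hosts can be allowed at once, and the restriction can be lifted again by resetting the option (10.60.2.* below is just an illustrative second range, not from the original setup):
# gluster volume set models auth.allow 10.60.1.*,10.60.2.*
# gluster volume reset models auth.allow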
Add GlusterFS nodes:
# gluster peer probe sc2-log5
# gluster peer probe sc2-log6
# gluster volume add-brick models sc2-log5:/data/gluster sc2-log6:/data/gluster
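add-brick does not move existing data by itself; once the volume has a distribute component, a rebalance spreads existing files across the new bricks:
# gluster volume rebalance models start
# gluster volume rebalance models status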
Migrate GlusterFS volume data (remove bricks and move their data off first):
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models start
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models status
# gluster volume remove-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit
Repair GlusterFS volume data (for example, when sc2-log1 goes down and its brick has to be replaced):
# gluster volume replace-brick models sc2-log1:/usr/local/share/models sc2-log5:/usr/local/share/models commit force
# gluster volume heal models full