Installation and use of Glusterfs


#######################################
##### Network Architecture #
#######################################
Two servers, M1 and M2
M1 is the GlusterFS master server, IP 192.168.1.138
M2 is the GlusterFS hot-standby server, IP 192.168.1.139
M1 also acts as the client

(i) IP settings
Omitted.
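For reference, a minimal sketch of a static IP configuration on a CentOS-style system; only the addresses come from this article, while the interface name eth0, the netmask, and the config file path are assumptions:
# /etc/sysconfig/network-scripts/ifcfg-eth0 on M1 (use 192.168.1.139 on M2); eth0 and the netmask are assumptions
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.138
NETMASK=255.255.255.0
# Apply the change
service network restart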

#######################################
##### Service Environment Installation #
#######################################
(i) Fuse installation
1 Check whether fuse is installed
[root@M1 ~]# modprobe -l | grep fuse  # Check whether fuse is already installed. If this command produces output, skip straight to step 2; if there is no output, fuse is not installed.

Kernel/fs/fuse/fuse.ko
Kernel/fs/fuse/cuse.ko

2 Install fuse
# tar -zxvf fuse-2.7.4glfs11.tar.gz

# cd fuse-2.7.4glfs11
# ./configure

# make
# checkinstall
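A quick check, not part of the original steps, to confirm that the fuse kernel module loads after installation:
# Load the fuse module and confirm it is present
modprobe fuse
lsmod | grep fuse   # should print a line for the fuse module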

(ii) Install GlusterFS (perform the same operations on both M1 and M2)
[root@M1 ~]# cd /data/software/
[root@M1 software]# tar -zxvf glusterfs-3.4.0.tar.gz
[root@M1 software]# cd glusterfs-3.4.0
[root@M1 glusterfs-3.4.0]# ./configure --prefix=/data/apps/glusterfs
# If the following error occurs:
checking for flex... no
checking for lex... no
configure: error: Flex or lex required to build glusterfs.
Workaround: [root@M1 glusterfs-3.4.0]# yum -y install flex
# If the following error occurs:
configure: error: GNU Bison required to build glusterfs.
Workaround: [root@M1 glusterfs-3.4.0]# yum -y install bison
[root@M1 glusterfs-3.4.0]# make
[root@M1 glusterfs-3.4.0]# make install

# Verify that the installation succeeded
[root@M1 glusterfs-3.4.0]# ldconfig
[root@M1 glusterfs-3.4.0]# /data/apps/glusterfs/sbin/glusterfs --version  # If version information is printed, the installation succeeded
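Because GlusterFS is installed under a non-standard prefix, it can help to register its library directory with the dynamic linker and put its sbin directory on PATH. A hedged sketch; the file names below are assumptions:
# Register the GlusterFS libraries with the dynamic linker (the conf file name is an assumption)
echo "/data/apps/glusterfs/lib" > /etc/ld.so.conf.d/glusterfs.conf
ldconfig
# Optionally add the sbin directory to PATH so gluster/glusterfs can be run without the full path
echo 'export PATH=$PATH:/data/apps/glusterfs/sbin' >> /etc/profile
source /etc/profile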

################################################
##### GlusterFS Environment Configuration and Client Mount #
################################################
One: Start glusterd
[root@M1 glusterfs-3.4.0]# service glusterd start  (perform the same action on both M1 and M2)

Two: Add firewall rules (perform the same on both M1 and M2; note the order of the firewall rules). Note: test the firewall rules afterwards.
[root@M1 glusterfs-3.4.0]# iptables -I INPUT -p tcp --dport 24007 -j ACCEPT

[root@M1 glusterfs-3.4.0]# service iptables save
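Besides port 24007 (glusterd), GlusterFS also uses one port per brick. A hedged example that opens those ports as well; the exact range depends on the GlusterFS version and the number of bricks, so treat the numbers below as assumptions:
iptables -I INPUT -p tcp --dport 24008 -j ACCEPT            # glusterd management/RDMA port (assumption)
iptables -I INPUT -p tcp --dport 49152:49162 -j ACCEPT      # brick ports used by GlusterFS 3.4 (range size is an assumption)
service iptables save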

Three: Add nodes
[root@M1 glusterfs-3.4.0]# /data/apps/glusterfs/sbin/gluster peer probe 192.168.1.139  (only needs to be run on M1)

Four: Create a replica volume
[root@M1 ~]# mkdir /data/share  (create the data directory on both M1 and M2)
[root@M2 ~]# mkdir /data/share

[root@M1 ~]# chown -R www:www /data/share
[root@M2 ~]# chown -R www:www /data/share

[root@M1 ~]# /data/apps/glusterfs/sbin/gluster volume create bbs_img replica 2 192.168.1.{138,139}:/data/share
volume create: bbs_img: success: please start the volume to access data  # Creates a replica volume named bbs_img for mounting, stored in the /data/share directory on both 192.168.1.138 and 192.168.1.139

[root@M1 ~]# /data/apps/glusterfs/sbin/gluster volume start bbs_img
volume start: bbs_img: success  (starting the volume can be done on either M1 or M2)

Five: Mount
Create a mount point on M1
[root@M1 ~]# mkdir /data/wwwroot/web/share
[root@M1 ~]# chown -R www:www /data/wwwroot/web/share
[root@M1 glusterfs]# mount.glusterfs 192.168.1.138:/bbs_img /data/wwwroot/web/share  # The client can mount using either server's IP address (192.168.1.139 also works); the data is the same
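To make the mount survive a reboot, an entry can be added to the client's /etc/fstab. A sketch; the _netdev option set is an assumption:
# /etc/fstab entry on the client (mount options are an assumption)
192.168.1.138:/bbs_img  /data/wwwroot/web/share  glusterfs  defaults,_netdev  0 0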

# Switch to the www user and create some files
[root@M1 glusterfs]# su -l www
[www@M1 share]$ echo "111111111111111" > 1.txt
[www@M1 share]$ echo "222222222222222" > 2.txt
# Note: do not write files directly into the brick directories on the servers, which can lead to inconsistent content; always write through the mount point.
# At this point the files exist on both M1 and M2, which shows the mount succeeded.
[www@M1 share]$ ll /data/share/  # The owner of this directory switches back to root each time the volume is unmounted and remounted.
Total 16
-rw-rw-r-- 2 www www 9 01:50 1.txt
-rw-rw-r-- 2 www www 9 01:50 2.txt
[root@M2 ~]# ll /data/share/
Total 16
-rw-rw-r-- 2 www www 9 01:50 1.txt
-rw-rw-r-- 2 www www 9 01:50 2.txt


################################################
##### GlusterFS Management #
################################################

One: Node management
1 Viewing node status
[root@M1 glusterfs]# /data/apps/glusterfs/sbin/gluster peer status
Number of Peers: 1

Hostname: 192.168.1.139
Uuid: cd25b695-6266-4720-8f42-ffb34179b4fb
State: Peer in Cluster (Connected)
2 Removing a node  # Now we remove M2 from M1
[root@M1 glusterfs]# /data/apps/glusterfs/sbin/gluster peer detach 192.168.1.139
peer detach: failed: Brick(s) with the peer 192.168.1.139 exist in cluster
# By default a node that still hosts bricks cannot be detached; it can be removed by force (forcing the removal of a node is not recommended)

[root@M1 share]# /data/apps/glusterfs/sbin/gluster peer detach 192.168.1.139 force  # Force removal of the node (not recommended)
peer detach: success
[root@M1 share]# /data/apps/glusterfs/sbin/gluster peer status  # View the status again
peer status: No peers present


3 Adding a node
[root@M1 share]# /data/apps/glusterfs/sbin/gluster peer probe 192.168.1.139

Two: Volume management
1 Create a volume (only the replica volume is described here)
[root@M1 ~]# /data/apps/glusterfs/sbin/gluster volume create bbs_img replica 2 192.168.1.{138,139}:/data/share

2 Viewing volume information
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume info

3 Viewing volume status
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume status

4 Starting/stopping a volume
/data/apps/glusterfs/sbin/gluster volume start|stop <volume-name>
Mount the volume on the client only after the volume has been started, and unmount it on the client before stopping the volume.
[root@M1 share]# umount /data/wwwroot/web/share
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume stop bbs_img


Three: Brick management
1 Adding bricks
For a replica volume, the number of bricks added at one time must be an integer multiple of the replica count.
# Now we add 2 bricks to bbs_img (our replica count is 2)
[root@M1 share]# mkdir /data/share2
[root@M1 share]# chown www:www /data/share2
[root@M2 share]# mkdir /data/share2
[root@M2 share]# chown www:www /data/share2
[root@M1 mnt]# /data/apps/glusterfs/sbin/gluster volume add-brick bbs_img 192.168.1.{138,139}:/data/share2
volume add-brick: success  # This is how the volume is expanded.

[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume info

Volume Name: bbs_img
Type: Distributed-Replicate
Volume ID: cacb4587-c9b4-4d38-84d1-99dbe2c28477
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.138:/data/share
Brick2: 192.168.1.139:/data/share
Brick3: 192.168.1.138:/data/share2
Brick4: 192.168.1.139:/data/share2
# When expanding, you can add nodes to the cluster and then add bricks on the new nodes.
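Note that adding bricks does not automatically spread existing files onto the new bricks. A rebalance can be started to redistribute them; rebalance is a standard gluster subcommand, but running it here is a suggestion rather than a step from the original article:
/data/apps/glusterfs/sbin/gluster volume rebalance bbs_img start
/data/apps/glusterfs/sbin/gluster volume rebalance bbs_img status   # watch the migration progress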

2 Removing bricks
For a replica volume, the number of bricks removed must be an integer multiple of the replica count.
# Now we remove the two bricks we just added
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume remove-brick bbs_img 192.168.1.{138,139}:/data/share2 start

# After the removal starts, you can check its progress with the status command.
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume remove-brick bbs_img 192.168.1.{138,139}:/data/share2 status

# The commit command carries out the brick removal; no data migration is performed and the bricks are deleted directly, which suits users who do not need data migration.
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume remove-brick bbs_img 192.168.1.{138,139}:/data/share2 commit

3 Replacing a brick
We replace 192.168.1.139:/data/share with 192.168.1.138:/data/share2.
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume replace-brick bbs_img 192.168.1.139:/data/share 192.168.1.138:/data/share2 start
If the error "volume replace-brick: failed: /data/share2 or a prefix of it is already part of a volume" occurs, it means /data/share2 was previously used as a brick. The fix is:
[root@M1 share]# rm -rf /data/share2/.glusterfs
[root@M1 share]# setfattr -x trusted.glusterfs.volume-id /data/share2
[root@M1 share]# setfattr -x trusted.gfid /data/share2
After that, run the replace-brick ... start command again; the start command begins migrating data from the original brick to the replacement brick.

# Check whether the replacement has finished
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume replace-brick bbs_img 192.168.1.139:/data/share 192.168.1.138:/data/share2 status

# While the data is being migrated, the abort command can be executed to cancel the brick replacement.
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume replace-brick bbs_img 192.168.1.139:/data/share 192.168.1.138:/data/share2 abort

# After the data migration finishes, execute the commit command to finish the task and complete the brick replacement. Use the volume info command to confirm that the brick has been replaced.

[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume replace-brick bbs_img 192.168.1.139:/data/share 192.168.1.138:/data/share2 commit

# Now, when we add data under /data/wwwroot/web/share, it is synced to 192.168.1.138:/data/share and 192.168.1.138:/data/share2, and no longer to 192.168.1.139:/data/share.

Now switch the brick back. Execute the following on M2:
[root@M2 ~]# rm -rf /data/share/.glusterfs
[root@M2 share]# setfattr -x trusted.glusterfs.volume-id /data/share
[root@M2 share]# setfattr -x trusted.gfid /data/share

[root@M1 data]# /data/apps/glusterfs/sbin/gluster volume replace-brick bbs_img 192.168.1.138:/data/share2 192.168.1.139:/data/share start
[root@M1 data]# /data/apps/glusterfs/sbin/gluster volume replace-brick bbs_img 192.168.1.138:/data/share2 192.168.1.139:/data/share status
[root@M1 data]# /data/apps/glusterfs/sbin/gluster volume replace-brick bbs_img 192.168.1.138:/data/share2 192.168.1.139:/data/share commit

[root@M1 data]# chown www:www /data/share
[root@M2 share]# chown www:www /data/share  (the ownership may have changed)
Files created through the mount point are then synced to 192.168.1.139:/data/share again.

###################################
##### Data Copy #
###################################

One: Data copy
We synchronize a copy of the data to /data/share3 on 192.168.1.138.
[root@M1 share]# mkdir /data/share3
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume geo-replication bbs_img /data/share3 start
Starting geo-replication session between bbs_img & /data/share3 has been successful
[root@M1 share]# ll /data/share3
Total 28
-rw-rw-r-- 1 www www 9 01:50 1.txt
-rw-rw-r-- 1 www www 9 01:50 2.txt
..........................................
-rw-rw-r-- 1 www www 9 04:05 7.txt

[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume geo-replication bbs_img /data/share3 status  # View the sync status
NODE                 MASTER               SLAVE                     STATUS
---------------------------------------------------------------------------------------------------
M1                   bbs_img              /data/share3              OK
[root@M1 share]# /data/apps/glusterfs/sbin/gluster volume geo-replication bbs_img /data/share3 stop  # Stop syncing; if it is not stopped, it keeps synchronizing incrementally
Stopping geo-replication session between bbs_img & /data/share3 has been successful
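A simple way to double-check that the copy under /data/share3 matches what the client sees (a quick check, not from the original article):
diff -r /data/wwwroot/web/share /data/share3   # no output means the two trees are identical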

#####################################################
##### Fault Handling #
#####################################################

One: Mount-server failure
We mounted with "mount.glusterfs 192.168.1.138:/bbs_img /data/wwwroot/web/share". What if the network on 192.168.1.138 (M1) fails?
1 Manual switchover
Steps:
umount /data/wwwroot/web/share
mount.glusterfs 192.168.1.139:/bbs_img /data/wwwroot/web/share  # Remount from 192.168.1.139 (M2)
2 Use a shell script that checks every 10 seconds whether the 192.168.1.138 server has a problem; if so, it remounts and exits
[root@M1 data]# yum -y install nmap  # Install the port-scanning tool
[root@M1 shell]# vim gfs_keepalive.sh

#!/bin/bash
# Probe port 24007 on 192.168.1.138 with nmap to determine whether the host and port are up
flag=0
while [ "$flag" == "0" ]
do
    hostup=$(/usr/bin/nmap -p 24007 192.168.1.138 | grep 'Host is up')
    portup=$(/usr/bin/nmap -p 24007 192.168.1.138 | grep 'tcp open')
    if [ "$hostup" == "" ] || [ "$portup" == "" ]; then
        # Unmount first, then remount from the other server
        /bin/umount /data/wwwroot/web/share
        /sbin/mount.glusterfs 192.168.1.139:/bbs_img /data/wwwroot/web/share
        flag=1
    fi
    /bin/sleep 10
done
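A hedged way to keep this check running in the background; the script path /data/shell/gfs_keepalive.sh and the log file below are hypothetical:
# Run the watchdog in the background and log its output (paths are assumptions)
nohup /bin/bash /data/shell/gfs_keepalive.sh > /var/log/gfs_keepalive.log 2>&1 &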

Two: Data-inconsistency problems
# For example, if the data on 192.168.1.139:/data/share is corrupted but the data seen through the client mount is not, 192.168.1.139:/data/share will automatically resynchronize from the client's data.

Three: Data recovery
If the client mistakenly deletes data:
1 Back up the client data regularly
2 Restore the client's data directly from the daily backups
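A hedged sketch of such a daily backup taken from the client mount point with rsync; the destination directory and the cron schedule are hypothetical:
# Copy the mounted data into a dated backup directory (destination path is an assumption)
rsync -a /data/wwwroot/web/share/ /data/backup/share-$(date +%F)/
# Example cron entry in /etc/crontab to run it every day at 02:00 (schedule is an assumption; % must be escaped in crontab)
0 2 * * * root rsync -a /data/wwwroot/web/share/ /data/backup/share-$(date +\%F)/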

