Install and deploy GlusterFS on CentOS 6.5
Environment Introduction:
Server: 10.10.0.200 kvm200
Client1: 10.10.0.201 kvm201
Client2: 10.10.0.202 kvm202
1. Deploy the Server first
Configure the host name, network, and SELinux.
[root@kvm200 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=kvm200.test.com
[root@kvm200 ~]# vi /etc/idmapd.conf
[General]
#Verbosity=0
# The following should be set to the local NFSv4 domain name
# The default is the host's DNS domain name.
Domain=test.com
[root@kvm200 ~]# hostname kvm200.test.com
[root@kvm200 ~]# hostname
kvm200.test.com
[root@kvm200 ~]# hostname --fqdn
kvm200.test.com
If hostname --fqdn returns the fully qualified domain name, the host name has been configured successfully.
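The rest of this guide addresses the nodes by IP address, but if you prefer to use host names, a minimal /etc/hosts sketch based on the environment table above would look like this (an assumption for illustration; adjust to your own addresses):
[root@kvm200 ~]# vi /etc/hosts
10.10.0.200  kvm200.test.com  kvm200
10.10.0.201  kvm201.test.com  kvm201
10.10.0.202  kvm202.test.com  kvm202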
2. Disable SELinux
[root@kvm200 network-scripts]# vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled        # you can set this parameter to permissive or disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#     targeted - Only targeted network daemons are protected.
#     strict - Full SELinux protection.
SELINUXTYPE=targeted
[root@kvm200 network-scripts]# setenforce 0    # takes effect immediately
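To confirm that SELinux is no longer enforcing after the change, a quick hedged check (the output reflects the runtime state; it becomes "Disabled" after a reboot with SELINUX=disabled):
[root@kvm200 ~]# getenforce     # should report Permissive now, Disabled after reboot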
Configure the host name and SELinux on Client1 and Client2 in the same way.
3. NTP server
If your servers can reach the Internet and synchronize time on their own, skip this step. In an isolated local environment, an NTP server is needed to keep the hosts' clocks in sync.
Set kvm200 as an NTP server.
[root@kvm200 network-scripts]# vi /etc/ntp.conf
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1
restrict 0.0.0.0 mask 0.0.0.0 nomodify notrap noquery
restrict 192.168.166.0 mask 255.255.255.0 nomodify
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
server 210.72.145.44 prefer
server 127.127.1.0
fudge 127.127.1.0 stratum 8
# The lines that are not in the stock ntp.conf (the extra restrict rules, the 210.72.145.44 server, and the local clock 127.127.1.0 / fudge entries) are the newly added ones. Save the file and restart the ntpd service.
[root@kvm200 ~]# service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
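It is also worth making ntpd start at boot and checking that it is serving time. A short hedged sketch (the peer list output will vary with your configuration):
[root@kvm200 ~]# chkconfig ntpd on    # start ntpd automatically at boot
[root@kvm200 ~]# ntpq -p              # list the time sources ntpd is using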
Modify the configuration file on the client and synchronize it manually:
[root@kvm201 ~]# vi /etc/ntp.conf
......
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.30.200 iburst    # the IP address of the NTP server; the client must be able to reach it. If it cannot, the server's iptables rules are the likely cause (the ports are opened in a later step).
[root@kvm201 ~]# service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
[root@kvm201 ~]# ntpdate -u 192.168.30.200    # synchronize the time manually, or wait for ntpd to synchronize on its own (this can take a while; look up the NTP synchronization mechanism for details)
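To confirm that the client is actually tracking the NTP server, a hedged check (output will differ on your hosts):
[root@kvm201 ~]# ntpq -p    # the NTP server's address should appear in the peer list
[root@kvm201 ~]# date       # compare against the date shown on kvm200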
With the NTP server in place, the basic environment configuration is complete. The next steps cover installing the storage server.
4. Configure the iptables firewall, edit the mount file, and create the mount directory
4.1 Configure the firewall on each host
[root@kvm200 ~]# iptables -I INPUT 1 -s 10.10.0.0/16 -j ACCEPT
[root@kvm200 ~]# service iptables save
[root@kvm200 ~]# service iptables restart
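If you would rather not whitelist the whole subnet, a tighter rule set is possible. The sketch below is an assumption based on the default GlusterFS port layout (management on TCP 24007/24008, one brick port per brick starting at 49152, plus UDP 123 for NTP); adjust the ranges to your own environment:
[root@kvm200 ~]# iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management ports
[root@kvm200 ~]# iptables -I INPUT -p tcp --dport 49152:49156 -j ACCEPT   # brick ports (one per brick)
[root@kvm200 ~]# iptables -I INPUT -p udp --dport 123 -j ACCEPT           # NTP
[root@kvm200 ~]# service iptables save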
4.2 Edit the mount file /etc/fstab and append the entry below (planned now so that the storage volume can be mounted later):
[root@kvm200 ~]# vi /etc/fstab
10.12.0.200:/Mian    /primary    glusterfs    defaults,_netdev    0 0
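The mount point itself also has to exist (this is the "create a mount folder" part of this step). A minimal sketch, assuming the /primary path used in the fstab entry above; the actual mount will only succeed once the volume has been created and started in the later steps:
[root@kvm200 ~]# mkdir -p /primary    # mount point for the Mian volume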
4.3 Storage Server
Before setting up the GlusterFS server, configure the GlusterFS yum repository and then install the packages.
[root@kvm200 ~]# cd /etc/yum.repos.d/
[root@kvm200 yum.repos.d]# wget http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/glusterfs-epel.repo
[root@kvm200 yum.repos.d]# cd ~
[root@kvm200 ~]# wget http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm
[root@kvm200 ~]# rpm -ivh epel-release-6-5.noarch.rpm
[root@kvm200 ~]# yum -y install glusterfs-server
[root@kvm200 ~]# /etc/init.d/glusterd restart
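It may help to make the management daemon start at boot and confirm the installed version, a hedged check (the version string depends on the packages actually pulled from the repository):
[root@kvm200 ~]# chkconfig glusterd on    # start glusterd automatically at boot
[root@kvm200 ~]# glusterfs --version      # confirm the installed GlusterFS version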
After the installation on the server is complete, install the packages on the other two clients as follows. (Note: every host must have the packages installed before you continue; the main packages are glusterfs, glusterfs-fuse, and glusterfs-server.)
[root@kvm201 ~]# cd /etc/yum.repos.d/
[root@kvm201 yum.repos.d]# wget -P
[root@kvm201 yum.repos.d]# cd ~
[root@kvm201 ~]# wget ftp://ftp.pbone.net/mirror/dl.iuscommunity.org/pub/ius/stable/CentOS/6/x86_64/epel-release-6-5.noarch.rpm
[root@kvm201 ~]# rpm -ivh epel-release-6-5.noarch.rpm
[root@kvm201 ~]# yum install glusterfs glusterfs-fuse glusterfs-server
[root@kvm201 ~]# /etc/init.d/glusterd restart
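The volume created in the next step uses /storage on each node as its brick path, and that directory has to exist before gluster volume create is run. A minimal sketch, assuming the brick path from the commands below:
[root@kvm201 ~]# mkdir -p /storage    # brick directory referenced in the volume create command
[root@kvm202 ~]# mkdir -p /storage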
After completing the above steps, return to the kvm200 machine, add the other hosts (201 and 202) to the trusted pool, and create the storage volume:
[root@kvm200 ~]# gluster peer probe 10.12.0.201
[root@kvm200 ~]# gluster peer probe 10.12.0.202
[root@kvm200 ~]# gluster volume create Mian stripe 2 10.12.0.201:/storage 10.12.0.202:/storage force
[root@kvm200 ~]# gluster volume start Mian
(If an error is reported when a node is added, it is usually a firewall problem; check the iptables settings on each host.)
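Before checking the runtime status, gluster volume info shows the volume's configuration. The output below is only an illustration of what to expect for the Mian volume defined above; the exact fields and values will differ on your cluster:
[root@kvm200 ~]# gluster volume info Mian
Volume Name: Mian
Type: Stripe
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.12.0.201:/storage
Brick2: 10.12.0.202:/storage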
View status:
[root@kvm200 ~]# gluster volume status
Status of volume: Mian
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.12.0.200:/storage                  49152     0          Y       3512
Brick 10.12.0.201:/storage                  49152     0          Y       3417

Task Status of Volume Mian
------------------------------------------------------------------------------
There are no active volume tasks
[root@kvm200 ~]# gluster peer status
Number of Peers: 4

Hostname: 10.12.0.204
Uuid: 23194295-4168-46d5-a0b3-8f06766c58b4
State: Peer in Cluster (Connected)

Hostname: 10.12.0.202
Uuid: de10fd85-7b85-4f28-970b-339977a0bcf6
State: Peer in Cluster (Connected)

Hostname: 10.12.0.201
Uuid: 0cd18fe2-62dd-457a-9365-a7c2c1c5c4b2
State: Peer in Cluster (Connected)

Hostname: 10.12.0.203
Uuid: d160b7c3-89de-4169-b04d-bb18712d75c5
State: Peer in Cluster (Connected)
At this point GlusterFS is fully deployed, and the volume can be mounted on any machine that needs it.
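As a final check, the volume can be mounted either manually or through the fstab entry added in step 4.2. A minimal sketch using the paths defined earlier in this guide:
[root@kvm200 ~]# mount -t glusterfs 10.12.0.200:/Mian /primary    # mount the Mian volume on /primary
[root@kvm200 ~]# mount -a                                         # or mount everything from /etc/fstab
[root@kvm200 ~]# df -h /primary                                   # confirm the mount and its size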