One: Experimental environment

node  | os       | ip            | san_ip     | VIP
node1 | rhel 6.5 | 192.168.10.11 | 172.16.1.1 | 192.168.10.100
node2 | rhel 6.5 | 192.168.10.12 | 172.16.1.2 |
san   | rhel 6.5 |               | 172.16.1.3 |
Note:
1. The concepts of corosync and pacemaker are not explained here; there is plenty of material about them online.
2. The IP addresses of both nodes have already been configured as shown above.
3. The SAN storage is already connected and mapped on both nodes as the local device /dev/sdb.
4. The two nodes have mutual SSH trust configured and their clocks are synchronized (a sketch of these preparations follows below).
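For reference, the SSH trust and time synchronization in notes 2-4 could be set up roughly as follows (a sketch only; the /etc/hosts entries and the use of pool.ntp.org are assumptions, not part of the original environment), with the same steps repeated on node2 for mutual trust:
[root@node1 ~]# cat >> /etc/hosts << EOF
192.168.10.11 node1
192.168.10.12 node2
EOF
[root@node1 ~]# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
[root@node1 ~]# ssh-copy-id root@node2
[root@node1 ~]# scp /etc/hosts node2:/etc/hosts
[root@node1 ~]# for i in 1 2; do ssh node$i ntpdate pool.ntp.org; done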
Two: Install the related software (on both node1 and node2)
1. Install corosync and pacemaker
[root@node1 ~]# for i in 1 2; do ssh node$i yum -y install corosync* pacemaker*; done
2. Install crmsh
Download crmsh, pssh, and python-pssh from the address below:
http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/
The versions downloaded here are:
crmsh-2.1-1.6.x86_64.rpm
pssh-2.3.1-4.1.x86_64.rpm
python-pssh-2.3.1-4.1.x86_64.rpm
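If you prefer to script the download and copy the packages to both nodes, something like this could be used (a sketch; it assumes the packages are still available under the repository path above):
[root@node1 ~]# cd /root
[root@node1 ~]# for p in crmsh-2.1-1.6.x86_64.rpm pssh-2.3.1-4.1.x86_64.rpm python-pssh-2.3.1-4.1.x86_64.rpm; do
> wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/x86_64/$p
> done
[root@node1 ~]# scp /root/*.rpm node2:/root/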
Installation:
[root@node1 ~]# for i in 1 2; do ssh node$i yum -y --nogpgcheck localinstall /root/*.rpm; done
3. Install Apache
[root@node1 ~]# for i in 1 2; do ssh node$i yum -y install httpd; done
[root@node1 ~]# for i in 1 2; do ssh node$i chkconfig httpd off; done
Three: Configure corosync
1. [root@node1 ~]# cd /etc/corosync/
2. [root@node1 corosync]# cp corosync.conf.example corosync.conf
3. The completed configuration file is as follows:
[root@node1 corosync]# cat corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        # the network segment to multicast on; adjust to your environment
        bindnetaddr: 192.168.10.0
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: no
    # log file location
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

# The section below is what we added
service {
    ver: 0
    # start pacemaker automatically when corosync starts
    name: pacemaker
}

aisexec {
    user: root
    group: root
}
4. Copy the configuration file to node2
[root@node1 corosync]# scp corosync.conf node2:/etc/corosync/
5. Start the corosync service
[root@node1 ~]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[root@node1 ~]# ssh node2 "/etc/init.d/corosync start"
Starting Corosync Cluster Engine (corosync): [ OK ]
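Before continuing, it is worth confirming that corosync came up cleanly on both nodes. A quick sanity check could look like this (it only relies on the log path configured above):
[root@node1 ~]# corosync-cfgtool -s                                  # ring status of the local node
[root@node1 ~]# grep "Corosync Cluster Engine" /var/log/cluster/corosync.log
[root@node1 ~]# grep ERROR: /var/log/cluster/corosync.log            # should report nothing serious
[root@node1 ~]# crm_mon -1                                           # one-shot view of cluster membership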
6. Set corosync to start on boot
[root@node1 ~]# for i in 1 2; do ssh node$i chkconfig corosync on; done
Four: Cluster service configuration
1. View the current cluster status
[root@node1 ~]# crm status
Last updated: Tue June 23 15:28:58 2015
Last change: Tue June 23 15:23:58 via crmd on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1 node2 ]
From the above, node1 and node2 are both online and no resources have been configured yet.
2. Set cluster properties
[root@node1 ~]# crm configure
crm(live)configure# property stonith-enabled=false        # disable STONITH (no fencing device here)
crm(live)configure# property no-quorum-policy=ignore      # ignore loss of quorum on this two-node cluster
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node node1
node node2
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false \
    no-quorum-policy=ignore
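The Filesystem resource added in the next step assumes that /dev/sdb1 already exists and carries an xfs filesystem. If the shared LUN is still blank, it could be prepared once, from one node only, roughly like this (a sketch; the single-partition layout and the test page are assumptions, not part of the original setup):
[root@node1 ~]# fdisk /dev/sdb          # interactively create one primary partition, /dev/sdb1
[root@node1 ~]# mkfs.xfs /dev/sdb1      # requires the xfsprogs package on RHEL 6
[root@node1 ~]# mount /dev/sdb1 /mnt
[root@node1 ~]# echo "<h1>corosync+pacemaker test page</h1>" > /mnt/index.html
[root@node1 ~]# umount /mnt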
3. Add a filesystem (Filesystem) resource
crm(live)configure# primitive webstore ocf:heartbeat:Filesystem params \
   > device=/dev/sdb1 directory=/var/www/html fstype=xfs \
   > op start timeout=60 \
   > op stop timeout=60
crm(live)configure# verify
Do not commit yet; first add a location constraint so that the webstore resource prefers to run on node1:
crm(live)configure# location webstore_prefer_node1 webstore 50: node1
crm(live)configure# verify
Now commit:
crm(live)configure# commit
Go back up one level to check the current cluster status:
crm(live)configure# cd
crm(live)# status
Last updated: Tue June 23 15:55:03 2015
Last change: Tue June 23 15:54:14 via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
1 Resources configured
Online: [ node1 node2 ]
webstore (ocf::heartbeat:Filesystem): Started node1
From the above, webstore is currently running on node1.
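As a quick check outside of crmsh, the mount on node1 can be confirmed directly with standard commands:
[root@node1 ~]# mount | grep /var/www/html        # should show /dev/sdb1 mounted with type xfs
[root@node1 ~]# df -h /var/www/html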
4. Add the httpd service resource, and require that httpd run together with webstore and that webstore be started before the httpd service
crm(live)configure# primitive httpd lsb:httpd
crm(live)configure# colocation httpd_with_webstore inf: httpd webstore
crm(live)configure# order webstore_before_httpd Mandatory: webstore:start httpd
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Tue June 23 15:58:53 2015
Last change: Tue June 23 15:58:46 via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ node1 node2 ]
webstore (ocf::heartbeat:Filesystem): Started node1
httpd (lsb:httpd): Started node1
5. Add the virtual IP resource, and require that the virtual IP run together with the httpd service and be started only after the httpd service has started
crm(live)configure# primitive webip ocf:heartbeat:IPaddr params \
   > ip=192.168.10.100 nic=eth0
crm(live)configure# colocation webip_with_httpd inf: webip httpd
crm(live)configure# order httpd_before_webip Mandatory: httpd webip
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status
Last updated: Tue June 23 16:02:03 2015
Last change: Tue June 23 16:01:54 via cibadmin on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ node1 node2 ]
webstore (ocf::heartbeat:Filesystem): Started node1
httpd (lsb:httpd): Started node1
webip (ocf::heartbeat:IPaddr): Started node1
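With all three resources running on node1, the service can now be checked through the virtual IP from any host on the 192.168.10.0/24 network (a sketch; the test page is whatever was placed on the shared disk):
[root@node1 ~]# ip addr show eth0 | grep 192.168.10.100     # the VIP should be bound on node1
[root@node1 ~]# curl http://192.168.10.100/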
Five: High availability test
1. Take node1 offline (standby) and view the cluster status
[root@node1 ~]# crm node standby
[root@node1 ~]# crm status
Last updated: Tue June 23 16:05:40 2015
Last change: Tue June 23 16:05:37 via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Node node1: standby
Online: [ node2 ]
webstore (ocf::heartbeat:Filesystem): Started node2
httpd (lsb:httpd): Started node2
webip (ocf::heartbeat:IPaddr): Started node2
From the above, all resources have been switched over to node2.
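During the switchover the site should remain reachable through the virtual IP; the same quick check as before can be repeated:
[root@node1 ~]# curl http://192.168.10.100/
[root@node1 ~]# ssh node2 "ip addr show eth0 | grep 192.168.10.100"   # the VIP is now bound on node2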
2. Bring node1 back online
[root@node1 ~]# crm node online
[root@node1 ~]# crm status
Last updated: Tue June 23 16:06:43 2015
Last change: Tue June 23 16:06:40 via crm_attribute on node1
Stack: classic openais (with plugin)
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
3 Resources configured
Online: [ node1 node2 ]
webstore (ocf::heartbeat:Filesystem): Started node1
httpd (lsb:httpd): Started node1
webip (ocf::heartbeat:IPaddr): Started node1
From the above, the resources have moved back to node1, which matches the location preference we set for node1.
At this point a simple highly available web service configuration is complete.
This article is from the "Never Stop" blog; reprinting is declined.