Enabling high availability of Web services with Corosync and Pacemaker


Corosync + Pacemaker + iSCSI + httpd: implementing a highly available Web service


First: Software Introduction

Corosync implements membership management and a reliable group communication protocol.

Pacemaker implements service (resource) management on top of Corosync or Linux-HA.


Corosync includes the following components:

Totem Protocol

EVS

CPG

CFG

Quorum

The Extended Virtual Synchrony (EVS) algorithm provides two features:

Synchronization of group membership lists;

Reliable multicast for group messages.

Pacemaker Hierarchical Architecture

(Figure: Pacemaker hierarchical architecture. Image: http://s3.51cto.com/wyfs02/M02/45/D5/wKioL1PsOx3TLW7iAAIX50voUU8701.jpg)


Pacemaker itself consists of four key components:

CIB (Cluster Information Base)

CRMD (Cluster Resource Management Daemon)

PEngine (PE, the Policy Engine)

STONITHD (the Shoot-The-Other-Node-In-The-Head daemon, which fences failed nodes)


(Figure: Pacemaker internal component structure. Image: http://s3.51cto.com/wyfs02/M01/45/D4/wKiom1PsOgXhRkbbAAG4Wkdkpk4168.jpg)

Second: Software Installation and Web Service Implementation


Lab environment: RHEL 6.5

Two KVM virtual machines; firewall stopped; SELinux disabled

Two nodes: node1.com and node4.com
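The post does not show how the firewall and SELinux were actually turned off; on RHEL 6 this is usually done roughly as follows (my own sketch, not part of the original steps):

/etc/init.d/iptables stop
chkconfig iptables off
setenforce 0        # temporary; for a permanent change set SELINUX=disabled in /etc/selinux/config and reboot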


yum install corosync pacemaker -y


The yum repository does not provide the crm command, so crmsh has to be installed separately.

Package names: crmsh-1.2.6-0.rc2.2.1.x86_64.rpm, pssh-2.3.1-2.1.x86_64.rpm

(alternatively pssh-2.3.1-4.1.x86_64.rpm together with python-pssh-2.3.1-4.1.x86_64.rpm)

Install crmsh and its pssh dependency; if you choose pssh 2.3.1-4.1, you also need python-pssh-2.3.1-4.1.
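Assuming the RPMs named above have already been downloaded to the current directory on each node, a minimal install sketch (yum localinstall pulls any remaining dependencies from the configured repositories):

yum localinstall -y crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-2.1.x86_64.rpm
# or, with the newer pssh:
# yum localinstall -y crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-4.1.x86_64.rpm python-pssh-2.3.1-4.1.x86_64.rpm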


Configure Corosync

vi /etc/corosync/corosync.conf

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.122.0    # network segment of the cluster
                mcastaddr: 226.44.1.1         # multicast address; every node in the cluster must use the same one
                mcastport: 5405
                ttl: 1
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}
service {                # added section: have the corosync plugin start pacemaker
        name: pacemaker
        ver: 0
}
# When run in ver: 1 mode, the plugin does not start the pacemaker daemons.
# When run in ver: 0 mode, the plugin starts the pacemaker daemons.


Use the same configuration on both nodes.
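One simple way to keep the two nodes identical (a sketch; any copy method works) is to edit the file on node1.com and push it to node4.com:

scp /etc/corosync/corosync.conf node4.com:/etc/corosync/corosync.conf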

/etc/init.d/corosync start      # start the corosync service on both nodes
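Before moving on to Pacemaker, it can be worth confirming that Corosync itself came up; a hedged check, using the log path configured above:

corosync-cfgtool -s                          # ring status on this node; it should report the ring active with no faults
grep TOTEM /var/log/cluster/corosync.log     # the totem membership messages should show both nodes joining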

Then verify the Pacemaker configuration with crm_verify -LV:

(Screenshot: crm_verify -LV output. Image: Http://s3.51cto.com/wyfs02/M01/45/D4/wKiom1PsPXmjPBKEAAECz7ZdOnk842.jpg)


STONITH is enabled by default in Pacemaker, but the current cluster has no STONITH device, so the default configuration does not pass verification. STONITH can be disabled with the following command:

crm(live)configure# property stonith-enabled=false

Then run crm_verify -LV again; the error is gone. Configuration made on one node is synchronized to all nodes.

Add a VIP resource:

crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=192.168.122.20 cidr_netmask=24 op monitor interval=30s
crm(live)configure# commit
crm(live)configure# show
node node1.com
node node4.com
primitive vip ocf:heartbeat:IPaddr2 \
        params ip="192.168.122.20" cidr_netmask="24" \
        op monitor interval="30s"
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false"
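As a quick sanity check (my addition, not in the original post), the IPaddr2 agent adds 192.168.122.20 as a secondary address on whichever node owns the resource, so it can be verified directly:

ip addr show | grep 192.168.122.20     # on the node running the vip resource
ping -c 3 192.168.122.20               # from any other machine on the 192.168.122.0/24 segment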


When more than half of the nodes are online, the cluster considers itself to have quorum and to be "legitimate".

crm(live)configure# property no-quorum-policy=ignore

This lets resources continue to run even without quorum, which matters here because losing one of the two nodes also loses quorum.

After a successful commit, you can run crm_mon on the other node to watch the resources.
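crm_mon refreshes continuously; for a one-shot snapshot (handy over ssh or in scripts) the -1 flag can be used:

crm_mon        # interactive view, Ctrl-C to exit
crm_mon -1     # print the current status once and exit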

(Screenshot: crm_mon output on the other node. Image: Http://s3.51cto.com/wyfs02/M01/45/D6/wKioL1PsQJ7g1q8dAAEjE1CsBfc857.jpg)


Add the Web service resource, website:

First, modify the httpd configuration file:

vi /etc/httpd/conf/httpd.conf      # modify on every service node; around line 921, uncomment:

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
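A background assumption here (mine, not stated in the original): httpd itself must be installed on both nodes, but it should not be started or enabled by init, because Pacemaker (via ocf:heartbeat:apache) starts and stops it:

yum install -y httpd      # on both nodes, if not already installed
chkconfig httpd off       # let the cluster, not init, manage the service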
crm(live)configure# primitive website ocf:heartbeat:apache params configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s
crm(live)configure# commit
WARNING: website: default timeout 20s for start is smaller than the advised 40s
WARNING: website: default timeout 20s for stop is smaller than the advised 60s
crm(live)configure# colocation website-with-vip inf: website vip

This binds website and vip to the same node; without the colocation constraint, the two resources may end up running on different nodes.

crm(live)configure# location master-node website 10: node4.com      # prefer node4.com as the primary node
crm(live)configure# commit
crm(live)configure# show
node node1.com
node node4.com
primitive vip ocf:heartbeat:IPaddr2 \
        params ip="192.168.122.20" cidr_netmask="24" \
        op monitor interval="30s"
primitive website ocf:heartbeat:apache \
        params configfile="/etc/httpd/conf/httpd.conf" \
        op monitor interval="30s" \
        meta target-role="Started"
location master-node website 10: node4.com
colocation website-with-vip inf: website vip
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-14.el6-368c726" \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"

At this point you can test: run crm_mon on the standby node, then stop the corosync service on the primary node. The resources switch to the standby node, and when the primary node recovers they switch back (because of the location preference). In other words, the cluster is highly available.
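A concrete version of that test, using only commands and addresses that already appear above:

# on node4.com (the current primary):
/etc/init.d/corosync stop
# on node1.com, crm_mon should now show vip and website running locally,
# and the VIP should still answer from a client machine:
curl http://192.168.122.20
# bring node4.com back; the location score (10: node4.com) moves the resources back:
/etc/init.d/corosync start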



Add iSCSI shared storage

First set up the shared storage on the host:
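Two assumptions behind the configuration below (my inference, not spelled out in the original): the host needs the scsi-target-utils package for tgtd/tgt-admin, and /dev/vg_ty/ty is an LVM logical volume that already exists. If it does not, it could be created roughly like this; the 2G size matches the 2147 MB reported by tgt-admin later, and vg_ty must be a volume group with free space:

yum install -y scsi-target-utils     # provides tgtd and tgt-admin
lvcreate -L 2G -n ty vg_ty           # creates /dev/vg_ty/ty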

vi /etc/tgt/targets.conf

<target iqn.2014-06.com.example:server.target1>
    backing-store /dev/vg_ty/ty          # the host exports an LVM logical volume
    initiator-address 192.168.122.24
    initiator-address 192.168.122.27
</target>


$ /etc/init.d/tgtd restart

$ tgt-admin -s        # view all targets

        Type: disk
        SCSI ID: IET     00010001
        SCSI SN: beaf11
        Size: 2147 MB, Block size: 512
        Online: Yes
        Removable media: No
        Prevent removal: No
        Readonly: No
        Backing store type: rdwr
        Backing store path: /dev/vg_ty/ty
        Backing store flags:
    Account information:
    ACL information:
        192.168.122.24
        192.168.122.27

Connect to the iSCSI target on both nodes (virtual machines):
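On RHEL 6 the initiator tools come from the iscsi-initiator-utils package; installing it on both nodes is assumed here (my note, not in the original):

yum install -y iscsi-initiator-utils     # provides iscsid and iscsiadm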

[root@node1 ~]# iscsiadm -m discovery -p 192.168.122.1 -t st
Starting iscsid:                                           [  OK  ]
192.168.122.1:3260,1 iqn.2014-06.com.example:server.target1
[root@node1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2014-06.com.example:server.target1, portal: 192.168.122.1,3260] (multiple)
Login to [iface: default, target: iqn.2014-06.com.example:server.target1, portal: 192.168.122.1,3260] successful.


fdisk -l                                      # the newly added iSCSI disk (/dev/sda) is visible

fdisk -cu /dev/sda                            # create a partition on it

mkfs.ext4 /dev/sda1                           # format the partition as ext4

mount /dev/sda1 /var/www/html

echo 'node1.com' > /var/www/html/index.html

umount /dev/sda1                              # unmount; the cluster Filesystem resource will manage mounting


crm(live)configure# primitive webfs ocf:heartbeat:Filesystem params device=/dev/sda1 directory=/var/www/html fstype=ext4 op monitor interval=30s
crm(live)configure# colocation webfs-with-website inf: webfs website      # bind webfs and website to the same node
crm(live)configure# order website-after-webfs inf: webfs website          # start order: mount the filesystem (webfs) first, then start website
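The original does not show the commit at this step, but as with the earlier resources the changes only take effect after one; a minimal sketch:

crm(live)configure# commit
crm(live)configure# show     # verify webfs, the colocation and the order constraint are all present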

(Screenshot: Http://s3.51cto.com/wyfs02/M00/45/D8/wKioL1PsY0PxoUePAAMIk0lnUUA157.jpg)

View resource status at this time

(Screenshot: resource status. Image: Http://s3.51cto.com/wyfs02/M01/45/D8/wKioL1PsZA6DwMmyAAFcsi8v1I8652.jpg)

The service runs on the primary node node4.com; when node4.com goes down, the resources are taken over by the standby node node1.com.


This article is from the "Dance Fish" blog; please keep this source: http://ty1992.blog.51cto.com/7098269/1539962
