Leveraging Pacemaker Cluster Management to Maximize Apache Availability


Experimental environment:

System version: CentOS release 6.5 (Final), x86_64

node1: 192.168.0.233 # add to the /etc/hosts file on both nodes

node2: 192.168.0.234

vip: 192.168.0.183
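For reference, the corresponding /etc/hosts entries on both nodes look like this:

192.168.0.233 node1
192.168.0.234 node2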

Note: 1. Both machines must use static IP addresses (do not obtain IPs via DHCP), and each must be able to ping the other.

2. Disable the firewall and SELinux on both nodes first, for example as shown below.
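On CentOS 6 this can be done as follows (run on both nodes; the /etc/selinux/config change takes full effect after a reboot):

[root@node1 ~]# service iptables stop # stop the firewall now
[root@node1 ~]# chkconfig iptables off # keep it off after reboots
[root@node1 ~]# setenforce 0 # put SELinux into permissive mode immediately
[root@node1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config # disable SELinux permanently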

I. Configuring SSH

SSH is a convenient and secure tool for transferring files or running commands remotely. In this document we create an SSH key without a passphrase (the -N "" option) to remove the hassle of entering a password on login.

Create a key and allow anyone holding it to log in:

Perform these steps on node1:

[root@node1 ~]# ssh-keygen -t dsa -f ~/.ssh/id_dsa -N ""

[root@node1 ~]# cp .ssh/id_dsa.pub .ssh/authorized_keys

[root@node1 ~]# scp -r .ssh node2:

OK, passwordless SSH authentication between the two nodes is now in place.
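To confirm that the passwordless login works, run a quick check from node1; it should print node2's hostname without asking for a password:

[root@node1 ~]# ssh node2 -- uname -n
node2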

II. Cluster Software Installation/Configuration (perform the same steps on both machines)

1. Here we install from RPM packages; we recommend using the 163 or EPEL repository for dependency resolution:

epel-release-6-5.noarch # I used the EPEL repository; if you cannot find this RPM package, leave me a message

rpm -ivh epel-release-6-5.noarch.rpm # install it

163 repository:

wget http://mirrors.163.com/.help/CentOS6-Base-163.repo

2. Set up the Pacemaker repository:

vi /etc/yum.repos.d/centos.repo # paste in the following content

[centos-6-base]
name=centos-$releasever-base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
enabled=1
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/

3. Install the cluster software:

[root@node1 ~]# yum makecache # refresh the yum cache

[root@node1 ~]# yum install lvm2-cluster corosync pacemaker pcs # install exactly like this (the official documentation is a bit of a trap); about 86 packages. pcs provides the pcsd daemon used below.

4. Start the cluster setup:

[root@node1 ~]# /etc/init.d/pcsd start

[root@node2 ~]# /etc/init.d/pcsd start
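To have pcsd come back automatically after a reboot, it can also be registered with chkconfig on both nodes:

[root@node1 ~]# chkconfig pcsd on
[root@node2 ~]# chkconfig pcsd on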

① Set a password for the hacluster user. Use the same password on both nodes.

[root@node1 ~]# passwd hacluster

[root@node2 ~]# passwd hacluster
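The password can also be set non-interactively; a sketch using the RHEL-specific --stdin flag, with "redhat" as the example password expected by the auth step below:

[root@node1 ~]# echo "redhat" | passwd --stdin hacluster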

② Configure Corosync

The hacluster user we set up above exists for unified cluster management, and configuring Corosync authenticates as this user. Once the nodes have authenticated each other, corosync.conf is distributed to every node in the cluster.

[root@node1 ~]# pcs cluster auth node1 node2
Username: hacluster
Password: # enter redhat here (the hacluster password set above)
node1: Authorized
node2: Authorized

[root@node1 ~]# pcs cluster setup --name mycluster node1 node2
node1: Updated cluster.conf...
node2: Updated cluster.conf...

③ Start our cluster:

[root@node1 ~]# pcs cluster start --all # startup is a little slow; wait patiently

④ View the cluster we created and the status of the nodes:

[root@node1 ~]# pcs status

[root@node1 ~]# pcs status corosync

[root@node1 ~]# crm_verify -L -V # you will see quite a few errors

[root@node1 ~]# pcs property set stonith-enabled=false # disable STONITH to clear these errors
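With STONITH disabled, re-running the check should now come back clean:

[root@node1 ~]# crm_verify -L # no output and a zero exit code mean the configuration is valid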

III. Configuring Apache High Availability (Active/Standby)

① Add the VIP

The first thing to do is configure an IP address: no matter where the cluster service is running, we need a fixed address at which to reach it. Here I choose 192.168.0.183 as the floating IP, give it the friendly name ClusterIP, and tell the cluster to check it every 30 seconds.

[root@node1 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.0.183 cidr_netmask=32 op monitor interval=30s

The other important piece here is ocf:heartbeat:IPaddr2. It tells Pacemaker three things: the first field, ocf, indicates the standard (class) the resource script conforms to and where to find it; the second field identifies the script's namespace within OCF, in this case heartbeat; and the last field is the name of the resource script itself.

[root@node1 ~]# pcs status # view the resource we just created; the VIP is now running on node1
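You can also confirm the VIP at the interface level; assuming the NIC is eth0, the IPaddr2 agent adds it as a secondary address:

[root@node1 ~]# ip addr show eth0 # 192.168.0.183/32 should appear alongside the static address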

② Install Apache

Before continuing, we need to make sure Apache is installed on both nodes.

[root@node1 ~]# yum -y install httpd

[root@node2 ~]# yum -y install httpd
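Since the cluster itself will start and stop httpd, Apache should not be enabled to start at boot; it is worth making this explicit on both nodes:

[root@node1 ~]# chkconfig httpd off
[root@node2 ~]# chkconfig httpd off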

With that groundwork done, write a default home page for Apache on each node:

[root@node1 ~]# cat <<-END >/var/www/html/index.html
Welcome to Node 1
END

[root@node2 ~]# cat <<-END >/var/www/html/index.html
Welcome to Node 2
END

③ Enable the Apache status URL

To monitor the health of the Apache instance and recover the service if it hangs, the resource agent used by Pacemaker assumes the server-status URL is available. Check /etc/httpd/conf/httpd.conf and make sure the following block is not disabled or commented out; here we simply add it, since it is commented out by default.

Note: this must be added to Apache on both nodes; do not be lazy.

vi /etc/httpd/conf/httpd.conf

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>

④ Add Apache to our cluster

[root@node1 ~]# pcs resource create Web ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" op monitor interval=1min

Apache can now be added to the cluster. We call this resource Web. It uses an OCF script called apache in the heartbeat namespace; the only required parameter is the path to Apache's main configuration file, and we tell the cluster to check every minute whether Apache is still running.

By default, all resource start, stop, and monitor operations have a 20-second timeout. In many cases this is shorter than the recommended timeout. In this tutorial we raise the global operation timeout to 240 seconds.

[root@node1 ~]# pcs resource op defaults timeout=240s

[root@node1 ~]# pcs resource op defaults
timeout: 240s

⑤ Ensure the resources run on the same node

(1) To reduce the load on each machine, Pacemaker will intelligently try to spread resources across the nodes. However, we can tell the cluster that two resources are related and must run on the same node (or on different nodes). Here we tell the cluster that Web may only run on the node that holds ClusterIP.

(2) To do this we use a colocation constraint to mandate that Web and ClusterIP run on the same node. The mandatory part of the constraint is expressed with a score of INFINITY. INFINITY also means that if ClusterIP is not running on any node, Web is not allowed to run either.

[root@node1 ~]# pcs constraint colocation add Web ClusterIP INFINITY

The cluster now starts Apache alongside the VIP, and the page we wrote can be reached through the VIP.
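A quick check from either node (or any machine on the LAN) should return the page of the node currently holding the resources:

[root@node1 ~]# curl http://192.168.0.183
Welcome to Node 1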

⑥ Control the start/stop order of resources

When Apache starts, it binds to the IP addresses available at that moment; it will not notice an IP added afterwards. So besides keeping the two resources on the same node, we must also make sure ClusterIP starts before Web. We achieve this with an ordering constraint: give it a name (apache-after-ip, for example), mark it mandatory (so that whenever ClusterIP is recovered, Web is restarted too), and state the start order of the two resources.

[root@node1 ~]# pcs constraint order ClusterIP then Web

[root@node1 ~]# pcs constraint # review the rules we have written

⑦ Specify a preferred location

(1) Pacemaker does not require your machines to have identical hardware; some may be better equipped than others. In that situation we want to set a rule so that, when a given node is available, resources run there in preference. To achieve this we create a location constraint.

(2) As before, we give the constraint a descriptive name (prefer-pcmk-1), state the resource we want to run there (Web), how strongly we want it there (a score of 50 here, though in a two-node cluster any value greater than 0 achieves the desired effect), and the name of the preferred node; see the sketch below:
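A sketch of the corresponding command, assuming node1 is the preferred node and using the score of 50 described above (pcs generates the constraint ID itself):

[root@node1 ~]# pcs constraint location Web prefers node1=50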

[root@node1 ~]# pcs constraint # review the rules we wrote

If you want to see the current placement scores, you can use the crm_simulate command:

[root@node1 ~]# crm_simulate -sL

⑧ Manually migrate resources in the cluster

[root@node1 ~]# pcs constraint location Web prefers node2=INFINITY
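An INFINITY preference like this is usually temporary. Once the resources have moved, you can list the constraints with their IDs and remove the one just added (the ID below is illustrative; use whatever pcs reports):

[root@node1 ~]# pcs constraint --full # note the ID of the new location constraint
[root@node1 ~]# pcs constraint remove location-Web-node2-INFINITY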

Pacemaker still has many powerful features waiting to be explored. Stay tuned!
