Corosync + Pacemaker + DRBD dual-node web high-availability solution


For websites that cannot tolerate interruption but receive little traffic, configuring two servers as a master/slave pair is a common approach.

This article uses corosync + pacemaker + drbd to build a complete dual-node LAMP high-availability cluster.


Create Basic Environment

First, check whether the kernel supports DRBD. My system is CentOS 6.2, whose default kernel is 2.6.32-220.el6.i686; DRBD was not merged into the mainline kernel until 2.6.33, so on older kernels a separate kernel module package must be installed. (I ran into problems installing the kernel module package on CentOS 6.2; the CentOS 5.x series has no such problem, so I actually compiled and installed DRBD from source.)
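To confirm this, a minimal check (standard commands; the module will only load once the kmod package or a hand-compiled module is installed):

[root@ha1 ~]# uname -r                                  # the running kernel version
[root@ha1 ~]# modprobe drbd && lsmod | grep drbd        # succeeds only after the drbd kernel module is installed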


List of supported drbd versions


The Sohu and 163 mirrors do not carry this version; I downloaded it from the official CentOS source:

http://mirror.centos.org/centos/5/extras/i386/


Configure the hostname on both nodes, add both names to the hosts file, and set up passwordless SSH between the two machines.

The detailed steps are not listed here; a minimal sketch is given below.

Hosts file

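A minimal sketch, using the hostnames and addresses that appear later in this article:

Append to /etc/hosts on both nodes:

172.16.93.148   ha1.lustlost.com ha1
172.16.93.149   ha2.lustlost.com ha2

Passwordless SSH (run on ha1, then repeat in the other direction on ha2):

[root@ha1 ~]# ssh-keygen -t rsa              # accept the defaults, empty passphrase
[root@ha1 ~]# ssh-copy-id root@ha2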


Step 1: DRBD

Create the disk partition to be synchronized

View disk partitions

[root@ha1 drbd.d]# fdisk -l /dev/sdb

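If /dev/sdb has not been partitioned yet, create the partition interactively; a minimal sketch, assuming a single primary partition spanning the whole disk:

[root@ha1 ~]# fdisk /dev/sdb
# n -> new partition, p -> primary, 1 -> partition number,
#      then accept the default first and last cylinders to use the whole disk
# w -> write the partition table and quit
[root@ha1 ~]# partprobe /dev/sdb             # have the kernel re-read the partition table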


The DRBD installation is split into two packages, the kernel module part and the user-space management part; both must be installed:

[root@ha1 drbd]# yum localinstall kmod-drbd83-8.3.8-1.el5.centos.i686.rpm drbd83-8.3.8-1.el5.centos.i386.rpm


(I actually installed DRBD by compiling from source, so my configuration files live under /usr/local/etc; the configuration paths below assume the rpm installation. For some reason I never figured out, installing the kernel module package fails on CentOS 6.2.)


Configure based on the sample configuration file:

[root@ha1 drbd]# cp /usr/share/doc/drbd83-8.3.8/drbd.conf /etc/

Edit the global configuration:

[root@ha1 drbd]# cd /etc/drbd.d/

[root@ha1 drbd.d]# vi global_common.conf

global {
        usage-count no;
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                wfc-timeout 120;
                degr-wfc-timeout 120;
        }

        disk {
                on-io-error detach;
                fencing resource-only;
        }

        net {
                cram-hmac-alg "sha1";
                shared-secret "lustlost";
        }

        syncer {
                rate 100M;
        }
}

Define the resource configuration:

[root@ha1 drbd.d]# vi web.res

resource web {
        on ha1.lustlost.com {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 172.16.93.148:7789;
                meta-disk internal;
        }
        on ha2.lustlost.com {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 172.16.93.149:7789;
                meta-disk internal;
        }
}

Copy the configuration files to ha2:

[root@ha1 drbd.d]# scp /etc/drbd.conf ha2:/etc/

[root@ha1 drbd.d]# scp /etc/drbd.d/* ha2:/etc/drbd.d/

Initialize the DRBD metadata

Run on both ha1 and ha2:

drbdadm create-md web


Start the service on both nodes, then check its status:

[root@ha1 drbd.d]# service drbd start

[root@ha2 drbd.d]# service drbd start

[root@ha2 drbd.d]# service drbd status



Once DRBD is running, make one of the nodes primary. Execute the following command on it and the data will synchronize automatically:

[root@ha1 drbd.d]# drbdadm -- --overwrite-data-of-peer primary web

View synchronization status

[root@ha1 drbd.d]# drbd-overview



After the synchronization is complete, check the status on the master node.



View the status of the slave node



Format the DRBD device

[root@ha1 drbd.d]# mkfs.ext3 /dev/drbd0

Create the mount point and mount it:

[root@ha1 drbd.d]# mkdir -p /web

[root@ha1 drbd.d]# mount /dev/drbd0 /web/


Step 2: LAMP Platform

For convenience, the packages are installed directly with yum:

[root@ha1 ~]# yum install httpd mysql-server php php-mysql -y

Edit the apache configuration so that the document root points to a directory under the DRBD mount point:

[root@ha1 ~]# vi /etc/httpd/conf/httpd.conf

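A minimal sketch of the relevant httpd.conf directives; the original shows the exact value only in a screenshot, so the /web/htdocs subdirectory is an assumption:

DocumentRoot "/web/htdocs"
<Directory "/web/htdocs">
    Options Indexes FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>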

Edit the mysql configuration so that the data directory also points to the DRBD mount point:

[root@ha1 ~]# vi /etc/my.cnf

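A minimal sketch of the change; the /web/data path is an assumption (the shell prompts below suggest a directory named data):

[mysqld]
datadir=/web/data

The directory must exist on the mounted DRBD volume and be owned by the mysql user before mysqld will start.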


DRBD, MySQL, and httpd will be configured as cluster services, so disable them from starting at boot; starting and stopping them is then entirely under the cluster's control:

[root@ha1 data]# chkconfig mysqld off

[root@ha1 data]# chkconfig httpd off

[root@ha1 data]# chkconfig drbd off

Configure the ha2 node as well.


Step 3: Corosync + Pacemaker

Install corosync and pacemaker:

[root@ha1 ~]# yum install corosync pacemaker -y

Edit the corosync global configuration:

[root@ha1 ~]# cd /etc/corosync/

[root@ha1 corosync]# vi corosync.conf

The configuration is as follows:

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 172.16.93.148
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
        ver: 0
        name: pacemaker
}

aisexec {
        user: root
        group: root
}


Generate the authentication key:

[root@ha1 corosync]# corosync-keygen

On a newly installed system, or one that has not been up long, the kernel entropy pool may not contain enough random data, in which case corosync-keygen will hang rather than produce a key. Running a command such as find / in another terminal generates disk I/O that feeds the entropy pool.
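For example, in a second terminal while corosync-keygen is waiting:

[root@ha1 corosync]# find / > /dev/null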

Copy the configuration file and key to ha2:

[root@ha1 corosync]# scp -p corosync.conf authkey ha2:/etc/corosync/

Then enter the crm configuration shell. (I have to say crm is really pleasant to use: every command and option can be tab-completed, much like configuring a switch or router.)

[root@ha1 corosync]# crm

crm(live)# configure

Configure three global properties:

crm(live)configure# property no-quorum-policy=ignore      # a two-node cluster cannot have quorum after a failure, so ignore it

crm(live)configure# property stonith-enabled=false        # no STONITH device is available, so disable it

crm(live)configure# rsc_defaults resource-stickiness=100  # raise resource stickiness so resources do not move back when a failed node recovers

Plan the resources to be configured.

The dual-node LAMP setup involves five resources: the virtual IP address, mysql, httpd, the DRBD device, and the mounted file system. All of them must run on the same node, so they are tied together with a colocation constraint, and the order in which the five resources start must also be made explicit.

First, define these resources.

Cluster virtual IP address

crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=172.16.93.150 cidr_netmask=255.255.255.0

Database

crm(live)configure# primitive webmysql lsb:mysqld

Web Server

crm(live)configure# primitive webserver lsb:httpd

DRBD device

crm(live)configure# primitive webdrbd ocf:heartbeat:drbd params drbd_resource=web

DRBD master/slave definition

crm(live)configure# ms ms_drbd webdrbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

Mount definition

crm(live)configure# primitive webfs ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/web fstype=ext3 op start timeout=60s op stop timeout=60s

Define a colocation constraint to keep all the resources together:

crm(live)configure# colocation myweb inf: webip webserver webmysql webfs ms_drbd:Master

Define the start order:

crm(live)configure# order webfs_after_drbd inf: ms_drbd:promote webfs:start       # mount the file system only after DRBD is promoted to primary

crm(live)configure# order webmysql_after_webfs inf: webfs:start webmysql:start    # start mysql only after the file system is mounted

crm(live)configure# order webserver_after_webip inf: webip:start webserver:start  # start apache only after the IP address is configured

crm(live)configure# order webserver_after_webfs inf: webfs:start webserver:start  # start apache only after the file system is mounted
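The definitions take effect only once committed; the usual closing steps in the crm shell are:

crm(live)configure# verify       # check the pending configuration for errors
crm(live)configure# commit       # activate the configuration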

View all the defined resources

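In the crm shell, all defined resources can be listed with:

crm(live)configure# show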


View resource startup status

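The live resource state can be checked with:

[root@ha1 ~]# crm status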

Disabling the NIC on ha1 causes all resources to fail over to ha2, confirming that the cluster behaves as intended.
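A gentler way to rehearse the same failover, without touching the network, is to put ha1 into standby (standard crm node subcommands):

[root@ha1 ~]# crm node standby ha1       # force all resources off ha1
[root@ha1 ~]# crm node online ha1        # bring ha1 back; resource stickiness keeps the services on ha2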

This article is from the "lustlost - lost in desire" blog; please keep this source when reposting: http://lustlost.blog.51cto.com/2600869/1177524
