High Availability Cluster Experiment Four: DRBD + Corosync + Pacemaker



Building on the previous article, we add DRBD on both servers and then put it under Corosync + Pacemaker control:

First, a note: the operating system used in this experiment is CentOS 6.4 (kernel 2.6.32-358.el6.x86_64), which does not ship with the DRBD kernel module (it is bundled only from kernel 2.6.33 onward), so the module has to be added. The RPM packages found online have trouble recognizing CentOS 6.4: they keep demanding the 2.6.32-358.el6.x86_64 kernel even though that is exactly the version already installed, and others online have reported the same problem on CentOS 6.4. So the example below does not install from an RPM package but compiles the source code downloaded from the official site.
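Before deciding on an install method, it helps to confirm the running kernel version and whether a DRBD module is already available (a generic check, not specific to this setup):

uname -r                 # should print 2.6.32-358.el6.x86_64 on this system
modinfo drbd 2>/dev/null || echo "no drbd module found, build it from source"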

On both servers

1. Add a new disk, /dev/sdb.

2. Create a new partition, sdb1:

fdisk /dev/sdb

partprobe /dev/sdb
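For reference, here is a non-interactive sketch of the same partitioning step, assuming the whole disk becomes a single primary partition (the piped answers are: new partition, primary, number 1, default first and last cylinder, write):

printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb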

3. Install the kernel development toolchain:

yum -y install kernel-devel kernel-headers flex gcc

4. Unpack the source, compile it, and add it as a service:

tar zxf drbd-8.4.3.tar.gz
cd drbd-8.4.3
./configure --with-km
make KDIR=/usr/src/kernels/2.6.32-358.el6.x86_64
make install
mkdir -p /usr/local/var/run/drbd    # this directory must be created
cp /usr/local/etc/rc.d/init.d/drbd /etc/rc.d/init.d/
# Note: unlike an RPM install, a source install puts the configuration files under /usr/local/etc/...
chkconfig --add drbd
chkconfig drbd on
cd drbd
make clean
make KDIR=/usr/src/kernels/2.6.32-358.el6.x86_64
cp drbd.ko /lib/modules/`uname -r`/kernel/lib/
modprobe drbd

5. Check that the module has been loaded:

lsmod | grep drbd

On one of the two nodes
1. Edit the global configuration file:
vim /usr/local/etc/drbd.d/global_common.conf
global {
    # usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}

common {  # provides default attributes for all resources
    protocol C;

    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }

    startup {
        # wfc-timeout 120;
        # degr-wfc-timeout 120;
    }

    disk {
        on-io-error detach;
        # fencing resource-only;
    }

    net {
        cram-hmac-alg "sha1";
        shared-secret "thisissecret";
    }

    syncer {
        rate 500M;
    }
}


2. Add a resource file:
vim /usr/local/etc/drbd.d/drbdweb.res
resource drbdweb {
    device /dev/drbd0;
    disk /dev/sdb1;
    meta-disk internal;
    on node1.test.net {
        address 192.168.1.2:7789;
    }
    on node2.test.net {
        address 192.168.1.3:7789;
    }
}
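Before copying the files over, the configuration syntax can be sanity-checked: drbdadm parses everything and prints it back, or reports parse errors:

drbdadm dump all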
3. Copy the configuration files to the other node:
scp -r /usr/local/etc/drbd.* 192.168.1.3:/usr/local/etc/


On both servers, initialize the resource, start the service, and check the status:
drbdadm create-md drbdweb
service drbd start
cat /proc/drbd
drbd-overview
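The initial synchronization can take a while; assuming the watch utility is installed, progress can be followed live:

watch -n2 'cat /proc/drbd'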


On one of the nodes:
1. Promote it to primary:
drbdadm -- --overwrite-data-of-peer primary drbdweb
2. Format and mount:
mkfs.ext4 /dev/drbd0
mkdir /drbd
mount /dev/drbd0 /drbd
3. Create a file, unmount, and demote to secondary:
touch /drbd/something
umount /drbd
drbdadm secondary drbdweb


On the other node, promote it to primary, mount, and check that the file just created is there:
drbdadm primary drbdweb
drbd-overview
mkdir /drbd
mount /dev/drbd0 /drbd
ls /drbd


At this point, the DRBD experiment has been completed.


A side note: if a DRBD split-brain occurs during the experiments below, it has to be resolved manually before the resource can be used again. The method is as follows:
1. On the node whose data is to be discarded:
drbdadm secondary <resource>
drbdadm connect --discard-my-data <resource>
2. On the other node:
drbdadm connect <resource>
For more details, refer to the official documentation: http://drbd.linbit.com/users-guide/s-resolve-split-brain.html
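Applied to this experiment's resource, and assuming (purely as an example) that node2's changes are the ones to throw away, the recovery would look like this:

# on node2, the node whose data is discarded
drbdadm secondary drbdweb
drbdadm connect --discard-my-data drbdweb
# on node1, the surviving node
drbdadm connect drbdweb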



Next, have Corosync + Pacemaker take over DRBD:


On both servers
1. Unmount and stop the service:
umount /drbd
service drbd stop
chkconfig drbd off
2. Edit the configuration file:
vim /etc/corosync/corosync.conf


In effect, add the following to the configuration file from the previous experiment:
# define the user Corosync runs as; the root user is required
aisexec {
    user: root
    group: root
}
3. Start the service:
service corosync start
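Once the service is running, membership and ring status can be checked with the standard tools:

corosync-cfgtool -s     # ring status, should report no faults
crm_mon -1              # one-shot view of the cluster status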


On one of the nodes:
1. View the resource agents:
crm ra list ocf linbit
crm ra info ocf:linbit:drbd
2. Add resources and constraints:
crm
configure primitive resdrbdweb ocf:linbit:drbd params drbd_resource=drbdweb op start timeout=240s op stop timeout=100s
configure ms ms_drbdweb resdrbdweb meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
configure primitive drbdfs ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/drbd" fstype="ext4" op start timeout=60s op stop timeout=60s
configure colocation drbdfs_and_ms_drbdweb inf: drbdfs ms_drbdweb:Master
configure order ms_drbdweb_before_drbdfs mandatory: ms_drbdweb:promote drbdfs:start
3. The final result is as follows:
configure show

[Screenshot: output of crm configure show]

The reshttpd and resip resources shown there are from the previous experiment and can be ignored.
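Besides eyeballing the output of configure show, crmsh can also machine-check the configuration:

crm configure verify     # runs a consistency check against the CIB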

Failover test:

1. View the status:

crm status

2. Create a file on the master node and put that node into standby:

touch /drbd/another

crm node standby

3. On the other node, verify the status change and the file sync:

crm status

ls /drbd/

4. Bring the node that was just placed in standby back online:

crm node online
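The whole failover check can also be scripted as a small smoke test. This is only a sketch: it assumes passwordless SSH to the peer (node2.test.net here) and that the script runs on the node currently holding the master role:

#!/bin/bash
touch /drbd/failover-$$              # marker file created on the current master
crm node standby                     # push the resources over to the peer
sleep 20                             # give Pacemaker time to promote and mount
ssh node2.test.net 'ls /drbd/'       # the marker file should be listed here
crm node online                      # rejoin the cluster afterwards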


This article is from the "Xin-Lu-Li-Cheng" blog; please keep this source: http://orzorz.blog.51cto.com/4228156/1703559

