Build a High-Availability MySQL Cluster Based on Corosync + DRBD


1. Experiment Environment

Node1: 192.168.1.17 (RHEL 5.8, 32-bit, MySQL node)

Node2: 192.168.1.18 (RHEL 5.8, 32-bit, MySQL node)

SteppingStone: 192.168.1.19 (RHEL 5.8, 32-bit, jump server)

VIP: 192.168.1.20


2. Preparations

<1> Configure the Host Name

Node names are resolved via /etc/hosts, and each node's name must match the output of the uname -n command.
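
A quick sanity check on each node is to compare the two values directly; both commands should print the node's FQDN (shown here for Node1):

# uname -n
node1.ikki.com
# hostname
node1.ikki.com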

Node1:

# hostname node1.ikki.com
# vim /etc/sysconfig/network
HOSTNAME=node1.ikki.com

Node2:

# hostname node2.ikki.com
# vim /etc/sysconfig/network
HOSTNAME=node2.ikki.com

<2> Configure SSH key-based communication between the nodes

Node1:

# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2

Node2:

# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1

<3> Configure host-name-based communication between the nodes

Node1 & Node2:

# vim /etc/hosts
192.168.1.17   node1.ikki.com node1
192.168.1.18   node2.ikki.com node2

<4> Configure Time Synchronization for each node

Node1 & Node2:

# crontab -e
*/5 * * * *     /sbin/ntpdate 202.120.2.101 &> /dev/null
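
Before relying on the cron job, the NTP server can be probed with a query-only run (the -q option reports the offset without setting the clock); the address is the one used in the crontab entry above:

# ntpdate -q 202.120.2.101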

<5> Configure the SteppingStone (jump server)

Establish SSH mutual trust with Node1 and Node2 and enable host-name-based communication:

# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
# vim /etc/hosts
192.168.1.17   node1.ikki.com node1
192.168.1.18   node2.ikki.com node2

Create a small script tool, step, that runs a given command on both nodes in turn:

# vim step
#!/bin/bash
if [ $# -eq 1 ]; then
  for I in {1..2}; do
    ssh node$I "$1"
  done
else
  echo "Usage: step 'COMMANDs'"
fi
# chmod +x step
# mv step /usr/sbin
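
A simple way to verify the script and the trust relationship is to run a harmless command on both nodes, for example:

# step 'date'

If everything is set up correctly, the current time of node1 and node2 is printed in turn.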

<6> On Node1 and Node2, provide a partition of the same size to serve as the DRBD device.

Create a 1 GB partition on each node (the new partition is /dev/sda5):

# fdisk /dev/sda
n --> e --> n --> +1G --> w
# partprobe /dev/sda
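
To confirm that the kernel sees the new partition on both nodes (the resource definition below assumes it is /dev/sda5), something like the following can be used:

# grep sda5 /proc/partitions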


3. Install the DRBD kernel module and management tools

Install the latest 8.3 release:

drbd83-8.3.15-2.el5.centos.i386.rpm

kmod-drbd83-8.3.15-3.el5.centos.i686.rpm

Install remotely from the SteppingStone:

# step 'yum -y --nogpgcheck localinstall drbd83-8.3.15-2.el5.centos.i386.rpm kmod-drbd83-8.3.15-3.el5.centos.i686.rpm'


4. Configure DRBD (Node1)

<1> Copy the sample file as the configuration file:

# cp /usr/share/doc/drbd83-8.3.15/drbd.conf /etc

<2> Configure /etc/drbd.d/global_common.conf

global {
        usage-count no;          # disable usage statistics
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when choosing your poison.
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }

        disk {
                on-io-error detach;      # detach the backing device on disk I/O errors
                # fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
        }

        net {
                # sndbuf-size rcvbuf-size timeout connect-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries after-sb-0pri after-sb-1pri
                # after-sb-2pri data-integrity-alg no-tcp-cork
                cram-hmac-alg "sha1";
                shared-secret "mydrbd7788";   # shared password for peer authentication
        }

        syncer {
                rate 200M;               # synchronization rate
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        }
}

<3> Define a resource in /etc/drbd.d/mydrbd.res with the following content:

resource mydrbd {
        device    /dev/drbd0;
        disk      /dev/sda5;
        meta-disk internal;
        on node1.ikki.com {
                address 192.168.1.17:7789;
        }
        on node2.ikki.com {
                address 192.168.1.18:7789;
        }
}

Synchronize all of the above configuration files to the other node:

# scp -r /etc/drbd.*  node2:/etc
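
Optionally, the configuration can be checked for syntax errors on either node before initializing; drbdadm dump parses the configuration files and prints back the resulting resource definition:

# drbdadm dump mydrbd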


5. Initialize the defined resource on both nodes and start the service

<1> Initialize the resource (Node1 and Node2):

# drbdadm create-md mydrbd

<2> Start the service (Node1 and Node2):

# /etc/init.d/drbd start

<3> View the startup status (Node1):

# cat /proc/drbd
version: 8.3.15 (api:88/proto:86-97)
GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build by mockbuild@builder17.centos.org, 2013-03-27 16:04:08
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:987896

<4> Set the current node as the primary node (Node1):

# drbdadm -- --overwrite-data-of-peer primary mydrbd

Note: this form is only needed for the initial promotion, before the first synchronization.

View the status again:

# drbd-overview
  0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----

Note: in the role pair Primary/Secondary, the first value is the current node and the second is the peer node.
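
If the status still shows Inconsistent, the initial synchronization of the device is in progress; it can be watched on either node until both sides report UpToDate, for example:

# watch -n 1 'cat /proc/drbd'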


6. Create a file system and mount it on the primary node (Node1)

The file system can only be mounted on the Primary node, so format the DRBD device on the primary node:

# mke2fs -j /dev/drbd0
# mkdir /mydata
# mount /dev/drbd0 /mydata
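
A quick check that the device is mounted and has the expected size:

# df -h /mydata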


7. Switch the primary node to test replication

Node1:

# cp /etc/inittab /mydata
# umount /mydata
# drbdadm secondary mydrbd
# drbd-overview
  0:mydrbd  Connected Secondary/Secondary UpToDate/UpToDate C r-----

Node2:

# drbdadm primary mydrbd
# drbd-overview
  0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----
# mkdir /mydata
# mount /dev/drbd0 /mydata
# ls /mydata


8. Configure OpenAIS/Corosync + Pacemaker

<1> Install Corosync and Pacemaker (SteppingStone):

# cd /root/corosync/
# ls
cluster-glue-1.0.6-1.6.el5.i386.rpm
cluster-glue-libs-1.0.6-1.6.el5.i386.rpm
corosync-1.2.7-1.1.el5.i386.rpm
corosynclib-1.2.7-1.1.el5.i386.rpm
heartbeat-3.0.3-2.3.el5.i386.rpm
heartbeat-libs-3.0.3-2.3.el5.i386.rpm
libesmtp-1.0.4-5.el5.i386.rpm
pacemaker-1.1.5-1.1.el5.i386.rpm
pacemaker-libs-1.1.5-1.1.el5.i386.rpm
resource-agents-1.0.4-1.1.el5.i386.rpm
# step 'mkdir /root/corosync'
# for I in {1..2}; do scp *.rpm node$I:/root/corosync; done
# step 'yum -y --nogpgcheck localinstall /root/corosync/*.rpm'
# step 'mkdir /var/log/cluster'

<2> Modify the Corosync configuration and generate the authentication key (Node1):

# cd /etc/corosync/
# cp corosync.conf.example corosync.conf
# vim corosync.conf
Modify the following items:
secauth: on
threads: 2
bindnetaddr: 192.168.1.0
to_syslog: no
Add the following content:
service {
        ver:  0
        name: pacemaker
}
aisexec {
        user:  root
        group: root
}
# corosync-keygen
# scp -p authkey corosync.conf node2:/etc/corosync/

<3> Start the service and check the logs (Node1):

# service corosync start
# ssh node2 'service corosync start'
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
# grep TOTEM /var/log/cluster/corosync.log
# grep pcmk_startup /var/log/cluster/corosync.log

<4> Configure cluster attributes

Disable STONITH, ignore loss of quorum, and set the default resource stickiness:

# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore
# crm configure rsc_defaults resource-stickiness=100
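
The effect of these settings can be checked with the cluster's own validator; before stonith-enabled is set to false, crm_verify reports errors because no STONITH device is defined:

# crm_verify -L -V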

View cluster configuration:

# crm configure show
node node1.ikki.com
node node2.ikki.com
property $id="cib-bootstrap-options" \
        dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"


9. Define the configured DRBD device /dev/drbd0 as a cluster service

<1> Stop the drbd service and disable it at boot (Node1 and Node2):

# service drbd stop
# chkconfig drbd off

<2> Configure DRBD as a cluster resource (Node1)

Add a primitive for the DRBD resource mydrbd and define it as a master/slave resource:

# crm configure primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mydrbd op start timeout=240 op stop timeout=100 op monitor role=Master interval=20 timeout=30 op monitor role=Slave interval=30 timeout=30
# crm configure ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

Note: the cluster resource name must not be the same as the DRBD resource name (here mysqldrbd vs. mydrbd). If crm status reports an error, check the configuration and restart the corosync service.
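
If a failed action lingers in crm status after the configuration has been corrected, the failure record can be cleared so the cluster re-evaluates the resource, for example:

# crm resource cleanup mysqldrbd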

View the running status of the current cluster:

# crm status
============
Last updated: Sat Sep 21 23:27:01 2013
Stack: openais
Current DC: node1.ikki.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
1 Resources configured.
============
Online: [ node2.ikki.com node1.ikki.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node1.ikki.com ]
     Slaves: [ node2.ikki.com ]

<3> Create an automatically mounted file-system resource (Node1):

# crm configure primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype=ext3 op start timeout=60 op stop timeout=60
# crm configure colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
# crm configure order mystore_after_ms_mysqldrbd mandatory: ms_mysqldrbd:promote mystore:start

View the running status of the resource:

# crm status
============
Last updated: Sat Sep 21 23:55:01 2013
Stack: openais
Current DC: node1.ikki.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node2.ikki.com node1.ikki.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node1.ikki.com ]
     Slaves: [ node2.ikki.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node1.ikki.com

<4> Simulate a failure

When node1 is put into standby, the resources are transferred to node2:

# crm node standby
# crm status
============
Last updated: Sat Sep 21 23:59:38 2013
Stack: openais
Current DC: node1.ikki.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Node node1.ikki.com: standby
Online: [ node2.ikki.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node2.ikki.com ]
     Stopped: [ mysqldrbd:0 ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.ikki.com
# ls /mydata/
inittab  lost+found

Bring node1 back online; node2 remains the master node (because of the resource stickiness):

# crm node online
# crm status
============
Last updated: Sat Sep 21 23:59:59 2013
Stack: openais
Current DC: node1.ikki.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ node2.ikki.com node1.ikki.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node2.ikki.com ]
     Slaves: [ node1.ikki.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.ikki.com


10. Configure the high-availability MySQL cluster service

<1> Install the MySQL service on each node (from the SteppingStone):

Here MySQL 5.5.28 is installed from the generic binary tarball:

# for I in {1..2}; do scp mysql-5.5.28-linux2.6-i686.tar.gz node$I:/usr/src/; done
# step 'tar -xf /usr/src/mysql-5.5.28-linux2.6-i686.tar.gz -C /usr/local'
# step 'ln -sv /usr/local/mysql-5.5.28-linux2.6-i686 /usr/local/mysql'
# step 'groupadd -g 3306 mysql'
# step 'useradd -u 3306 -g mysql -s /sbin/nologin -M mysql'
# step 'mkdir /mydata/data'
# step 'chown -R mysql.mysql /mydata/data'
# step 'chown -R root.mysql /usr/local/mysql/*'
# step 'cp /usr/local/mysql/support-files/my-large.cnf /etc/my.cnf'
# step 'cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld'
# step 'chkconfig --add mysqld'

<2> Initialize MySQL on the current master node and test startup (Node2):

# cd /usr/local/mysql
# scripts/mysql_install_db --user=mysql --datadir=/mydata/data
# vim /etc/my.cnf
Add the following under [mysqld]:
datadir = /mydata/data
# service mysqld start
# service mysqld stop
# chkconfig mysqld off

<3> Set Node1 as the master node and configure MySQL (no re-initialization is needed):

Put node2 into standby so the resources are transferred to node1, then bring it back online (Node2):

# crm node standby
# crm node online

Configure the MySQL service on node1 and test startup:

# vim /etc/my.cnf
Add the following under [mysqld]:
datadir = /mydata/data
# service mysqld start
# service mysqld stop
# chkconfig mysqld off

<4> Configure the primitive resources mysqld and vip (Node1):

# crm configure primitive mysqld lsb:mysqld
# crm configure colocation mysqld_with_mystore inf: mysqld mystore
# crm configure order mysqld_after_mystore mandatory: mystore mysqld
# crm configure primitive vip ocf:heartbeat:IPaddr params ip=192.168.1.20 nic=eth0 cidr_netmask=24
# crm configure colocation vip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master vip
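
After these commands, it can be confirmed on the master node that the VIP was actually added to eth0 (the address and interface are the ones passed to the vip primitive):

# ip addr show eth0 | grep 192.168.1.20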

View the running status of the resource:

# crm status
============
Last updated: Sun Sep 22 13:03:27 2013
Stack: openais
Current DC: node1.ikki.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
4 Resources configured.
============
Online: [ node2.ikki.com node1.ikki.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node1.ikki.com ]
     Slaves: [ node2.ikki.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node1.ikki.com
 mysqld (lsb:mysqld):   Started node1.ikki.com
 vip    (ocf::heartbeat:IPaddr):        Started node1.ikki.com

View cluster configuration:

# crm configure show
node node1.ikki.com \
        attributes standby="off"
node node2.ikki.com \
        attributes standby="off"
primitive mysqld lsb:mysqld
primitive mysqldrbd ocf:linbit:drbd \
        params drbd_resource="mydrbd" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100" \
        op monitor interval="20" role="Master" timeout="30" \
        op monitor interval="30" role="Slave" timeout="30"
primitive mystore ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/mydata" fstype="ext3" \
        op start interval="0" timeout="60" \
        op stop interval="0" timeout="60"
primitive vip ocf:heartbeat:IPaddr \
        params ip="192.168.1.20" nic="eth0" cidr_netmask="24"
ms ms_mysqldrbd mysqldrbd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation mysqld_with_mystore inf: mysqld mystore
colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master
colocation vip_with_ms_mysqldrbd inf: ms_mysqldrbd:Master vip
order mysqld_after_mystore inf: mystore mysqld
order mystore_after_ms_mysqldrbd inf: ms_mysqldrbd:promote mystore:start
property $id="cib-bootstrap-options" \
        dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100"


11. Simulate a failure to test failover

Configure a MySQL remote-access account on the master node (Node1):

# /usr/local/mysql/bin/mysql
mysql> grant all on *.* to root@'%' identified by 'ikki';
mysql> flush privileges;

Access MySQL remotely from the SteppingStone (jump server):

# mysql -uroot -h192.168.1.20 -p
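
During the failover tests below, querying the server's hostname through the VIP shows at a glance which node is currently answering:

# mysql -uroot -h192.168.1.20 -p -e 'SELECT @@hostname;'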

Set node1 to standby and check the cluster status (Node1):

# crm node standby
# crm status
============
Last updated: Sun Sep 22 13:47:00 2013
Stack: openais
Current DC: node1.ikki.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
4 Resources configured.
============
Node node1.ikki.com: standby
Online: [ node2.ikki.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node2.ikki.com ]
     Stopped: [ mysqldrbd:0 ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.ikki.com
 mysqld (lsb:mysqld):   Started node2.ikki.com
 vip    (ocf::heartbeat:IPaddr):        Started node2.ikki.com

Access MySQL remotely from the SteppingStone (jump server) again:

# mysql -uroot -h192.168.1.20 -p

Set node1 back online and view the cluster status (Node1):

# crm node online
# crm status
============
Last updated: Sun Sep 22 13:52:09 2013
Stack: openais
Current DC: node1.ikki.com - partition with quorum
Version: 1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f
2 Nodes configured, 2 expected votes
4 Resources configured.
============
Online: [ node2.ikki.com node1.ikki.com ]

 Master/Slave Set: ms_mysqldrbd [mysqldrbd]
     Masters: [ node2.ikki.com ]
     Slaves: [ node1.ikki.com ]
 mystore        (ocf::heartbeat:Filesystem):    Started node2.ikki.com
 mysqld (lsb:mysqld):   Started node2.ikki.com
 vip    (ocf::heartbeat:IPaddr):        Started node2.ikki.com


This article is from the "Don't dead birds a Hui" blog; please keep this source: http://phenixikki.blog.51cto.com/7572938/1305252
