Linux corosync + pacemaker + DRBD + MySQL configuration and installation in detail

I: Introduction to the basic environment and basic environment configuration

Node 1: node1.hulala.com 192.168.1.35 CentOS 6.5 x86_64, with a new 8G hard drive added
Node 2: node2.hulala.com 192.168.1.36 CentOS 6.5 x86_64, with a new 8G hard drive added
VIP: 192.168.1.39

The following configuration is required on both Node1 and Node2.

Modify the host name (node1.hulala.com on Node1, node2.hulala.com on Node2):

vim /etc/sysconfig/network
HOSTNAME=node1.hulala.com

Configure hosts resolution:

vim /etc/hosts
192.168.1.35 node1.hulala.com node1
192.168.1.36 node2.hulala.com node2
Sync the system time:
ntpdate cn.pool.ntp.org
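To keep the two nodes' clocks aligned afterwards, a periodic resync can be scheduled on both nodes, for example with a cron entry like the sketch below (the 30-minute interval and the NTP server here are only examples, not part of the original setup):
[root@node1~]# crontab -e
*/30 * * * * /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1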
Shut down the firewall and SELinux:

service iptables stop
chkconfig iptables off
vim /etc/sysconfig/selinux
SELINUX=disabled
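If a reboot is not convenient right away, SELinux can also be switched to permissive mode immediately and disabled persistently in one go, as sketched below (sed is only one way to do it; editing the file by hand is equivalent):
[root@node1~]# setenforce 0
[root@node1~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux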

The configuration above is required on both nodes; reboot both nodes once it is complete.

II: Configure SSH mutual trust

[root@node1~]# ssh-keygen -t rsa -b 1024
[root@node1~]# ssh-copy-id root@192.168.1.36
[root@node2~]# ssh-keygen -t rsa -b 1024
[root@node2~]# ssh-copy-id root@192.168.1.35
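To confirm that the mutual trust works, run a remote command from each node; it should complete without asking for a password (a simple check, not part of the original steps):
[root@node1~]# ssh node2 'hostname'
[root@node2~]# ssh node1 'hostname'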

III: DRBD installation and configuration (perform the same operations on Node1 and Node2)

[root@node1~]# wget -c http://elrepo.org/linux/elrepo/el6/x86_64/RPMS/drbd84-utils-8.4.2-1.el6.elrepo.x86_64.rpm
[root@node1~]# wget -c http://elrepo.org/linux/elrepo/el6/x86_64/RPMS/kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64.rpm
[root@node1~]# rpm -ivh *.rpm

Generate a SHA1 hash to use as the shared-secret:

[root@node1~]# sha1sum /etc/drbd.conf
8a6c5f3c21b84c66049456d34b4c4980468bcfb3  /etc/drbd.conf

Create and edit the resource configuration file /etc/drbd.d/dbcluster.res:

[root@node1~]# vim /etc/drbd.d/dbcluster.res
resource dbcluster {
    protocol C;
    net {
        cram-hmac-alg sha1;
        shared-secret "8a6c5f3c21b84c66049456d34b4c4980468bcfb3";
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    device /dev/drbd0;
    disk /dev/sdb1;
    meta-disk internal;
    on node1.hulala.com {
        address 192.168.1.35:7789;
    }
    on node2.hulala.com {
        address 192.168.1.36:7789;
    }
}

Description of the parameters in the configuration above:

RESOURCE: the resource name
PROTOCOL: protocol "C" means "synchronous", i.e. a write is considered complete only after the remote write acknowledgement has been received
NET: the two nodes use the same SHA1 shared secret
after-sb-0pri: when "split brain" occurs and neither node's data has changed, simply reconnect the two nodes normally
after-sb-1pri: if one node's data has changed, discard the secondary's data and resynchronize from the primary
after-sb-2pri: if both nodes' data has changed, disconnect
rr-conflict: if the previous settings cannot be applied and DRBD has a role conflict, automatically disconnect the nodes
meta-disk: the metadata is stored on the same disk (sdb1)
on <node>: the nodes that make up the cluster
Copy the DRBD configuration to the other node:

[root@node1~]# scp /etc/drbd.d/dbcluster.res root@192.168.1.36:/etc/drbd.d/

Create the resource and file system:

Create a partition (unformatted):
On both Node1 and Node2, create a partition on /dev/sdb (interactively, or see the non-interactive sketch below):
[root@node1~]# fdisk /dev/sdb
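If you prefer not to step through fdisk interactively, the same single primary partition can be created by piping the keystrokes in, as in this sketch (it assumes the whole 8G disk becomes /dev/sdb1):
[root@node1~]# echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdb
[root@node1~]# partprobe /dev/sdb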
Create the metadata for the resource (dbcluster) on Node1 and Node2:
[root@node1 drbd]# drbdadm create-md dbcluster
Activate the resource (check on both Node1 and Node2)
– First make sure the DRBD module is loaded.
Check whether it is loaded:
# lsmod | grep drbd
If it is not loaded, load it:
# modprobe drbd
# lsmod | grep drbd
drbd 317261 0
libcrc32c 1246 1 drbd
– Bring up the DRBD resource:
[root@node1 drbd]# drbdadm up dbcluster
[root@node2 drbd]# drbdadm up dbcluster
Check the DRBD status (on Node1 and Node2):
[root@node2 drbd]# /etc/init.d/drbd status
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by dag@Build64R6, 2012-09-06 08:16:10
m:res        cs         ro                   ds                         p  mounted  fstype
0:dbcluster  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C
As the output above shows, the DRBD service is running on both machines, but neither machine is the primary ("primary" host), so the resource (block device) cannot be accessed yet.
Start the synchronization:
Run this only on the primary node (here, Node1):
[root@node1 drbd]# drbdadm -- --overwrite-data-of-peer primary dbcluster
Check the synchronization status:
[root@node1 drbd.d]# cat /proc/drbd
version: 8.4.2 (api:1/proto:86-101)
GIT-hash: 7ad5f850d711223713d6dcadc3dd48860321070c build by dag@Build64R6, 2012-09-06 08:16:10
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8297248 nr:0 dw:0 dr:8297912 al:0 bm:507 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Notes on the output above:
cs (connection state): the network connection status
ro (roles): the roles of the nodes (this node's role is shown first)
ds (disk states): the status of the backing disks
Replication protocol: A, B or C (this configuration uses C)
When the DRBD status shows "cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate", the synchronization has finished.
You can also view the DRBD status like this:
[root@centos193 drbd]# drbd-overview
0:dbcluster/0 Connected Secondary/Primary UpToDate/UpToDate C r-----
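While the initial synchronization is still running, its progress can also be followed continuously, for example with watch (purely a convenience, not a required step):
[root@node1 drbd]# watch -n2 cat /proc/drbd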
Create the file system:
Create a file system on the master node (Node1):
[root@node1 drbd]# mkfs -t ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
.......
or 180 days, whichever comes first. Use tune2fs -c or -i to override.
Note: this is not necessary on the secondary node (Node2), because DRBD handles the synchronization of the raw disk data.
In addition, we do not need to mount the DRBD device on either machine ourselves (except temporarily, to install MySQL), because the cluster management software will handle that. Also, make sure the replicated file system is only ever mounted on the active primary server.

IV: MySQL installation

1. Install MySQL on both Node1 and Node2:
yum install mysql* -y
2. Stop the MySQL service on both Node1 and Node2:
[root@node1~]# service mysql stop
Shutting down MySQL. [OK]
3. On both Node1 and Node2, create the database directory and change its ownership to mysql:
[root@node1 /]# mkdir -p /mysql/data
[root@node1 /]# chown -R mysql:mysql /mysql
4. With MySQL stopped, temporarily mount the DRBD file system on the primary node (Node1):
[root@node1 ~]# mount /dev/drbd0 /mysql/
5. On both Node1 and Node2, edit the my.cnf file:
Add the new data directory path under [mysqld] (see the sketch below):
datadir=/mysql/data
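For reference, the [mysqld] section would then look roughly like this (a sketch based on the stock CentOS 6 my.cnf; only the datadir line differs from the default):
[mysqld]
datadir=/mysql/data
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0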
6. Copy all files and directories under the default data path to the new directory (do this on Node1 only):
[root@node1 mysql]# cd /var/lib/mysql
[root@node1 mysql]# cp -R * /mysql/data/
On both Node1 and Node2, note that the copied files must end up owned by mysql; here we simply chown the whole /mysql directory again:
[root@node1 mysql]# chown -R mysql:mysql /mysql
7. Start MySQL on Node1 and do a login test:
[root@node1 mysql]# service mysql start
[root@node1 mysql]# mysql
8. Stop MySQL, unmount the DRBD file system on Node1, and demote the resource to secondary:
[root@node1 ~]# service mysql stop
[root@node1 ~]# umount /mysql
[root@node1 ~]# drbdadm secondary dbcluster
9. Promote Node2 and mount the DRBD file system there:
[root@node2 ~]# drbdadm primary dbcluster
[root@node2 ~]# mount /dev/drbd0 /mysql/
10. Configure MySQL on Node2 and test it:
[root@node1 ~]# scp /etc/my.cnf node2:/etc/my.cnf
[root@node2 ~]# chown mysql /etc/my.cnf
[root@node2 ~]# chmod 644 /etc/my.cnf
MySQL login test on Node2:
[root@node2 ~]# service mysql start
[root@node2 ~]# mysql
11. Stop MySQL, unmount the DRBD file system on Node2, and hand everything over to the cluster manager, Pacemaker:
[root@node2~]# service mysql stop
[root@node2~]# umount /mysql
[root@node2~]# drbdadm secondary dbcluster
[root@node2~]# drbd-overview
0:dbcluster/0 Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@node2~]#

V: Corosync and Pacemaker installation (required on both Node1 and Node2)

Install Pacemaker's mandatory dependencies:
[root@node1~]# yum -y install automake autoconf libtool-ltdl-devel pkgconfig python glib2-devel libxml2-devel libxslt-devel python-devel gcc-c++ bzip2-devel gnutls-devel pam-devel
Install the cluster stack dependencies:
[root@node1~]# yum -y install clusterlib-devel corosynclib-devel
Install Pacemaker's optional dependencies:
[root@node1~]# yum -y install ncurses-devel openssl-devel cluster-glue-libs-devel docbook-style-xsl

Pacemaker installation:

[root@node1~]# yum -y install pacemaker

crmsh installation:

[root@node1~]# wget -P /etc/yum.repos.d/ http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@node1~]# yum -y install crmsh

1. Configure Corosync
Corosync key
– Generate a key for secure communication between the nodes:
[root@node1~]# corosync-keygen
– Copy the authkey to Node2 (keep its permissions at 400):
[root@node1~]# scp /etc/corosync/authkey node2:/etc/corosync/
2. [root@node1~]# cp /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
Edit /etc/corosync/corosync.conf:

# Please read the corosync.conf.5 manual page
compatibility: whitetank
aisexec {
    user: root
    group: root
}
totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastaddr: 226.94.1.1
        mcastport: 4000
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}

– Create and edit /etc/corosync/service.d/pcmk to add the "pacemaker" service:
[root@node1~]# cat /etc/corosync/service.d/pcmk
service {
    # Load the Pacemaker Cluster Resource Manager
    name: pacemaker
    ver: 1
}
Copy the two configuration files above to the other node:
[root@node1]# scp /etc/corosync/corosync.conf node2:/etc/corosync/corosync.conf
[root@node1]# scp /etc/corosync/service.d/pcmk node2:/etc/corosync/service.d/pcmk
3. Start Corosync and Pacemaker
Start Corosync on each of the two nodes and check it:
[root@node1]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [OK]
[root@node1~]# corosync-cfgtool -s
Printing ring status.
Local node ID -1123964736
RING ID 0
id = 192.168.1.189
status = ring 0 active with no faults
[root@node2]# /etc/init.d/corosync start
Starting Corosync Cluster Engine (corosync): [OK]
– Start Pacemaker on both nodes:
[root@node1~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager: [OK]
[root@node2~]# /etc/init.d/pacemaker start
Starting Pacemaker Cluster Manager:

Resource configuration

Configure resources and constraints
Configure the default properties
View the existing configuration:
[root@node1 ~]# crm configure show
node node1.hulala.com
node node2.hulala.com
property $id="cib-bootstrap-options" dc-version="1.1.8-7.el6-394e906" cluster-infrastructure="classic openais (with plugin)" expected-quorum-votes="2"
Disable STONITH to avoid errors (there is no fencing device in this setup):
[root@node1 ~]# crm configure property stonith-enabled=false
[root@node1 ~]# crm_verify -L
Have the cluster ignore the quorum (this is a two-node cluster):
[root@node1~]# crm configure property no-quorum-policy=ignore
Prevent resources from moving back after recovery:
[root@node1~]# crm configure rsc_defaults resource-stickiness=100
Set the default operation timeout:
[root@node1~]# crm configure property default-action-timeout="180s"
Set whether a start failure is fatal:
[root@node1~]# crm configure property start-failure-is-fatal="false"
Configuring the DRBD resource
– Stop DRBD before configuring it:
[root@node1~]# /etc/init.d/drbd stop
[root@node2~]# /etc/init.d/drbd stop
– Configure the DRBD resource:
[root@node1~]# crm configure
crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd params drbd_resource="dbcluster" op monitor interval="15s" op start timeout="240s" op stop timeout="100s"
– Configure the DRBD master/slave relationship (only one master node is allowed):
crm(live)configure# ms ms_drbd_mysql p_drbd_mysql meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
– Configure the file system resource and define the mount point:
crm(live)configure# primitive p_fs_mysql ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mysql/" fstype="ext4"
Configuring the VIP resource
crm(live)configure# primitive p_ip_mysql ocf:heartbeat:IPaddr2 params ip="192.168.1.39" cidr_netmask="24" op monitor interval="30s"
Configuring the MySQL resource
crm(live)configure# primitive p_mysql lsb:mysql op monitor interval="20s" timeout="30s" op start interval="0" timeout="180s" op stop interval="0" timeout="240s"
Group resources and constraints
Group the resources so that DRBD, MySQL and the VIP always run on the same node (the Master), and let the group determine the start/stop order of the resources.
Start: p_fs_mysql -> p_ip_mysql -> p_mysql
Stop: p_mysql -> p_ip_mysql -> p_fs_mysql
crm(live)configure# group g_mysql p_fs_mysql p_ip_mysql p_mysql
The group g_mysql must always run on the Master node only:
crm(live)configure# colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
MySQL may start only after the DRBD master has been promoted:
crm(live)configure# order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start

Check and commit the configuration

crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# quit

View cluster status and failover tests

Check the status:
[root@node1 mysql]# crm_mon -1r
Failover test:
Put Node1 into standby state:
[root@node1 ~]# crm node standby
Check the cluster status after a few moments (if the switchover succeeded, all resources should now be running on Node2):
[root@node1 ~]# crm status
Bring Node1 back online:
[root@node1 mysql]# crm node online
[root@node1 mysql]# crm status
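To verify that the MySQL service actually follows the cluster, it can also be reached through the VIP after each switchover (this assumes a MySQL account that is permitted to connect over the network; the default root account used earlier is local-only):
[root@node1 ~]# mysql -h 192.168.1.39 -u root -p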
