MySQL High-availability configuration

OpenStack high-availability configuration documentation

This document describes a highly available OpenStack cloud platform built on two physical hosts, focusing on the high-availability configuration of MySQL and RabbitMQ.

Installing the OpenStack cloud platform (the OpenStack I release deployed on two hosts)

    • Install the CentOS operating system. Because MySQL and similar services provide high availability in active/passive mode, a separate hard disk partition is needed to mount and synchronize their data; when installing the system, reserve two or three separate partitions of roughly 20 GB each in the system image configuration file for this purpose;
    • Run yum -y update && reboot after the installation is complete;
    • Install the OpenStack I release with Red Hat's RDO tool:

yum install -y http://rdo.fedorapeople.org/rdo-release.rpm

yum install -y openstack-packstack

packstack --gen-answer-file=ans-file.txt

Modify the answer file as needed, then run packstack --answer-file=ans-file.txt.

After the installation finishes, verify that OpenStack was installed successfully. Then use one host as the primary node and the other as the standby node, and configure high availability for each service;
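
As a rough sanity check (a sketch only; it assumes a Packstack install that generated /root/keystonerc_admin and that the openstack-utils package is available), something like the following confirms the services are up:

source /root/keystonerc_admin
openstack-status      # summarizes the state of the OpenStack services
nova service-list     # lists the compute services and their status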

To configure high availability for the OpenStack services, first build a highly available cluster from the two nodes:

• Configure the nodes to resolve each other's hostnames

Gb07

Gb06
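
The hosts-file entries are not shown here; a minimal sketch, assuming the management addresses 10.10.102.7 (gb07) and 10.10.102.6 (gb06) used elsewhere in this document, adds the following to /etc/hosts on both nodes:

10.10.102.7 gb07
10.10.102.6 gb06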

• Configure time synchronization for each node

Gb07

[root@gb07 ~]# ntpdate 10.10.102.7

Gb06

[root@gb06 ~]# ntpdate 10.10.102.7

• Disable SELinux on each node

Modify the /etc/selinux/config file to set SELINUX=disabled, and then restart the server.
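
For example, a minimal sketch (the sed pattern assumes the stock layout of /etc/selinux/config):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # persist the setting
setenforce 0                                                   # stop enforcing immediately; still reboot afterwards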

• Corosync installation and configuration (install and configure on both nodes)

    • Installing Corosync

Gb07

[root@gb07 ~]# yum install -y corosync

Gb06

[root@gb06 ~]# yum install -y corosync

    • Configuring Corosync

[root@gb07 ~]# cd /etc/corosync/

[root@gb07 corosync]# mv corosync.conf.example corosync.conf

[root@gb07 corosync]# vim corosync.conf

compatibility: whitetank

totem {                           # heartbeat/messaging layer
    version: 2                    # protocol version
    secauth: on                   # authentication, normally on
    threads: 0                    # worker threads
    interface {                   # interface used for heartbeat traffic
        ringnumber: 0
        bindnetaddr: 10.10.0.0    # bound network address (use the network address, not a host address)
        mcastaddr: 226.94.1.1     # multicast address
        mcastport: 5405           # multicast port
        ttl: 1                    # time to live
    }
}

logging {                         # logging
    fileline: off
    to_stderr: no                 # whether to also print to the screen
    to_logfile: yes               # write a dedicated log file
    to_syslog: no                 # whether syslog also records the log
    logfile: /var/log/cluster/corosync.log    # path of the log file
    debug: off
    timestamp: on                 # whether to timestamp log entries
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

service {
    ver: 0
    name: pacemaker               # pacemaker runs as a corosync plugin
}

aisexec {
    user: root
    group: root
}

[root@gb07 corosync]# scp corosync.conf gb06:/etc/corosync/

    • Generating the authentication key

[root@gb07 corosync]# corosync-keygen

Corosync Cluster Engine Authentication key generator.

Gathering 1024 bits for key from /dev/random.

Press keys on your keyboard to generate entropy (bits = 152).

# If this message appears, the machine does not have enough entropy; keep pressing keys to generate random data.

[root@gb07 corosync]# scp authkey gb06:/etc/corosync/

# Copy the authentication key file to the gb06 host as well.

• Pacemaker installation and configuration (install and configure on both nodes)

    • Installing pacemaker

Gb07

[root@gb07 ~]# yum install -y pacemaker

Gb06

[root@gb06 ~]# yum install -y pacemaker

    • Installing CRMSH

Gb07

[root@gb07 ~]# yum -y install crmsh

Gb06

[root@gb06 ~]# yum -y install crmsh

After the installation is complete, start Corosync with service corosync start, then start Pacemaker:

service pacemaker start
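
As a quick sanity check (a sketch; corosync-cfgtool and crm_mon are installed with these packages), confirm that the ring is up and that both nodes have joined the cluster:

corosync-cfgtool -s    # status of the corosync ring on this node
crm_mon -1             # one-shot view of cluster membership and resources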

• DRBD installation and configuration (install and configure on both nodes)

    • Installing DRBD

Gb07

[root@gb07 ~]# yum -y install drbd84 kmod-drbd84

Gb06

[root@gb06 ~]# yum -y install drbd84 kmod-drbd84

If the yum repositories do not provide these packages, search the web for DRBD installation instructions, download the appropriate yum repository files, and then install.

    • Configuring DRBD

[root@gb07 ~]# cat /etc/drbd.d/global_common.conf

global {
    usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}

common {
    protocol C;

    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }

    startup {
        # wfc-timeout 120;
        # degr-wfc-timeout 120;
    }

    disk {
        on-io-error detach;
        # fencing resource-only;
    }

    net {
        cram-hmac-alg "sha1";
        shared-secret "mydrbdlab";
    }

    syncer {
        rate 1000M;
    }
}

[root@gb07 drbd.d]# cat mysql.res    # resource configuration file

resource mysql {
    on gb07 {
        device    /dev/drbd0;
        disk      /dev/sda3;    # the reserved hard disk partition
        meta-disk internal;
        address   ipv4 10.10.102.7:7700;
    }
    on gb06 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        meta-disk internal;
        address   ipv4 10.10.102.6:7700;
    }
}

[root@gb07 drbd.d]# scp global_common.conf mysql.res gb06:/etc/drbd.d/

    • Initialize the DRBD resource and start it

Initialize the DRBD metadata for the mysql resource and write the initial metadata (with meta-disk internal it is stored on the backing device); this must be done on both nodes.

Create the /dev/drbd0 device node and attach the DRBD device to its local backing storage; this must also be done on both nodes.

Synchronize the devices for the first time so that one of them takes the primary role (readable and writable). See the DRBD User Guide for a more detailed description of the primary and secondary roles of a DRBD resource. This step is done on one node only, the same node on which you will go on to create the file system.
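
A minimal sketch of these three steps with the drbdadm tool (resource name mysql as defined above; the --force form assumes DRBD 8.4):

drbdadm create-md mysql          # on both nodes: write the initial metadata
drbdadm up mysql                 # on both nodes: create /dev/drbd0, attach the backing disk and connect
drbdadm primary --force mysql    # on the chosen primary node only: start the initial full sync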

    • Format the DRBD partition (completed on the master node)

mkfs -t xfs /dev/drbd0

mount /dev/drbd0 /mysql

• Add the MySQL resources to Pacemaker

    • Define the DRBD Resource

[root@gb07 ~]# crm

crm(live)# configure

crm(live)configure# property stonith-enabled=false

crm(live)configure# property no-quorum-policy=ignore

crm(live)configure# primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mysql op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100

crm(live)configure# verify    # check the syntax

    • Define the master and slave resources for DRBD

crm(live)configure# ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

crm(live)configure# verify

    • Define the file system resource and its constraint relationships

crm(live)configure# primitive mystore ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mysql" fstype="xfs" op monitor interval=40 timeout=40 op start timeout=60 op stop timeout=60

crm(live)configure# verify

crm(live)configure# colocation mystore_with_ms_mysqldrbd inf: mystore ms_mysqldrbd:Master

crm(live)configure# order ms_mysqldrbd_before_mystore mandatory: ms_mysqldrbd:promote mystore:start

crm(live)configure# verify

    • Define the VIP resource, the MySQL service resource, and their constraint relationships

crm(live)configure# primitive myvip ocf:heartbeat:IPaddr params ip="10.10.42.96" op monitor interval=20 timeout=20 on-fail=restart

crm(live)configure# primitive myserver lsb:mysqld op monitor interval=20 timeout=20 on-fail=restart

crm(live)configure# verify

crm(live)configure# colocation myserver_with_mystore inf: myserver mystore

crm(live)configure# order mystore_before_myserver mandatory: mystore:start myserver:start

crm(live)configure# verify

crm(live)configure# colocation myvip_with_myserver inf: myvip myserver

crm(live)configure# order myvip_before_myserver mandatory: myvip myserver

crm(live)configure# verify

crm(live)configure# commit

After the commit, you can view the running state of the nodes, switch nodes, and check whether the resources migrate.
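
For example (a sketch using standard crmsh commands), you can watch the cluster and force a failover like this:

crm status               # show the nodes and where each resource is running
crm node standby gb07    # put the current primary into standby; resources should move to gb06
crm node online gb07     # bring the node back online afterwards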

    • Disable the DRBD and MySQL services at boot

MySQL and DRBD are now resources managed by the cluster; resources under cluster management must not be allowed to start themselves at boot.

[root@gb07 ~]# chkconfig mysqld off

[root@gb07 ~]# chkconfig drbd off

[root@gb06 ~]# chkconfig mysqld off

[root@gb06 ~]# chkconfig drbd off

    • Configure the OpenStack services for the highly available MySQL

The OpenStack services must now point to the highly available MySQL: the virtual cluster IP address is used instead of the physical IP address of a MySQL server as usual.

For Glance, for example, where the physical address of the MySQL server (10.10.102.7) would normally appear, the image registry configuration file (glance-registry.conf) now uses the cluster VIP:

sql_connection = mysql://glancedbadmin:<password>@10.10.42.96/glance

• RabbitMQ high-availability configuration

RabbitMQ's high availability uses mirrored queues; it requires no additional packages and the configuration is relatively simple. It follows these two blog posts:

http://blog.sina.com.cn/s/blog_959491260101m6ql.html

http://www.cnblogs.com/flat_peach/archive/2013/04/07/3004008.html
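
As a rough sketch of what those posts describe (assuming RabbitMQ 3.x with rabbitmqctl, and that both nodes share the same /var/lib/rabbitmq/.erlang.cookie), clustering the two nodes and mirroring all queues looks roughly like this:

# on gb06: join the cluster whose first node is gb07
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@gb07
rabbitmqctl start_app
# on either node: mirror every queue across all nodes in the cluster
rabbitmqctl set_policy ha-all "" '{"ha-mode":"all"}'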

• Problems you may encounter

    • Split-brain failure

Under normal conditions, the DRBD resource in the cluster shows a connected state.

However, a network or machine failure can cause a DRBD split-brain, and the cluster's DRBD resource connection is interrupted:

0:mysql/0  StandAlone  Secondary/Unknown  UpToDate/--  C  r-----

For the workaround, see the official guide: http://www.drbd.org/users-guide/s-resolve-split-brain.html
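
A rough sketch of the manual recovery procedure described there (DRBD 8.4 syntax; run the first two commands on the node whose changes you are willing to discard, the split-brain "victim"):

drbdadm secondary mysql
drbdadm connect --discard-my-data mysql
# on the other node (the split-brain survivor), reconnect if it is also StandAlone
drbdadm connect mysql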

    • The cluster manager Pacemaker cannot start MySQL. Check that the node has enough memory: mysqld's InnoDB plugin initializes a buffer pool of roughly 2.3 GB at startup, and mysqld fails to start if there is not enough free memory;
    • Occasionally a node hangs; after it is repaired, the crm resource list may still show that node's resources as stopped and unable to start. Restarting DRBD and Pacemaker may be necessary;
