Corosync and Pacemaker: High-Availability MariaDB and HAProxy


Keepalived provides only the most basic high-availability features; truly advanced capabilities are difficult to achieve with keepalived alone. The OpenAIS specification defines a complete solution: it is heavyweight, but its features are comprehensive and detailed. Studying it gives us a deeper understanding of a complete high-availability system, and when we run into special high-availability scenarios, these solutions can handle them.

OpenAIS-Compliant Solutions

The specification has been iterated up to the present day, forming the complete stack shown in Figure 1.1.


Figure 1.1

Since multiple hosts are going to form a cluster, some software must pass heartbeat information between them; the OpenAIS specification defines this as the messaging layer.

Managing services for the whole cluster requires software defined as the Cluster Resource Manager (CRM). The CRM only manages cluster resources; to actually carry out operations it needs an interface that can manage the services on each host, defined as the Local Resource Manager (LRM). The LRM is the layer that really executes operations. This layer exists because essentially no service is designed with high availability in mind, yet the management layer still has to manipulate these services on every server.

Common messaging-layer components: Heartbeat v1/v2/v3, Corosync (developed by SUSE), CMAN (Red Hat)

Common cluster resource manager components: Pacemaker (Red Hat), Heartbeat v1/v2

Common management interfaces: crmsh (SUSE), pcs (Red Hat)

Web management interfaces: Hawk, LCMC, pacemaker-mgmt

On CentOS 6 the comparable stack is RHCS (Red Hat Cluster Suite).

Advanced Features

Keepalived is called simple because it lacks the following advanced features.

Quorum voting function

Quorum is the number of votes a partition needs to be legitimate (normally more than half of the total, though it can be configured to require no more than half). When the cluster partitions, quorum decides which nodes may continue to run in cluster mode;

When a cluster with an even number of nodes splits, arbitration devices can cast the deciding vote:

Ping node

Ping node group

Quorum disk (qdisk)

When a partition has lost quorum, the cluster can adopt one of the following resource-control strategies (a crmsh sketch follows this list):

Stopped

Ignore

Freeze

Suicide
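In Pacemaker this choice is exposed as the no-quorum-policy cluster property; the practice section below sets it to ignore for the two-node cluster. A minimal crmsh sketch:

crm configure property no-quorum-policy=stop   # stop: halt all resources in the quorum-less partition (the default)
# other accepted values: ignore, freeze, suicide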

Resource isolation mechanism

When a machine fails, fencing prevents service flapping and keeps two nodes from writing to shared storage at the same time, which would corrupt it.

Node Level: STONITH

Power switch

Server hardware management module

Virtualization software shuts down virtual machines

Resource level:

Shared storage denies the failed node write access
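For illustration only (the practice section below disables stonith because no fencing device is available), a node-level IPMI fence could be declared with crmsh roughly like this; the agent name and all parameter values are assumptions and vary with the fencing plugins installed:

# hypothetical IPMI fencing resource; address and credentials are placeholders
crm configure primitive fence-node1 stonith:external/ipmi \
    params hostname=node1 ipaddr=172.16.29.100 userid=admin passwd=secret interface=lanplus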

Resource stickiness

How strongly a resource prefers to stay on its current node, expressed as a score in (-oo, +oo)
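Stickiness is usually set as a resource default; a one-line crmsh sketch (the score 100 is an arbitrary example):

crm configure rsc_defaults resource-stickiness=100   # resources prefer to stay where they currently run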

Resource constraints

Location constraint: how strongly a resource prefers to run on a particular node, a score in (-oo, +oo)

inf: positive infinity (the resource must run there)

-inf: negative infinity (the resource must never run there)

n: a finite positive preference

-n: a finite negative preference

Colocation constraint: how resources are biased toward one another (whether they run together)

inf: the resources must run together

-inf: the resources must never run together

n, -n: finite preferences for or against running together

Order constraint: the order in which the resources of one service start and stop when running on the same node;

Start order: A → B → C

Stop order: C → B → A

Resource group: resources within a group start in group order and stop in the reverse order (a crmsh sketch of these constraint types follows)
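A minimal crmsh sketch of the three constraint types, assuming two resources named vip and mariadb and a node named node1.org (names borrowed from the practice section below):

# location: vip prefers node1.org with a score of 100
crm configure location vip_prefers_node1 vip 100: node1.org
# colocation: mariadb must run on the same node as vip
crm configure colocation mariadb_with_vip inf: mariadb vip
# order: start vip first, then mariadb (they stop in the reverse order)
crm configure order vip_before_mariadb inf: vip mariadb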

RA

Resource Agent: the component that actually manages a service

Roughly these classes exist: LSB, OCF, service, stonith, systemd
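crmsh can enumerate the available agents, which is handy when picking a class and provider:

crm ra classes                      # list resource agent classes (lsb, ocf, service, stonith, systemd)
crm ra list ocf heartbeat           # list OCF agents from the heartbeat provider
crm ra info ocf:heartbeat:IPaddr    # show the parameters an agent accepts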

Because resources can be combined so flexibly, we can build an extreme scenario: five computers providing four different services. We put all five into one cluster and use resource stickiness to distribute the four services across four of the computers, leaving the fifth as a backup for the other four. Better still, if two of the five computers go down, the four services can keep running on the remaining three.

Corosync and Pacemaker

Practice

There are currently two commonly used configuration interfaces: crmsh and pcs. Here I first use crmsh to build a MariaDB high-availability cluster, then use pcs to build an HAProxy cluster; both clusters use the same two nodes, as shown in Figure 1.2.


Figure 1.2

The VIP 172.16.29.11 floats between the two nodes.

Prepare the configuration

Configuration of Node1

# Download the yum repo file for crmsh (crmsh is not included in the base repos)
wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/network:ha-clustering:Stable.repo -P /etc/yum.repos.d/
yum install pacemaker crmsh mariadb-server haproxy pcs -y
vim /etc/corosync/corosync.conf    # example configuration
totem {
    version: 2
    crypto_cipher: aes256
    crypto_hash: sha1
    interface {
        ringnumber: 0
        # network segment of the cluster hosts
        bindnetaddr: 172.16.0.0
        # multicast address
        mcastaddr: 239.255.101.99
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    debug: off
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}
quorum {
    provider: corosync_votequorum
}
# the two hosts that form the cluster
nodelist {
    node {
        ring0_addr: 172.16.29.10
        nodeid: 1
    }
    node {
        ring0_addr: 172.16.29.20
        nodeid: 2
    }
}
# Generate the key used to authenticate corosync communication
corosync-keygen

vim /etc/hosts    # add name resolution
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.29.10 node1.org node1
172.16.29.20 node2.org node2

# Set up key-based SSH between the hosts
ssh-keygen -t rsa                    # just press Enter through all the prompts
ssh-copy-id -i .ssh/id_rsa.pub node2

# Copy the configuration to node2; run this after pacemaker is installed on node2
scp -p authkey corosync.conf node2:/etc/corosync/
scp /etc/hosts node2:/etc/
# Start the service, authorize a user to log in, and create database jim
systemctl start mariadb.service
mysql <<EOF
grant all on *.* to 'tom'@'172.16.%.%' identified by '12345';
create database jim;
EOF
# Enable mariadb at boot; this is the basis for pacemaker controlling the systemd-managed service
systemctl enable mariadb.service
# Start the cluster services
systemctl start pacemaker.service corosync.service

Configuration of Node2

# Download the yum repo file for crmsh (crmsh is not included in the base repos)
wget http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/network:ha-clustering:Stable.repo -P /etc/yum.repos.d/
yum install pacemaker crmsh mariadb-server haproxy pcs -y
# Start the service, authorize a user to log in, and create database tom
# (the two hosts deliberately create different databases so server migration is easy to observe)
systemctl start mariadb.service
mysql <<EOF
grant all on *.* to 'tom'@'172.16.%.%' identified by '12345';
create database tom;
EOF
# Enable mariadb at boot; this is the basis for pacemaker controlling the systemd-managed service
systemctl enable mariadb.service
# Start the cluster services
systemctl start pacemaker.service corosync.service
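At this point the two nodes should see each other. A quick sanity check (assuming pacemaker and corosync started cleanly on both):

crm status            # both nodes should be listed as Online
corosync-cfgtool -s   # ring status of the local corosync instance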

MariaDB Cluster Service Configuration

The MariaDB cluster service is configured with crmsh. A cluster service configured through crmsh is automatically synchronized to every node of the cluster; the synchronization mechanism routes the configuration to the DC node first, which then pushes it to the other nodes.

To use crmsh, enter crm to open the interactive interface and then type the configuration. The interface works much like a switch's configuration interface and has powerful tab completion. I will not cover every detail here; use help [command] to view the assistance information, which includes examples at the bottom.

crm configure                       # enter the crm configuration interface
property stonith-enabled=false      # disable stonith, since we have no fencing device
property no-quorum-policy=ignore    # ignore quorum: in a two-node cluster, one node's failure leaves the survivor with no majority
primitive vip ocf:heartbeat:IPaddr params ip="172.16.29.11"   # define the VIP cluster resource
primitive mariadb systemd:mariadb   # define the mariadb cluster resource
group dbservice vip mariadb         # group vip and mariadb so both run on the same host
monitor vip 20s:20s                 # monitor vip: 20s interval, 20s timeout
monitor mariadb 20s:20s
commit                              # save the configuration and make it take effect
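Once committed, the dbservice group runs on one of the nodes. A quick failover check, assuming the node names match the /etc/hosts entries above:

crm status                                  # see which node currently runs dbservice
crm resource migrate dbservice node2.org    # force the group onto node2
crm resource unmigrate dbservice            # drop the location constraint the migration created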

HAProxy Cluster Service Configuration

The HAProxy cluster is configured with pcs. Unlike crmsh, pcs is not an interactive management interface; we enter the management commands directly.

Configuration of Node1

systemctl start pcsd.service       # start the pcsd daemon
systemctl enable haproxy.service   # enable haproxy at boot; the basis for pacemaker controlling the systemd-managed service
vim /etc/haproxy/haproxy.cfg       # add the following to the defaults section
    stats enable
    stats hide-version
    stats uri /ha10

Configuration of Node2

systemctl start pcsd.service       # start the pcsd daemon
systemctl enable haproxy.service   # enable haproxy at boot; the basis for pacemaker controlling the systemd-managed service
vim /etc/haproxy/haproxy.cfg       # add the following to the defaults section
    stats enable
    stats hide-version
    stats uri /ha20

Cluster configuration

# Define the vip2 resource and its monitor
pcs resource create vip2 ocf:heartbeat:IPaddr ip="172.16.29.12" op monitor interval=20s timeout=20
# Define the haproxy resource and its monitor
pcs resource create haproxy systemd:haproxy op monitor interval=20s timeout=20
# Use an ordered resource set to bind vip2 and haproxy together
pcs constraint order set vip2 haproxy
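A quick way to confirm the resources are up and reachable through the VIP (which stats URI answers depends on the node currently holding the service):

pcs status                      # vip2 and haproxy should both be Started
curl http://172.16.29.12/ha10   # stats page when node1 serves; try /ha20 for node2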

Test

We can then test high availability using the two VIP addresses configured above.
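For example, a manual failover test, assuming the VIPs, credentials, and node names from the configuration above:

# Connect to MariaDB through its VIP; the database shown (jim or tom) reveals which node is serving
mysql -u tom -p12345 -h 172.16.29.11 -e 'show databases;'
# Put node1 in standby and watch the resources move to node2
crm node standby node1.org
crm status
# Bring node1 back online
crm node online node1.org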

This article is from the "Lao Wang Linux Journey" blog; please be sure to keep this source: http://oldking.blog.51cto.com/10402759/1898250

