High availability of master nodes in MySQL master-slave architecture using MHA


MHA (Master High Availability) is a high-availability solution for MySQL master-slave architectures. It provides high availability only for the master node and is built on top of MySQL master-slave replication, which means MySQL must be configured in advance as a traditional replication cluster.

When MHA detects a failure of the master node, it promotes the slave with the most recent data to be the new master; during this process MHA collects additional information from the other nodes to avoid consistency problems. MHA also provides online master switching, so master/slave roles can be swapped on demand.


The MHA service has two roles: the MHA Manager (management node) and the MHA Node (data node):

MHA Manager: typically deployed on a separate machine to manage multiple master/slave clusters; each master/slave cluster is called an application;

MHA Node: runs on each MySQL server (and on the manager host); it speeds up failover through scripts that can parse and purge logs.


When the master in a MySQL replication cluster fails, MHA performs failover as follows:

Once the master goes down, any event on the master that has not yet been replicated to the slaves would be lost. To avoid the data inconsistency such lost events would cause, MHA keeps a copy of the master's binary log events on the management node. When the master fails, MHA reads the saved events, inspects the relay log position on each slave to determine which slave is closest to the master's last event, and applies the missing binary log events from the management node's local copy to that closest slave, bringing it fully in sync with the failed master. The other slaves are then pointed at this caught-up node, which is thereby promoted from slave to new master;

To do this, MHA relies heavily on the SSH service: it continuously reads data from the master over SSH and synchronizes the binary log events to the management node's local disk, ensuring the binary log events remain available even after the master goes offline;

The saved events on the management node are then replayed on the closest slave, giving it exactly the same data as the failed master;


Experimental environment: a Win7 physical host running four CentOS 7 virtual machines:

node1: MHA manager

node2: MariaDB master

node3: MariaDB slave

node4: MariaDB slave


Add the following entries to the /etc/hosts file on each node:

192.168.255.2 node1.stu11.com node1

192.168.255.3 node2.stu11.com node2

192.168.255.4 node3.stu11.com node3

192.168.255.5 node4.stu11.com node4


First, configure a MySQL master-slave replication structure:

Install MariaDB 5.5 (installation details omitted).


On node2:

]# vim /etc/my.cnf

[Screenshot: /etc/my.cnf on node2]
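The screenshot's exact contents are not recoverable; a minimal sketch of what the master's /etc/my.cnf likely contains (the server-id value and extra options are assumptions; log-bin = master-bin matches the binary log name used later):

[mysqld]
server-id = 2            # must be unique per node; value assumed
log-bin = master-bin     # binary log name, matches master-bin.000005 below
relay-log = relay-log    # relay log on, so this node can also act as a slave
skip_name_resolve = 1    # resolve clients by IP only; optional, assumed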

]# systemctl start mariadb.service

View the current binary log file and position:

[Screenshot: binary log file and position on node2]
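The standard statement for viewing this position (the screenshot itself is not recoverable) is:

> SHOW MASTER STATUS;

The file name and position shown here (master-bin.000005, position 245) are the values used in the CHANGE MASTER TO statements below.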

Note: be sure to record the binary log position before creating the account with replication privileges. Both slave nodes will also use this replication account, because either of them may become the new master; every node therefore needs an account with replication privileges that the other nodes can use to copy its binary log;


> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'192.168.255.%' IDENTIFIED BY 'replpass';


> FLUSH PRIVILEGES;


On node3:

]# vim /etc/my.cnf

[Screenshot: /etc/my.cnf on node3]
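Again the screenshot is not recoverable; a minimal sketch of a slave's /etc/my.cnf under the same assumptions (binary logging stays enabled because this node may be promoted to master; relay_log_purge = 0 is a common recommendation for MHA slaves):

[mysqld]
server-id = 3            # must differ from the other nodes; value assumed
log-bin = master-bin     # kept on so this slave can become the new master
relay-log = relay-log
relay_log_purge = 0      # keep relay logs so MHA can compare positions
read_only = 1            # slaves should not accept writes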

]# systemctl start mariadb.service


> CHANGE MASTER TO MASTER_HOST='192.168.255.3', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='master-bin.000005', MASTER_LOG_POS=245;


> START SLAVE;

[Screenshot: slave status on node3]
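A typical way to verify what the screenshot shows:

> SHOW SLAVE STATUS\G

Both Slave_IO_Running and Slave_SQL_Running should report Yes.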


On node4:

]# vim /etc/my.cnf

[Screenshot: /etc/my.cnf on node4]
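node4's configuration is presumably the same sketch as node3's, with a unique server ID (value assumed):

[mysqld]
server-id = 4
log-bin = master-bin
relay-log = relay-log
relay_log_purge = 0
read_only = 1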

]# systemctl start mariadb.service


> CHANGE MASTER TO MASTER_HOST='192.168.255.3', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='master-bin.000005', MASTER_LOG_POS=245;


> START SLAVE;

[Screenshot: slave status on node4]


The above configuration completes the master-slave replication structure of MySQL.


On node2, create a user account that has administrative privileges and can connect remotely; MHA uses this account to operate on the database nodes, and since node2 is the master, the account is replicated to the slaves:

> GRANT ALL ON *.* TO 'mhauser'@'192.168.255.%' IDENTIFIED BY 'mhapass';

> FLUSH PRIVILEGES;



On node1:

]# ssh-keygen -t rsa -P ''

Generate the key pair on one host and distribute the same private key to all of the other nodes, so that every node can SSH to every other node without a password;
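The public key also has to be appended to the local authorized_keys file before it is distributed; this step is implied by the commands below but not shown:

]# cat .ssh/id_rsa.pub >> .ssh/authorized_keys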


Set the key permission to 600:

]# chmod go= .ssh/authorized_keys


Copy the private key and authentication file to each node:

]# scp -p .ssh/id_rsa .ssh/authorized_keys node2:/root/.ssh/

]# scp -p .ssh/id_rsa .ssh/authorized_keys node3:/root/.ssh/

]# scp -p .ssh/id_rsa .ssh/authorized_keys node4:/root/.ssh/


On node1, install both the manager and node packages; on node2, node3, and node4, install only the node package:

mha4mysql-manager-0.56-0.el6.noarch.rpm

mha4mysql-node-0.56-0.el6.noarch.rpm


]# yum install mha4mysql-*


]# mkdir /etc/masterha

]# vim /etc/masterha/app1.cnf

[Screenshot: /etc/masterha/app1.cnf]
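The screenshot's contents are not recoverable; a plausible sketch of app1.cnf for this topology (the workdir paths and ping_interval are assumptions; the accounts and addresses come from the steps above):

[server default]
user=mhauser
password=mhapass
ssh_user=root
repl_user=repluser
repl_password=replpass
ping_interval=1                       # seconds between health checks; value assumed
manager_workdir=/data/masterha/app1   # path assumed
remote_workdir=/tmp                   # path assumed

[server1]
hostname=192.168.255.3
candidate_master=1

[server2]
hostname=192.168.255.4
candidate_master=1

[server3]
hostname=192.168.255.5
candidate_master=1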


]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

If the output shows: [INFO] All SSH connection tests passed successfully

then SSH mutual-trust communication between the nodes is working;


]# masterha_check_repl --conf=/etc/masterha/app1.cnf

If the last line of output is similar to the following, the replication check passed:

MySQL Replication Health is OK


At this point, the MHA high-availability configuration for the MySQL master-slave replication structure is complete. We can now demonstrate that when master node2 goes down, one of node3 and node4 is promoted to be the new master;
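Automatic failover requires the manager process to be running on node1; a common way to start it in the background (the log path is an assumption):

]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &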


On node2, simulate a master failure:

]# killall mysqld mysqld_safe


On node4:

Check the slave status; the output shows that a new master has taken over;

[Screenshot: slave status on node4 after failover]
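The usual check behind this screenshot (which of node3/node4 was promoted is not recoverable; 192.168.255.4, i.e. node3, is an assumption):

> SHOW SLAVE STATUS\G

Master_Host should now show the promoted node's address (e.g. 192.168.255.4) instead of the failed 192.168.255.3.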

When the master goes down, a slave is automatically promoted to be the new master; with this, MHA has completed the automatic master-slave switchover for MySQL;


To bring the failed master back into the master-slave structure, take a backup on the new master and record the binary log file name and position at the time of the backup; import the backup into the returning node so that it can begin replicating from the recorded position.
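A sketch of that reintegration, assuming node3 (192.168.255.4) became the new master; the backup file name and the recorded log coordinates are hypothetical:

On the new master:

]# mysqldump --all-databases --single-transaction --master-data=2 > /tmp/full.sql

Restore /tmp/full.sql on the returning node2, read the binary log file and position recorded in the dump's CHANGE MASTER comment, then on node2:

> CHANGE MASTER TO MASTER_HOST='192.168.255.4', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='master-bin.000006', MASTER_LOG_POS=245;

> START SLAVE;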




