MySQL High-Availability Solution: MHA


One master, two slaves, and one manager: four machines in total.

MHA's job is to make the master highly available. When the master MySQL node fails, MHA promotes the slave whose data is closest to the master's; if any other slave holds more recent updates, those are applied to this "candidate master" as well. If the master had committed transactions that no slave had yet replicated, that data is lost after the failover; there is no way to recover it.
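The promotion rule above can be illustrated with a small sketch. This is not MHA's actual code, only an illustration of the idea: each slave reports how far it has read into the master's binary log as a (log file, position) pair, and the most advanced slave wins. The function and sample data are hypothetical.

```python
# Sketch of candidate-master selection (illustrative, not MHA internals).
# Binlog file names are numbered in order, so comparing
# (master_log_file, read_master_log_pos) tuples lexicographically
# orders slaves by how much of the master's binlog they have received.

def pick_candidate(slaves):
    """slaves: dict of node name -> (master_log_file, read_master_log_pos).
    Returns the name of the most up-to-date slave."""
    return max(slaves, key=lambda name: slaves[name])

slaves = {
    "node2": ("master-log.000003", 245),
    "node3": ("master-log.000003", 198),
}
# node2 has read further into the master's binlog, so it gets promoted.
```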


Preparation:

1. Synchronize the time on all nodes

ntpdate 172.18.0.1

2. Configure host names (run on the master node, node1)

vim /etc/hosts
192.168.1.101 node1    # master node
192.168.1.106 node2    # slave node
192.168.1.107 node3    # slave node
192.168.1.100 node4    # manager

scp /etc/hosts node2:/etc/hosts
scp /etc/hosts node3:/etc/hosts
scp /etc/hosts node4:/etc/hosts

3. MHA requires passwordless SSH between all nodes

ssh-keygen -t rsa -P ''
cd /root/.ssh/
ssh-copy-id -i ./id_rsa.pub [email protected]
scp id_rsa{,.pub} authorized_keys node2:/root/.ssh
scp id_rsa{,.pub} authorized_keys node3:/root/.ssh
scp id_rsa{,.pub} authorized_keys node4:/root/.ssh


Here's how to proceed:

1. Configure the master-slave replication cluster

node1: vim /etc/my.cnf.d/server.cnf

[server]
skip_name_resolve = ON
innodb_file_per_table = ON
server_id = 1
log_bin = master-log
relay_log = relay-log    # the master also configures a relay log, because after a failure it will rejoin as a slave

node2: vim /etc/my.cnf.d/server.cnf

[server]
skip_name_resolve = ON
innodb_file_per_table = ON
server_id = 2
relay_log = relay-log
log_bin = master-log
relay_log_purge = OFF
read_only = ON

node3: vim /etc/my.cnf.d/server.cnf

[server]
skip_name_resolve = ON
innodb_file_per_table = ON
server_id = 3
relay_log = relay-log
log_bin = master-log
relay_log_purge = OFF
read_only = ON

Start the service on all nodes

systemctl start mariadb.service


2. Create the replication account on the master node

GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'192.168.1.%' IDENTIFIED BY 'CentOS';

Also on the master node, create the management account used by the manager

GRANT ALL ON *.* TO 'mhaadmin'@'192.168.1.%' IDENTIFIED BY 'CentOS';

Reload the grant tables

FLUSH PRIVILEGES;


3. Configure the slave nodes

CHANGE MASTER TO MASTER_HOST='192.168.1.101', MASTER_USER='repluser', MASTER_PASSWORD='CentOS', MASTER_LOG_FILE='master-log.000003', MASTER_LOG_POS=245;
START SLAVE;
SHOW SLAVE STATUS\G
SELECT User FROM mysql.user;    -- confirms the replication and management accounts have been synchronized from the master
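After running START SLAVE, the key fields to look for in the SHOW SLAVE STATUS\G output are Slave_IO_Running and Slave_SQL_Running, which must both say "Yes". As a minimal sketch (the parser and sample text are assumptions for illustration, not an MHA tool), the "key: value" lines of that output can be checked programmatically:

```python
# Minimal sketch: parse "key: value" lines of SHOW SLAVE STATUS\G output
# and confirm both replication threads are running.

def parse_slave_status(text):
    """Turn the \\G-style output into a dict of field -> value."""
    status = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return status

def replication_healthy(status):
    # Both the I/O thread (reads the master's binlog) and the SQL thread
    # (applies relay-log events) must be running.
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

sample = """
         Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
          Master_Log_File: master-log.000003
      Read_Master_Log_Pos: 245
"""
```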


4. Install the MHA packages

Install both the manager and node packages on the manager node

yum -y install mha4mysql-manager-0.56-0.el6.noarch.rpm
yum -y install mha4mysql-node-0.56-0.el6.noarch.rpm


5. Install the node package on each MySQL node

yum -y install mha4mysql-node-0.56-0.el6.noarch.rpm


6. Configure the manager

mkdir /etc/masterha
vim /etc/masterha/app1.cnf

[server default]
user=mhaadmin
password=CentOS
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=repluser
repl_password=CentOS
ping_interval=1

[server1]
hostname=192.168.1.101
ssh_port=22
candidate_master=1

[server2]
hostname=192.168.1.106
ssh_port=22
candidate_master=1

[server3]
hostname=192.168.1.107
ssh_port=22
candidate_master=1
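Since app1.cnf is an INI-style file, it can be sanity-checked with an ordinary INI parser before running the MHA check tools. This is only an illustrative sketch, not anything MHA provides; the helper function and the embedded sample are assumptions:

```python
# Sketch: validate that every [serverN] section of an MHA-style INI
# config declares a hostname (illustration only, not an MHA tool).
import configparser

SAMPLE_CONF = """
[server default]
user=mhaadmin
ping_interval=1

[server1]
hostname=192.168.1.101
candidate_master=1
"""

def server_hosts(text):
    """Return {section: hostname} for each server section;
    raises KeyError if a server section is missing its hostname."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {s: cp[s]["hostname"]
            for s in cp.sections() if s != "server default"}
```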


7. Check the configuration and start the service

Check the SSH and replication setup:

masterha_check_ssh --conf=/etc/masterha/app1.cnf
masterha_check_repl --conf=/etc/masterha/app1.cnf

Start the manager:

masterha_manager --conf=/etc/masterha/app1.cnf


8. Testing

Now simulate a failure of the master node.

SHOW MASTER STATUS;
SHOW SLAVE STATUS\G    -- run on the slaves; one slave has now been promoted to master


9. Repair the original master node

vim /etc/my.cnf.d/server.cnf    # add these two lines
relay_log_purge = OFF
read_only = ON

Start the service again:

systemctl start mariadb.service

Point it at the new master:

CHANGE MASTER TO MASTER_HOST='192.168.1.106', MASTER_USER='repluser', MASTER_PASSWORD='CentOS', MASTER_LOG_FILE='master-log.000003', MASTER_LOG_POS=320;
START SLAVE;
SHOW SLAVE STATUS\G    -- the original master is now a slave of the new master


10. Verify replication from the manager

masterha_check_repl --conf=/etc/masterha/app1.cnf

Output like the following confirms that the roles have switched and the original master is now a slave:

192.168.1.106 (192.168.1.106:3306) (current master)
 +--192.168.1.101 (192.168.1.101:3306)
 +--192.168.1.107 (192.168.1.107:3306)


11. Start MHA again

nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /data/masterha/app1/manager.log &
# runs in the background, detached from the current terminal
# When the master fails again, masterha_manager performs the switchover automatically and then exits; it must be started again by hand afterward.
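Because masterha_manager exits after each failover, some operators wrap it in a restart loop. The sketch below illustrates that supervisor pattern in generic terms; the function, its parameters, and the restart limit are assumptions for illustration, and in production you would also want to respect MHA's own failover lock files rather than restarting blindly:

```python
# Sketch of a supervisor loop: restart a command each time it exits,
# up to a limit (illustrative pattern, not an MHA-provided tool).
import subprocess
import sys
import time

def supervise(cmd, max_restarts=3, delay=0.0):
    """Run cmd, restarting it each time it exits, at most max_restarts
    times. Returns the number of times the command was run."""
    runs = 0
    while runs < max_restarts:
        subprocess.run(cmd)   # blocks until the process exits
        runs += 1
        time.sleep(delay)     # back off so a crash loop does not spin
    return runs
```

In real use `cmd` would be something like `["masterha_manager", "--conf=/etc/masterha/app1.cnf"]`, and the delay would be minutes rather than seconds.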


This concludes the experiment.

This article is from the "A_pan" blog; reprinting is declined.

