MySQL High Availability with MHA


First, prepare the experimental MySQL replication environment:

MHA places special requirements on the MySQL replication environment: every node must enable the binary log and the relay log, every slave node must explicitly enable its read-only property, relay_log_purge must be disabled on the slaves, and so on. These points are explained here, up front, before the configuration below.
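As a quick sanity check, the following queries (a minimal sketch; run them on a slave node once the servers are up) show whether those properties are in effect. On a slave, log_bin and read_only should report ON and relay_log_purge should report OFF, while relay_log shows the relay log basename:

MariaDB [(none)]> SHOW VARIABLES LIKE 'log_bin';
MariaDB [(none)]> SHOW VARIABLES LIKE 'relay_log';
MariaDB [(none)]> SHOW VARIABLES LIKE 'read_only';
MariaDB [(none)]> SHOW VARIABLES LIKE 'relay_log_purge';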

The experimental environment has four nodes, with roles assigned as follows:

CentOS 7.3: MariaDB master

CentOS 7.4: MariaDB slave

CentOS 7.5: MariaDB slave

CentOS 7.6: MHA Manager


Second, the experimental configuration

1. Configuration of the initial master node (CentOS 7.3):

[mysqld]
server-id = 1
log-bin = master-log
relay-log = relay-log
skip_name_resolve = ON

2. Configuration of the slave nodes (CentOS 7.4 and 7.5):

[mysqld]
server-id = 2          # the server ID of every node in the replication cluster must be unique
relay-log = relay-log
log-bin = master-log
read_only = ON
relay_log_purge = 0    # do not automatically purge relay logs that are no longer needed
skip_name_resolve = ON

3. Configure the master and slave nodes as described above, complete the remaining MySQL replication configuration, and start the master and slave nodes. Then start the IO and SQL threads on each slave node to make sure master-slave replication is running correctly. The steps are as follows:

On the master node:

MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'172.16.%.%' IDENTIFIED BY 'replpass';

MariaDB [(none)]> FLUSH PRIVILEGES;

MariaDB [(none)]> SHOW MASTER STATUS;
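The binary log file and position reported by SHOW MASTER STATUS are what the CHANGE MASTER TO statement on the slaves below refers to. The output looks roughly like this (the values shown are illustrative and will differ in your environment):

+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-log.000003 |      498 |              |                  |
+-------------------+----------+--------------+------------------+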

On each slave node:

# mysql

MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='172.16.252.18', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='master-log.000003', MASTER_LOG_POS=498;

MariaDB [(none)]> START SLAVE;

MariaDB [(none)]> SHOW SLAVE STATUS\G
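In the output, verify that both replication threads are running before continuing; these two fields must read Yes:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes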

Third, install and configure MHA

1. A user with administrative privileges is needed on all MySQL nodes, reachable remotely from the other nodes on the local network. Because the slaves replicate everything from the master, it is necessary, and sufficient, to run an SQL statement like the following once, on the master node only.

MariaDB [(none)]> GRANT ALL ON *.* TO 'mhaadmin'@'172.16.%.%' IDENTIFIED BY 'mhapass';

2. Prepare the SSH-based mutual-trust communication environment:

The nodes of an MHA cluster need mutual SSH trust to communicate, which is what enables remote control and data management. For simplicity, you can generate a key pair on the Manager node, authorize it for login to the local host, and then copy the private key file and the authorized_keys file to all of the remaining nodes.

The following actions are performed on node4, the Manager node:

[root@node4 ~]# ssh-keygen -t rsa

[root@node4 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@localhost

Note: passwordless login must work between the master and slave databases, and the MHA Manager must be able to log in to the other three nodes.
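A minimal sketch of distributing the key material to the other nodes, assuming the node addresses used in app1.cnf below (adjust the list to your own hosts, and note that /root/.ssh must already exist on each of them):

[root@node4 ~]# for ip in 172.16.5.102 172.16.5.173 172.16.5.174; do scp -p ~/.ssh/id_rsa ~/.ssh/authorized_keys root@${ip}:/root/.ssh/; done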

3. Install the MHA packages

On the Manager node: yum install mha4mysql-manager-0.56-0.el6.noarch.rpm

On all nodes, including the Manager: yum install mha4mysql-node-0.56-0.el6.noarch.rpm

4. Initialize and configure MHA

The Manager node needs a dedicated configuration file for each monitored master/slave cluster, while all master/slave clusters can share one global configuration. The global configuration file defaults to /etc/masterha_default.cnf and is optional. If you monitor only a single master/slave cluster, you can instead put the per-server default information directly in the application configuration file, whose path is up to you.
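For illustration, such a global file might contain nothing more than the defaults shared by every application; a minimal sketch, mirroring the values used in app1.cnf below:

# /etc/masterha_default.cnf (optional global configuration)
[server default]
user=mhaadmin
password=mhapass
ssh_user=root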

5. Define the MHA management configuration file

Create a dedicated working directory and configuration file for MHA on the Manager node, for convenient later use (the management user created earlier on the MySQL master node has already been synchronized to the other nodes automatically through replication):

mkdir /etc/mha_master

vim /etc/mha_master/app1.cnf

The contents of the configuration file are as follows:

[server default]                          # configuration that applies to server1, 2 and 3
user=mhaadmin                             # MHA management user
password=mhapass                          # MHA management password
manager_workdir=/etc/mha_master/app1      # working directory of the Manager itself
manager_log=/etc/mha_master/manager.log   # log file of the Manager itself
remote_workdir=/mydata/mha_master/app1    # working directory on each remote host
ssh_user=root                             # user for SSH key-based authentication
repl_user=repluser                        # replication user name
repl_password=replpass                    # replication password
ping_interval=1                           # ping interval in seconds

[server1]                                 # node 1
hostname=172.16.5.102                     # node 1 host address
ssh_port=22                               # node 1 SSH port
candidate_master=1                        # may be promoted to master in a future failover

[server2]
hostname=172.16.5.173
ssh_port=22
candidate_master=1

[server3]
hostname=172.16.5.174
ssh_port=22
candidate_master=1

6. Check that the SSH mutual-trust communication between the nodes is configured correctly:

[root@node4 ~]# masterha_check_ssh --conf=/etc/mha_master/app1.cnf

If the last line of output is similar to the following, the check passed:

[INFO] All SSH connection tests passed successfully.

Then check that the connection and replication parameters of the managed MySQL replication cluster are OK:

[root@node4 ~]# masterha_check_repl --conf=/etc/mha_master/app1.cnf

If this test reports an error, the replication account may be missing on the slave nodes: in this architecture any slave node can become the master node, so the replication account must exist on every node.

So, simply run the following on the master node again:

MariaDB [(none)]> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'172.16.%.%' IDENTIFIED BY 'replpass';

MariaDB [(none)]> FLUSH PRIVILEGES;

Run the check on the Manager node again; this time OK is displayed.

Fourth, start MHA

[root@node4 ~]# nohup masterha_manager --conf=/etc/mha_master/app1.cnf &> /etc/mha_master/manager.log &

After a successful startup, use the following command to view the status of the master node:

masterha_check_status --conf=/etc/mha_master/app1.cnf

app1 (pid:4978) is running(0:PING_OK), master:172.16.252.18

In the above output, "app1 (pid:4978) is running(0:PING_OK)" indicates that the MHA service is running OK; otherwise something like "app1 is stopped(1:NOT_RUNNING)" is displayed.

If you want to stop MHA, use the masterha_stop command:

masterha_stop --conf=/etc/mha_master/app1.cnf
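The start, status and stop commands above are often wrapped in a small helper script for convenience; a minimal sketch, using only the commands and paths shown above:

#!/bin/bash
# mha-ctl.sh: start, stop or query the MHA Manager for app1 (illustrative helper)
CONF=/etc/mha_master/app1.cnf
case "$1" in
  start)  nohup masterha_manager --conf=$CONF &> /etc/mha_master/manager.log & ;;
  stop)   masterha_stop --conf=$CONF ;;
  status) masterha_check_status --conf=$CONF ;;
  *)      echo "usage: $0 {start|stop|status}" ;;
esac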

Fifth, testing MHA failover

(1) Stop the mariadb service on the master node to simulate a crash of the master node:

killall -9 mysqld mysqld_safe

rm -rf /var/lib/mysql/*

(2) View the logs on the Manager node:

Information like the following appears in the /etc/mha_master/manager.log log file, indicating that the Manager detected the failure of the 172.16.252.18 node and then automatically failed over, promoting 172.16.252.17 to be the new master node. Note that when the failover is complete, the Manager stops automatically, and the masterha_check_status command then reports an error, as follows:

masterha_check_status --conf=/etc/mha_master/app1.cnf

app1 is stopped(2:NOT_RUNNING).

Sixth, repairing the replication cluster

(3) Provide a new slave node to repair the replication cluster:

After the original master node fails, a new MySQL node needs to be prepared. Restore the data from a backup taken on the master, then configure the node as a slave of the new master. Note that if the newly added node reuses the IP address of the original master node, nothing else changes; otherwise the corresponding IP address in app1.cnf must be updated. Then start the Manager again and check its status again, as sketched below.
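A minimal sketch of rejoining the rebuilt node, assuming a logical backup file /backup/all.sql is restored onto it first; the log file and position shown are placeholders to be taken from the backup, and 172.16.252.17 is the newly promoted master from the log above:

# mysql < /backup/all.sql

MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='172.16.252.17', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='master-log.000001', MASTER_LOG_POS=245;

MariaDB [(none)]> START SLAVE;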

(4) Perform the check operations again after the new node is provided:

masterha_check_status --conf=/etc/mha_master/app1.cnf

masterha_check_repl --conf=/etc/mha_master/app1.cnf

Once both checks pass, run the Manager again, this time writing to the log file:

masterha_manager --conf=/etc/mha_master/app1.cnf > /etc/mha_master/manager.log 2>&1

Seventh, notes on bringing the new node online and recovering from failover

(1) In a production environment, when your master node goes down, be sure to take a backup on a slave node, use that backup to manually rebuild the failed master as a slave node, and indicate from which log file and position replication should start (see the sketch after this list).

(2) After every automatic failover, the replication health check will keep failing at startup until the failed master node has been manually repaired and rejoined, unless you change the configuration file accordingly.

(3) After the failed master node has been manually repaired and demoted to a slave node, run the check command again:

masterha_check_repl --conf=/etc/mha_master/app1.cnf

app1 (pid:3211) is running(0:PING_OK), master:172.16.5.103

(4) Run the Manager again; the recovery succeeds:

masterha_manager --conf=/etc/mha_master/app1.cnf
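A sketch of the backup step referenced in point (1), assuming mysqldump with --master-data=2 so that the dump records the binary log coordinates as a comment (run it, for example, on the newly promoted master; the file name is illustrative):

# dump everything plus the replication coordinates
mysqldump --all-databases --single-transaction --master-data=2 > /backup/all.sql

# the recorded coordinates appear near the top of the dump:
head -n 30 /backup/all.sql | grep 'CHANGE MASTER'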

