MHA (Master High Availability) is a relatively mature high-availability solution for MySQL. When MHA detects a failure of the master node, it promotes the slave holding the most recent data to become the new master, and during failover it avoids consistency problems by obtaining additional information from the other nodes. MHA also provides online switching of the master node.
The MHA service has two roles, MHA Manager (management node) and MHA Node (data node):
MHA Manager: typically deployed on a separate machine to manage multiple master/slave clusters; each master/slave cluster is called an application.
MHA Node: runs on every MySQL server (master, slave, and manager); it speeds up failover with scripts that monitor the server and can parse and purge logs.
Environment
There are four nodes in this experimental environment; their roles are assigned as follows:
manager: MHA Manager
master: MariaDB master
slave1: MariaDB slave
slave2: MariaDB slave
Set the hostname of each node, and add the following entries to the /etc/hosts file on every node:
172.16.1.2 manager.zrs.com manager
172.16.1.3 master.zrs.com master
172.16.1.4 slave1.zrs.com slave1
172.16.1.5 slave2.zrs.com slave2
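On a systemd-based distribution (a fair assumption here, since this walkthrough later uses systemctl), the hostname of each node can be set with hostnamectl; for example, on the manager node:
[root@manager ~]# hostnamectl set-hostname manager.zrs.com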
Configuration of the initial master node (master):
server_id=1
relay_log=relay-log
log_bin=master-log
Configuration of node slave1:
server_id=2
relay_log=relay-log
log_bin=master-log
relay_log_purge=0
read_only=1
Configuration of node slave2:
server_id=3
relay_log=relay-log
log_bin=master-log
relay_log_purge=0
read_only=1
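After editing the configuration files, restart MariaDB on each node so the settings take effect, and spot-check one of them. A minimal check, assuming the service is named mariadb:
~]# systemctl restart mariadb
MariaDB [(none)]> SHOW VARIABLES LIKE 'server_id';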
Build the master-slave replication architecture as follows.
Master server
Grant replication privileges to the slaves, then flush privileges:
MariaDB [(none)]> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'172.16.1.4' IDENTIFIED BY 'replpass';
MariaDB [(none)]> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'172.16.1.5' IDENTIFIED BY 'replpass';
MariaDB [(none)]> FLUSH PRIVILEGES;
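The binary log file name and position used in the CHANGE MASTER TO statement below must be read from the master first; run the following there and note the File and Position columns (in this walkthrough they are master-log.000003 and 245):
MariaDB [(none)]> SHOW MASTER STATUS;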
Slave server configuration
Both slaves point at the master:
MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='172.16.1.3', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='master-log.000003', MASTER_LOG_POS=245;
Start the I/O thread and the SQL thread:
MariaDB [(none)]> START SLAVE IO_THREAD;
MariaDB [(none)]> START SLAVE SQL_THREAD;
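It is worth confirming on each slave that replication is healthy before moving on; both Slave_IO_Running and Slave_SQL_Running should report Yes:
MariaDB [(none)]> SHOW SLAVE STATUS\G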
Grant the MHA administrative account on all MySQL nodes:
MariaDB [(none)]> GRANT ALL ON *.* TO 'mhamngr'@'172.16.1.%' IDENTIFIED BY 'mhapass';
Establish passwordless SSH communication
The nodes in the MHA cluster rely on mutually trusted SSH communication for remote control and data management. Generate a key pair on the manager node, authorize it for remote login to the local host, and then copy the private key file and the authorized_keys file to all the remaining nodes.
[root@manager ~]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
80:59:23:b9:f8:ce:7e:86:66:ad:23:82:b3:d9:a8:81 root@manager.zrs.com
The key's randomart image is:
(randomart omitted)
[root@manager ~]# cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[root@manager ~]# chmod go= .ssh/authorized_keys
[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@master:/root/.ssh/
The authenticity of host 'master (172.16.1.3)' can't be established.
ECDSA key fingerprint is 65:f7:d6:d7:ae:3b:a2:dc:2b:bc:33:64:0e:47:11:b4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master' (ECDSA) to the list of known hosts.
root@master's password:
id_rsa              100% 1675     1.6KB/s   00:00
authorized_keys     100%  402     0.4KB/s   00:00
[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@slave1:/root/.ssh/
The authenticity of host 'slave1 (172.16.1.4)' can't be established.
ECDSA key fingerprint is eb:b4:c4:c4:aa:15:2c:f8:6b:e8:cc:59:75:7a:a5:89.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1' (ECDSA) to the list of known hosts.
root@slave1's password:
id_rsa              100% 1675     1.6KB/s   00:00
authorized_keys     100%  402     0.4KB/s   00:00
[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@slave2:/root/.ssh/
The authenticity of host 'slave2 (172.16.1.5)' can't be established.
ECDSA key fingerprint is be:2f:9f:d7:f8:2e:09:b1:7d:29:c2:76:53:0f:d2:94.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,172.16.1.5' (ECDSA) to the list of known hosts.
root@slave2's password:
id_rsa              100% 1675     1.6KB/s   00:00
authorized_keys     100%  402     0.4KB/s   00:00
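A quick way to confirm that every node is now reachable without a password is to loop over the hostnames defined in /etc/hosts earlier (run from the manager):
[root@manager ~]# for host in master slave1 slave2; do ssh $host 'hostname'; done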
Installing MHA
In addition to source packages, MHA is also distributed as RPM packages, available at https://code.google.com/p/mysql-master-ha/wiki/Downloads?tm=2. This installation uses the RPM packages; note that MHA Node must be installed on all hosts, the manager included.
Installing MHA Manager
RPM installation method:
[root@manager ~]# yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager
[root@manager ~]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
[root@manager ~]# rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm
Tarball installation method:
tar -zxf mha4mysql-manager-0.56.tar.gz
cd mha4mysql-manager-0.56
perl Makefile.PL
make
make install
Install MHA Node
RPM installation method:
~]# yum install perl-DBD-MySQL
~]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
Tarball installation method:
tar -zxf mha4mysql-node-0.56.tar.gz
cd mha4mysql-node-0.56
perl Makefile.PL
make
make install
Initialize MHA
The manager node needs a dedicated configuration file for each monitored master/slave cluster, and all master/slave clusters can share a global configuration. The global configuration file defaults to /etc/masterha_default.cnf and is optional. If only one master/slave cluster is monitored, the default server settings can also be provided directly in the application configuration file. The configuration file path for each application is user-defined; this experiment uses /etc/masterha/app1.cnf:
[server default]
user=mhamngr
password=mhapass
manager_workdir=/data/masterha/app1
manager_log=/data/masterha/app1/manager.log
remote_workdir=/data/masterha/app1
ssh_user=root
repl_user=repluser
repl_password=replpass
ping_interval=1
[server1]
hostname=172.16.1.3
candidate_master=1
[server2]
hostname=172.16.1.4
candidate_master=1
[server3]
hostname=172.16.1.5
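Optionally, the manager can be told to confirm a master failure over additional network routes before triggering a failover. A sketch of such a line for the [server default] section, using the masterha_secondary_check script bundled with MHA Manager (the -s hosts here are examples; choose machines with an independent route to the master):
secondary_check_script=masterha_secondary_check -s slave1 -s slave2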
Verify that SSH mutual-trust communication between the nodes is working:
[root@manager ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
If the last line of the output reads as follows, the check passed:
[INFO] All SSH connection tests passed successfully.
Check that the connection and replication parameters of the managed MySQL cluster are healthy:
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
If the last line of the output reads as follows, the check passed:
MySQL Replication Health is OK.
Start MHA
[root@manager ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &
View the status of the master node:
[root@manager ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:23265) is running(0:PING_OK), master:172.16.1.3
[root@manager ~]#
Stop MHA
[root@manager ~]# masterha_stop --conf=/etc/masterha/app1.cnf
Stopped app1 successfully.
MHA ships with a number of utility programs; the common ones are listed below.
Manager node:
- masterha_check_ssh: SSH environment checking tool that MHA relies on;
- masterha_check_repl: MySQL replication environment checking tool;
- masterha_manager: the main MHA service program;
- masterha_check_status: MHA running-status checking tool;
- masterha_master_monitor: MySQL master node availability monitoring tool;
- masterha_master_switch: master node switching tool;
- masterha_conf_host: adds or removes a configured node;
- masterha_stop: tool to stop the MHA service;
Node nodes:
- save_binary_logs: saves and copies the master's binary logs;
- apply_diff_relay_logs: identifies differing relay log events and applies them to other slaves;
- filter_mysqlbinlog: removes unnecessary ROLLBACK events (MHA no longer uses this tool);
- purge_relay_logs: purges relay logs without blocking the SQL thread (see the cron sketch after these lists);
Custom extensions:
- secondary_check_script: checks master availability through multiple network routes;
- master_ip_failover_script: updates the master IP used by the application;
- shutdown_script: forcibly shuts down the master node;
- report_script: sends reports;
- init_conf_load_script: loads initial configuration parameters;
- master_ip_online_change_script: updates the master node IP address;
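Because the slaves were configured with relay_log_purge=0, relay logs accumulate and should be purged periodically with purge_relay_logs. A minimal /etc/cron.d-style sketch for slave1, assuming the mhamngr account created earlier (the --host value must match the 172.16.1.% grant, so adjust it per node):
0 * * * * root /usr/bin/purge_relay_logs --user=mhamngr --password=mhapass --host=172.16.1.4 --workdir=/data/masterha/app1 --disable_relay_log_purge >> /var/log/purge_relay_logs.log 2>&1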
Test failover
Stop the MariaDB service on the master node:
[root@master ~]# killall -9 mysqld mysqld_safe
The manager log shows that node 172.16.1.3 went down and 172.16.1.4 was promoted to the new master.
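To inspect this yourself, read the manager log configured in app1.cnf (the exact wording of the failover messages varies by MHA version):
[root@manager ~]# tail -n 50 /data/masterha/app1/manager.log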
Restart masterha_manager automatically with Zabbix
Overview of the steps
The Zabbix server and agent installation steps are skipped here; I installed both the Zabbix server and the Zabbix agent on the manager host. The goal is to monitor the nohup-started masterha_manager process set up above: as soon as this background process is found to have exited, the conditions and triggers configured in Zabbix immediately invoke a script, so that the manager process is always running on the manager host.
Configuration needed on the agent side:
1. Give the zabbix user the administrative rights it needs
Edit the /etc/sudoers file.
Comment out the first line below: by default the system only lets users logged in on a TTY execute commands, and the zabbix user cannot log in to the system, so this line has to be commented out. Then add the second line. (Granting everything like this is not safe; use a more restrictive rule in a production environment.)
#Defaults    requiretty
zabbix ALL=(ALL) NOPASSWD: ALL
2. Allow the agent process to execute remote commands
Enable remote commands by setting the following option to 1 in the /etc/zabbix/zabbix_agentd.conf configuration file:
EnableRemoteCommands=1
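The custom item key masterha.manager queried later is not built into Zabbix, and its definition is not shown in this article. Given that the values observed below (2 when the manager is stopped, 0 when it is running) match the exit codes of masterha_check_status, a plausible UserParameter for zabbix_agentd.conf would be:
UserParameter=masterha.manager,sudo masterha_check_status --conf=/etc/masterha/app1.cnf >/dev/null 2>&1; echo $?
This would also be why the zabbix user was given sudo rights in step 1.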
3. Start the service
[root@manager ~]# systemctl start zabbix-agent.service
4. Configure hosts, items, triggers, and actions (action, conditions, operations) in the Zabbix web interface.
Note that the operations step needs a command that calls the script starting the MHA manager program:
[root@manager ~]# cat mannager.sh
nohup masterha_manager --conf=/etc/masterha/app1.cnf > /data/masterha/app1/manager.log 2>&1 &
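The script must be executable for the Zabbix action to run it:
[root@manager ~]# chmod +x mannager.sh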
5. Test whether Zabbix, driven by the configured trigger and action, calls the script and brings the manager back up in the background.
Kill the nohup-started process:
[root@manager ~]# kill -9 <PID>    # look up the PID first
Fetch the item value manually once:
[root@manager ~]# zabbix_get -s 172.16.1.2 -k masterha.manager
2
Fetch it manually again:
[root@manager ~]# zabbix_get -s 172.16.1.2 -k masterha.manager
0
When the value shown here is 0, and the ps command confirms that the process has actually started again, the Zabbix-driven restart of masterha_manager has succeeded.
zabbix_get is the Zabbix command-line tool for fetching item values:
-s    IP address of the host to query (local or remote)
-p    zabbix_agentd port
-k    item key