Online switching is also supported. Switching from the currently running master to a new master takes only a short time (0.5-2 seconds), during which only write operations are blocked; reads are unaffected. This makes master hardware maintenance easy.
2. MHA architecture
MHA consists of MHA Manager and MHA Node.
[Figure 1: MHA architecture]
3. MHA Manager
MHA Manager runs a set of tools. For example, the masterha_manager tool automatically monitors the MySQL master and performs master failover; other tools can trigger master failover manually, perform online master switching, or check connectivity. One Manager can manage multiple master-slave clusters.
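For reference, the stock MHA Manager package ships with tools such as the following:
masterha_master_switch: manual failover and online master switching
masterha_check_ssh: SSH connectivity check
masterha_check_repl: replication health check
masterha_check_status: query the current status of the running manager
masterha_stop: stop the running manager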
4. MHA Node
MHA Node is deployed on every server running MySQL, whether master or slave. It provides three main functions:
I. Save binary logs
If the failed master is still accessible, it copies the master's binary logs.
II. Apply differential relay logs
It generates differential relay logs from the slave that holds the most recent data, then applies those differences to the other slaves.
III. Clear relay logs
It deletes relay logs without stopping the SQL thread.
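These functions correspond to the MHA Node scripts save_binary_logs, apply_diff_relay_logs, and purge_relay_logs. As a sketch, relay-log cleanup can be run periodically on each slave like this (the workdir is an assumption; the mha user and password match the account created later in this article):
# purge_relay_logs --user=mha --password=20140901 --disable_relay_log_purge --workdir=/tmp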
5. How MHA works
[Figure 2: MHA failover process]
When the master fails, MHA compares the positions in the master's binlog that each slave's I/O thread has read and selects the slave with the most recent position as the latest slave. Differential relay logs for the other slaves are generated by comparing them against the latest slave. MHA applies the binary logs saved from the failed master to the latest slave and promotes it to master. Finally, it applies the corresponding differential relay logs to the other slaves and points them at the new master to resume replication.
When MHA performs master failover, MHA Node attempts to access the failed master via SSH. If it is reachable (i.e., the failure is not a hardware failure such as damaged InnoDB data files), the binary logs are saved so as to minimize data loss. Combining MHA with semi-synchronous replication can greatly reduce the risk of data loss.
6. Existing high-availability solutions
Heartbeat + DRBD
Overhead: a passive master server must be added, and it does not serve any application traffic.
Performance: to achieve high availability in a DRBD replication environment, innodb_flush_log_at_trx_commit and sync_binlog must be set to 1, which reduces write performance.
Consistency: necessary binlog events on the master may be lost, leaving the slave unable to continue replication and causing data consistency problems.
MySQL Cluster
MySQL Cluster does achieve high availability, but it requires the NDB storage engine, and the SQL nodes are a single point of failure (SPOF).
Semi-synchronous replication (5.5 +)
Semi-synchronous replication greatly reduces the risk that "binlog events exist only on the failed master".
At commit time it only guarantees that at least one slave (not all of them) has received the binlog, so some slaves may still be missing the latest events.
Global transaction ID
Adding a global transaction ID (GTID) to the binary log requires changing the binlog format; this is not supported in versions 5.1/5.5.
There are many ways to implement global transaction IDs, but none of them avoids problems of complexity, performance, data loss, or consistency.
PXC
PXC provides high service availability with synchronous data replication. However, it supports only the InnoDB engine, every table must have a primary key, and lock conflicts and deadlocks are common.
7. MHA advantages
1. Fast failover
In a master-slave replication cluster, as long as the slaves are not lagging, MHA can usually complete failover within seconds: a master failure is typically detected within 9-10 seconds, the failed master can then be shut down within 7-10 seconds to avoid split-brain, and the differential relay logs are applied to the new master within a few more seconds, so total downtime is normally 10-30 seconds. After the new master is promoted, MHA recovers the remaining slaves in parallel, so no matter how many slaves there are, master recovery time is not affected.
DeNA uses MHA in master-slave environments with more than 150 MySQL instances (major versions 5.0/5.1). When a master failed, MHA completed failover within 4 seconds, which is impossible with traditional active/passive cluster solutions.
2. Master failure does not cause data inconsistency
When the current master fails, MHA automatically identifies the differences between the slaves' relay logs and applies them to every slave, so all slaves stay consistent as long as they are alive. Used together with semi-synchronous replication, this can ensure that no data is lost.
3. No need to modify current MySQL settings
One of MHA's key design principles is to be as easy to use as possible. MHA works in traditional master-slave replication environments on MySQL 5.0 and later. Unlike other high-availability solutions, MHA does not require changing the existing MySQL deployment, and it supports both asynchronous and semi-synchronous master-slave replication.
Starting, stopping, upgrading, downgrading, installing, or uninstalling MHA requires no changes to (including starting or stopping) MySQL replication. To upgrade MHA to a new version, you do not need to stop MySQL: simply replace MHA with the new version and restart MHA Manager.
MHA runs on native MySQL versions starting from 5.0. Some other MySQL high-availability solutions require specific versions (such as MySQL Cluster or MySQL with global transaction IDs), but you may not want to migrate applications just to obtain master high availability. In most cases old MySQL applications are already deployed, and you do not want to spend time migrating to a different storage engine or a newer release. MHA works on native MySQL 5.0/5.1/5.5, so no migration is required.
4. No need to add a large number of servers
MHA consists of MHA Manager and MHA Node. MHA Node runs on the MySQL servers that require failover/recovery, so it needs no additional servers. MHA Manager runs on a dedicated server, so one machine must be added (two for high availability of the Manager itself), but one MHA Manager can monitor a large number (even hundreds) of independent masters, so the server count barely grows. It is even fine to run MHA Manager on one of the slaves. In summary, MHA adds few additional servers.
5. No performance degradation
MHA works with asynchronous or semi-synchronous MySQL replication. When monitoring the master, MHA only sends a ping packet every few seconds (3 seconds by default) and does not send heavy queries, so you can expect performance as fast as native MySQL replication.
6. Applicable to any storage engine
MHA works with any storage engine that MySQL replication supports, not just InnoDB. It can be used even in legacy MyISAM environments that are hard to migrate.
# cat /etc/redhat-release
CentOS release 6.6 (Final)
# uname -rm
2.6.32-504.el6.x86_64 x86_64
Create an MHA user
Run the following on the master node of the master-slave environment (the grant replicates to the slaves):
mysql> grant all privileges on *.* to 'mha'@'192.168.%' identified by '20140901';
Create soft links
If MySQL was not installed via yum, run the following two commands on every MySQL server, whether master or slave.
# ln -s /application/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
# ln -s /application/mysql/bin/mysql /usr/bin/mysql
Configure SSH public key authentication
Where several servers require the same operation, only one server is shown here.
Add a unified user
Using the root user in a production environment is neither secure nor standard practice. A unified common user also keeps the environment consistent and easy to manage, so create one.
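A minimal sketch, run on every host; the user name mha and its password are assumptions, not from the original:
# useradd mha
# echo "20140901" | passwd --stdin mha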
Create a key pair
[root@mha-manager ~]# ssh-keygen -t dsa
Distribute public keys to various hosts
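For example (a sketch; the host IPs are assumptions):
[root@mha-manager ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@192.168.1.11
[root@mha-manager ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@192.168.1.12
[root@mha-manager ~]# ssh-copy-id -i /root/.ssh/id_dsa.pub root@192.168.1.13
Note that MHA requires every node to be able to SSH to every other node, so repeat the key generation and distribution on each host.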
Verify
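A quick check from mha-manager (IPs assumed as above); each command should print the remote hostname without prompting for a password:
[root@mha-manager ~]# ssh 192.168.1.11 hostname
[root@mha-manager ~]# ssh 192.168.1.12 hostname
[root@mha-manager ~]# ssh 192.168.1.13 hostname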
Configure hosts
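A sketch of /etc/hosts, identical on every server; the IPs and the mysql-master/mysql-slave01 names are assumptions (only mha-manager and mysql-slave02 appear in this article):
192.168.1.10 mha-manager
192.168.1.11 mysql-master
192.168.1.12 mysql-slave01
192.168.1.13 mysql-slave02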
Deploy MHA Node
MHA Node must run on every server that runs the MySQL service, whether master or slave. MHA Manager depends on MHA Node, so MHA Node must also be installed on the server running MHA Manager (you can also run MHA Manager on any slave). Because the deployment steps are identical everywhere, only one installation is shown (on the mha-manager server).
Create a directory
[root@mha-manager ~]# mkdir /softs
Install MHA Node
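A sketch of the installation, assuming the RPM has been downloaded to /softs; the package version 0.56 is an assumption:
[root@mha-manager ~]# yum install -y perl-DBD-MySQL
[root@mha-manager ~]# cd /softs
[root@mha-manager softs]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm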
The steps for deploying other MySQL servers are the same.
Deploy MHA Manager
MHA Manager runs only on the server acting as the manager; of course, it can also be deployed on any slave.
Install MHA Manager
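A sketch, again assuming version 0.56 and the /softs download directory; the listed Perl packages are MHA Manager's dependencies:
[root@mha-manager softs]# yum install -y perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
[root@mha-manager softs]# rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm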
Set up a standard MHA directory layout
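For example (the paths are assumptions; any layout works as long as app1.cnf points to it):
[root@mha-manager ~]# mkdir -p /etc/mha /var/log/mha/app1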
Configure app1.cnf
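A minimal /etc/mha/app1.cnf sketch; the host IPs, the replication account rep/rep123, and the log and binlog paths are assumptions, while user/password match the MHA account created earlier:
[server default]
manager_workdir=/var/log/mha/app1
manager_log=/var/log/mha/app1/manager.log
master_binlog_dir=/application/mysql/data
user=mha
password=20140901
ping_interval=3
repl_user=rep
repl_password=rep123
ssh_user=root
[server1]
hostname=192.168.1.11
port=3306
[server2]
hostname=192.168.1.12
port=3306
candidate_master=1
[server3]
hostname=192.168.1.13
port=3306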
Configure the global configuration file
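MHA also reads a global configuration file, /etc/masterha_default.cnf, whose [server default] section holds settings shared by all monitored applications. A minimal sketch using the values assumed above:
[server default]
user=mha
password=20140901
ssh_user=root
repl_user=rep
repl_password=rep123
ping_interval=3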
Check configuration
The following checks are all performed on mha-manager.
Check SSH connectivity
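Using the configuration path assumed above:
[root@mha-manager ~]# masterha_check_ssh --conf=/etc/mha/app1.cnf
The check should report that all SSH connection tests passed successfully.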
Check master-slave replication status
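Again with the assumed configuration path:
[root@mha-manager ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
A healthy setup ends with "MySQL Replication Health is OK."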
Start manager
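A common way to start the manager in the background (paths as assumed above):
[root@mha-manager ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf > /var/log/mha/app1/manager.log 2>&1 &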
Check the manager status
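[root@mha-manager ~]# masterha_check_status --conf=/etc/mha/app1.cnf
When monitoring is healthy, it prints something like "app1 (pid:...) is running(0:PING_OK)" together with the current master.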
Master failover test
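A simple way to simulate a master crash, assuming the mysql-master host name from the hosts sketch above and a mysqld init script:
[root@mysql-master ~]# /etc/init.d/mysqld stop
MHA Manager should detect the failure, promote the candidate slave, repoint the remaining slaves, and then exit (masterha_manager stops after completing one failover).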
View on mysql-slave02
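A sketch of the verification on mysql-slave02:
mysql> show slave status\G
Master_Host in the output should now show the address of the newly promoted master.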