A comprehensive analysis of MySQL's MHA and Mycat


I. Introduction

MHA:

You can think of it as a tool for monitoring MySQL: when the master goes down, it promotes one slave to be the new master and makes the other slave replicate from the new master.

So the MHA architecture here is three databases, already configured in master-slave mode in advance (one master, two slaves); MHA can manage multiple MySQL master-slave clusters. The VIP failover is achieved through keepalived. The overall architecture is shown below (images borrowed from the Web):

Mycat:

Mycat is an open-source tool for read-write separation and for splitting databases and tables (sharding). I do not use the sharding features here, only read-write separation. Mycat implements read-write separation through its configuration files; the configuration is fairly simple and is described in detail below. The architecture adopted is shown in the diagram:

II. MHA Setup and Installation

2.1 Setup (the one-master-two-slave replication is already in place and is not covered here)

The software is divided into two packages: the manager package and the node package. Download: https://pan.baidu.com/s/1D6v6yPeCTecaB68LwPZJ2A, password: oyez

All nodes need the node package installed, and then all nodes need the Perl dependency package perl-DBD-MySQL.
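A minimal install sketch for the steps above, assuming CentOS-style hosts and the 0.53 tarballs from the link above (the extra Perl dependency names for the manager are standard MHA requirements, not listed in this article):

```shell
# on every node: Perl dependency plus the MHA node package
yum install -y perl-DBD-MySQL
tar xf mha4mysql-node-0.53.tar.gz
cd mha4mysql-node-0.53
perl Makefile.PL && make && make install

# on the manager host only (it can be one of the nodes)
yum install -y perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager
tar xf mha4mysql-manager-0.53.tar.gz
cd mha4mysql-manager-0.53
perl Makefile.PL && make && make install
```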

If you still lack other packages, install them as the errors come up!

One more thing to note: the manager package can be deployed on a separate server or on one of the nodes.

All servers must establish passwordless SSH key login to each other (mutual trust).
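A sketch of the mutual-trust setup, using this article's example IPs (run on each of the three servers; `ssh-copy-id` prompts for the password once per host):

```shell
# generate a key pair without a passphrase, then push the public key to every server
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for host in 192.168.0.50 192.168.0.60 192.168.0.70; do
    ssh-copy-id root@$host
done
```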

2.1.1 Manager node

mkdir -p /etc/masterha && cp mha4mysql-manager-0.53/samples/conf/app1.cnf /etc/masterha/

[server default]
manager_workdir=/var/log/masterha/app1.log           # manager's working directory
manager_log=/var/log/masterha/app1/manager.log       # manager's log
master_binlog_dir=/data/mysql                        # where the master saves its binlogs, so MHA can find them (here the MySQL data directory)
master_ip_failover_script=/usr/local/bin/master_ip_failover            # switch script for automatic failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change  # switch script for manual switching
password=123456                                      # password of the MySQL root user, i.e. the monitoring user created earlier
user=root                                            # monitoring user
ping_interval=1                                      # interval between ping probes of the master; default is 3 seconds, failover is triggered after three failed attempts
remote_workdir=/tmp                                  # where the remote MySQL binlogs are saved when a switch occurs
repl_password=123456                                 # password of the replication user
repl_user=repl                                       # replication user name
report_script=/usr/local/send_report                 # alert script run after a switchover
secondary_check_script=/usr/local/bin/masterha_secondary_check -s server03 -s server02
shutdown_script=""                                   # script to shut down the failed host after a failure (mainly to avoid split brain; not used here)
ssh_user=root                                        # SSH login user name

[server1]
hostname=192.168.0.50
port=3306

[server2]
hostname=192.168.0.60
port=3306
candidate_master=1     # candidate master: this slave is promoted to primary after a failover, even if it is not the most up-to-date slave in the cluster
check_repl_delay=0     # by default MHA will not pick a slave that is more than 100MB of relay logs behind the master, because recovering it takes long; setting check_repl_delay=0 makes MHA ignore replication delay when choosing the new master, which is very useful together with candidate_master=1, since the candidate must become the new master during the switchover

[server3]
hostname=192.168.0.70
port=3306

app1.cnf

      

Attention:

During a switchover, MHA relies on the relay logs to recover the other slaves, so automatic relay log purging must be turned off here and the relay logs removed manually. By default, a slave deletes its relay logs automatically once the SQL thread has finished executing them. In an MHA environment, however, those relay logs may be needed to restore other slaves, so automatic deletion must be disabled. Periodic cleanup of the relay logs then has to take replication delay into account: on an ext3 filesystem, deleting a large file takes a certain amount of time and can cause serious replication delay. To avoid this, a hard link to the relay log is created first, because on Linux deleting a large file is fast as long as a hard link to it still exists. (The same hard-link trick is commonly used in MySQL when dropping a large table.)

The MHA node package includes the purge_relay_logs command-line tool for this: it creates hard links for the relay logs, executes SET GLOBAL relay_log_purge = 1, waits a few seconds for the SQL thread to switch to a new relay log, and then executes SET GLOBAL relay_log_purge = 0.
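A typical way to run it is from cron on each slave; a sketch (the credentials and log path are illustrative, not from this article, and the time should be staggered per host to avoid simultaneous purges):

```shell
# crontab entry: purge relay logs daily at 04:00, keeping relay_log_purge=0 afterwards
0 4 * * * /usr/bin/purge_relay_logs --user=root --password=123456 \
    --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1
```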

   Check the SSH communication between the nodes:

masterha_check_ssh --conf=/etc/masterha/app1.cnf

All checks must report success; if not, track down the cause of the error. In my case, two nodes could SSH to each other manually without any problem, yet never passed this check. My solution was to delete the .ssh directory on each, regenerate the key pairs, and re-establish trust.

    Then check the replication situation:

masterha_check_repl --conf=/etc/masterha/app1.cnf

If everything is healthy, the output should end with "MySQL Replication Health is OK."

Before running this check, you need to configure keepalived first, because the master_ip_failover script will go looking for the keepalived VIP. If keepalived is not yet configured, comment out the master_ip_failover_script=/usr/local/bin/master_ip_failover line in app1.cnf first.

2.2 Introducing keepalived into MHA (when the MySQL service dies, MHA stops keepalived)

To introduce the keepalived service into MHA, we only need to modify the master_ip_failover script that is triggered on switchover, adding keepalived handling for when the master goes down.

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,         $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,   $new_master_port
);

my $vip           = '192.168.0.88';
my $ssh_start_vip = "/etc/init.d/keepalived start";
my $ssh_stop_vip  = "/etc/init.d/keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the status of the script.. OK \n";
        # `ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

master_ip_failover

Because I do not know Perl, this script also came from the Internet and is untested, so I cannot vouch for it. Instead, I use Zabbix monitoring: when a trigger detects that mysql-master is abnormal, a script kills keepalived, so the VIP jumps to the new master.

2.3 Summary:

Current high-availability schemes such as MMM, Heartbeat+DRBD, MySQL Cluster, and Percona Galera Cluster can all provide database high availability to some extent. Each of these tools has advantages and disadvantages; when choosing one, focus mainly on the business and its data-consistency requirements. For scenarios requiring both high database availability and data consistency, the MHA architecture is recommended.

III. Building Mycat

3.1 Construction

Mycat is easy to set up: just unpack it. The main work is in the configuration files, chiefly server.xml and schema.xml.

server.xml: (primarily configures Mycat's user name and password, and the schemas that user can manage)
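A minimal sketch of the user section of server.xml (the user name, password, and schema name are illustrative; this article does not list its actual values):

```xml
<user name="mycat">
    <property name="password">123456</property>
    <!-- logical schemas this user is allowed to manage -->
    <property name="schemas">testdb</property>
</user>
```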

    

    schema.xml: (Configure read-write separation)
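A read-write-separation sketch for schema.xml, using this article's VIP 192.168.0.88 as the write host and one slave as the read host (schema, user, and password values are illustrative). balance="1" sends reads to the readHost, and switchType="2" with the "show slave status" heartbeat lets Mycat follow a master-slave switch:

```xml
<schema name="testdb" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"/>
<dataNode name="dn1" dataHost="host1" database="test"/>
<dataHost name="host1" maxCon="1000" minCon="10" balance="1"
          writeType="0" dbType="mysql" dbDriver="native"
          switchType="2" slaveThreshold="100">
    <heartbeat>show slave status</heartbeat>
    <!-- writes go to the VIP held by the current master -->
    <writeHost host="hostM1" url="192.168.0.88:3306" user="root" password="123456">
        <!-- reads go to a slave -->
        <readHost host="hostS1" url="192.168.0.60:3306" user="root" password="123456"/>
    </writeHost>
</dataHost>
```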

3.2 Cluster setup

Two or more Mycat servers carry identical configuration and have no direct connection to each other. The diagram in the introduction already makes this clear: keepalived+nginx proxy and forward requests to Mycat, achieving high availability. No further elaboration here.

IV. Additional Items

Presumably some readers will ask: why not use four servers instead, with two masters in master-master replication, keepalived handling the VIP failover between them, and both slaves pointed at the VIP via CHANGE MASTER TO? That would also achieve high availability, without a third-party tool to monitor and trigger the switch.
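In that alternative, each slave would replicate from the VIP rather than from a physical master; a sketch (the binlog coordinates and credentials are illustrative):

```sql
CHANGE MASTER TO
    MASTER_HOST = '192.168.0.88',   -- the keepalived VIP, not a physical host
    MASTER_PORT = 3306,
    MASTER_USER = 'repl',
    MASTER_PASSWORD = '123456',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 4;
START SLAVE;
```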

Disadvantages:

1. If your traffic is very heavy and the application layer is already grouped by business module, then the database layer has to be grouped too. With three groups, the MHA scheme uses at most 10 servers (3 x 3 databases plus one manager), while the scheme above requires 12.

2. Master-master replication puts considerable load on the servers and is prone to all kinds of problems; in particular, data synchronization issues can leave the slaves without complete data.

Advantages:

1. No reliance on third-party tools is required

2. The learning cost is also relatively low

V. Summary

To sum up, I recommend building MHA monitoring to achieve automatic failover on downtime. (To repeat the point made above: the relay logs are kept because they are used to restore the slaves' data.)
