Deployment tutorial for MHA, the MySQL high-availability failover solution

Source: Internet
Author: User


MHA Introduction
MHA (Master High Availability) is a MySQL failover solution written in Perl by Yoshinori Matsunobu, a MySQL expert at DeNA in Japan, to keep a database system highly available. Failover completes within 10-30 seconds of downtime, preserves master-slave consistency, avoids the cost of buying extra servers, does not affect server performance, and is easy to install without changing the existing deployment.
 
Online switching is also supported: moving from the currently running master to a new master takes only a short time (0.5-2 seconds), during which only write operations are blocked; reads are not affected. This makes host hardware maintenance easy.
 
For systems with strict availability and data-consistency requirements, MHA provides the functionality needed to perform maintenance with near-continuous uptime.
 
Advantages:
 
1. Automatic master monitoring and Failover
 
In an existing master-slave replication environment, MHA can monitor master host faults and automatically failover.
 
Even if some slaves have not yet received the latest relay log events, MHA automatically identifies the missing events from the most up-to-date slave and applies them to the other slaves, so all slaves end up consistent. MHA fails over in seconds: typically 9-12 seconds to detect the master failure, optionally up to 7 seconds to power off the failed host to avoid split-brain, then applying the differential relay logs and registering the new master, for a total downtime of usually 10-30 seconds. In addition, you can designate a preferred candidate master in the configuration file. Because MHA repairs consistency between slaves, DBAs do not have to handle consistency problems themselves.
 
After the new master is promoted, the remaining slaves are recovered in parallel. Even with thousands of slaves, master recovery time is unaffected and slave recovery completes quickly.
 
DeNA uses MHA in more than 150 master-slave environments. When one of its masters crashed, MHA completed failover in 4 seconds, something active/passive cluster solutions cannot match.
 
2. Interactive (manual) master failover
 
MHA can also be used purely for failover, without monitoring the master; in that mode MHA serves only as an interactive failover tool.
 
3. Non-interactive failover
 
Non-interactive failover is also available: MHA does not monitor the master, but failover is still automated. This is useful when other software already monitors the master; for example, Pacemaker (Heartbeat) can detect master failure and handle VIP takeover, while MHA performs the failover and slave promotion.
 
4. Switch the master to different hosts online
 
In many cases it is necessary to move the master to another host (for example, to replace a RAID controller or upgrade the master's hardware). This is not a master crash but planned maintenance, which causes downtime and should finish as quickly as possible, so fast master switching and graceful blocking of writes are required. MHA provides both: the graceful master switch blocks write operations for only 0.5-2 seconds, which is acceptable in many cases even outside a scheduled maintenance window. This means the DBA can act easily whenever a machine swap or version upgrade is needed.
 
5. A master crash does not cause master-slave data inconsistency
 
After the master crashes, MHA automatically identifies the differences in relay log events among the slaves and applies the missing events to each one, so all slaves end up synchronized. Combined with semi-synchronous replication, there is almost no data loss.
 
 
6. MHA deployment does not affect the current environment settings
 
One of MHA's most important design goals is to keep things as simple as possible. Other HA solutions require changes to the MySQL deployment, but MHA works in any master-slave environment from MySQL 5.0 on without forcing the DBA to reconfigure it, and both asynchronous and semi-synchronous replication are supported. Starting, stopping, upgrading, downgrading, installing, or uninstalling MHA requires no changes to the MySQL master/slave setup (such as restarting MySQL).
 
When you need to upgrade MHA to a new version, you do not need to stop MySQL; just update the MHA packages and restart the MHA Manager.
 
MHA supports MySQL 5.0, 5.1, and 5.5 (5.6 should also be supported; the MHA developers had not yet documented 5.6 support when this was written). Some HA solutions require specific MySQL versions or features (such as MySQL Cluster, or MySQL with global transaction IDs), and you may not want to migrate applications just for master HA. In many cases, companies have deployed many traditional MySQL applications, and developers or DBAs do not want to spend time migrating data to different storage engines or to newer, bleeding-edge distributions.
 
7. No additional server fees
 
MHA consists of the MHA Manager and the MHA Node. The MHA Node runs on every MySQL server, while a single Manager machine can monitor more than 100 masters, so the total number of servers barely grows. Note that the Manager can also run on one of the slave machines.
 
8. No impact on performance
 
While monitoring the master, MHA only sends a ping packet every few seconds (3 seconds by default) and never sends heavy queries, so master-slave replication performance is not affected.
 
9. Any storage engine
 
MHA works with any storage engine that MySQL replication supports, not only the transaction-safe InnoDB engine. Even legacy environments on the MyISAM engine can use MHA without migration.


Installation and configuration methods
1. Sample environment

  • CentOS 6.4 x86_64
  • mysql-5.6.16-linux-glibc2.5-x86_64.tar.gz
  • mha4mysql-manager-0.54.tar.gz
  • mha4mysql-node-0.54.tar.gz

2. Server description

  • Manager 192.168.216.50
  • Master 192.168.216.51
  • Slave 192.168.216.52

Install MySQL on all three servers and use the manager host as the management node.

3. Configure SSH trust (passwordless login)

Manager:

          ssh-keygen -t rsa
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.50
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.51
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.52
          ssh 192.168.216.50 date
          ssh 192.168.216.51 date
          ssh 192.168.216.52 date

Master:

          ssh-keygen -t rsa
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.50
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.51
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.52
          ssh 192.168.216.50 date
          ssh 192.168.216.51 date
          ssh 192.168.216.52 date

Slave:

          ssh-keygen -t rsa
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.50
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.51
          ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.216.52
          ssh 192.168.216.50 date
          ssh 192.168.216.51 date
          ssh 192.168.216.52 date
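Once the keys are distributed, the trust can be verified from any node with a quick loop; this is a sketch using the tutorial's host addresses, where BatchMode makes ssh fail fast instead of prompting for a password when a key is missing:

```shell
# Hosts from the tutorial topology (adjust to your network)
HOSTS="192.168.216.50 192.168.216.51 192.168.216.52"
CHECKED=0
for h in $HOSTS; do
  # BatchMode=yes fails instead of prompting, so any host still
  # needing ssh-copy-id shows up as FAILED
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$h" date >/dev/null 2>&1; then
    echo "$h: OK"
  else
    echo "$h: FAILED - rerun ssh-copy-id for this host"
  fi
  CHECKED=$((CHECKED + 1))
done
echo "checked $CHECKED hosts"
```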

4. Install mysql

    groupadd mysql
    useradd -g mysql -s /bin/nologin -M mysql
    mkdir -pv /data/mysql
    tar -zxvf mysql-5.6.16-linux-glibc2.5-x86_64.tar.gz
    mv mysql-5.6.16-linux-glibc2.5-x86_64 /usr/local/mysql
    cd /usr/local/mysql/scripts
    ./mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/data/mysql

Create a mysql configuration file

     vim /usr/local/mysql/my.cnf

 

[mysql]

# CLIENT #
port = 3306
socket = /tmp/mysql.sock

[mysqld]

# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /tmp/mysql.sock
pid-file = /data/mysql.pid

# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP

# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000

# DATA STORAGE #
datadir = /data/mysql/

# BINARY LOGGING #
server_id = 1              # use a different server_id on each server
log-bin = /data/mysql-bin
expire-logs-days = 14
sync-binlog = 1

# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048

# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 64M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 592M

# LOGGING #
log-error = /data/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /data/mysql-slow.log

5. Configure permissions

Manager:

          grant all privileges on *.* to root@'127.0.0.1' identified by 'root';
          grant all privileges on *.* to root@'localhost' identified by 'root';
          grant all privileges on *.* to root@'192.168.216.50' identified by 'root';
          grant replication slave on *.* to slave@'192.168.216.50' identified by 'slave';
          grant replication slave on *.* to slave@'192.168.216.51' identified by 'slave';
          grant replication slave on *.* to slave@'192.168.216.52' identified by 'slave';

Master:

          grant all privileges on *.* to root@'127.0.0.1' identified by 'root';
          grant all privileges on *.* to root@'localhost' identified by 'root';
          grant all privileges on *.* to root@'192.168.216.50' identified by 'root';
          grant replication slave on *.* to slave@'192.168.216.50' identified by 'slave';
          grant replication slave on *.* to slave@'192.168.216.51' identified by 'slave';
          grant replication slave on *.* to slave@'192.168.216.52' identified by 'slave';

Slave:

          grant all privileges on *.* to root@'127.0.0.1' identified by 'root';
          grant all privileges on *.* to root@'localhost' identified by 'root';
          grant all privileges on *.* to root@'192.168.216.50' identified by 'root';
          grant all privileges on *.* to root@'192.168.216.51' identified by 'root';
          grant replication slave on *.* to slave@'192.168.216.50' identified by 'slave';
          grant replication slave on *.* to slave@'192.168.216.51' identified by 'slave';
          grant replication slave on *.* to slave@'192.168.216.52' identified by 'slave';
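To confirm the grants took effect, check that the replication account can log in from each host. The loop below only assembles the commands (a dry-run sketch using the tutorial's hosts and the slave/slave credentials) so you can run them where appropriate:

```shell
N=0
for h in 192.168.216.50 192.168.216.51 192.168.216.52; do
  # On a live deployment, run the echoed command from each node:
  echo "mysql -h $h -u slave -pslave -e 'SHOW GRANTS;'"
  N=$((N + 1))
done
echo "assembled $N checks"
```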

6. Install the dependency package

On the manager server, all of the following must be installed:

perl-Config-Tiny
perl-Params-Validate
perl-Parallel-ForkManager
perl-Log-Dispatch
|- perl-MIME-Lite-3.027-2.el6.noarch.rpm
     |- perl-MIME-Types-1.28-2.el6.noarch.rpm
     |- perl-Email-Date-Format-1.002-5.el6.noarch.rpm
     |- perl-MailTools-2.04-4.el6.noarch.rpm
          |- perl-TimeDate-1.16-11.1.el6.noarch.rpm
          |- perl-Data-ShowTable-3.3-3.4.noarch.rpm
|- perl-Mail-Sender-0.8.22-21.1.noarch.rpm
     |- perl-IO-Socket-SSL-1.31-2.el6.noarch.rpm
          |- perl-Net-LibIDN-0.12-3.el6.x86_64.rpm
          |- perl-Net-SSLeay-1.35-9.el6.x86_64.rpm
     |- perl-Win32API-Registry
|- perl-Mail-Sendmail-0.79_16-4.2.noarch.rpm

 

     rpm -ivh ncftp-debuginfo-3.2.3-1.3.x86_64.rpm
     rpm -ivh perl-Parallel-ForkManager-0.7.5-2.2.el6.rf.noarch.rpm
     rpm -ivh perl-Params-Validate-0.91-2.4.x86_64.rpm
     rpm -ivh perl-Config-Tiny-2.12-7.1.el6.noarch.rpm
     rpm -ivh perl-MIME-Types-1.28-2.el6.noarch.rpm
     rpm -ivh perl-Email-Date-Format-1.002-5.el6.noarch.rpm
     rpm -ivh perl-TimeDate-1.16-11.1.el6.noarch.rpm
     rpm -ivh perl-Data-ShowTable-3.3-3.4.noarch.rpm
     rpm -ivh perl-MailTools-2.04-4.el6.noarch.rpm
     rpm -ivh perl-MIME-Lite-3.027-2.el6.noarch.rpm
     rpm -ivh perl-Net-LibIDN-0.12-3.el6.x86_64.rpm
     rpm -ivh perl-Net-SSLeay-1.35-9.el6.x86_64.rpm
     rpm -ivh perl-IO-Socket-SSL-1.31-2.el6.noarch.rpm
     rpm -ivh perl-Mail-Sendmail-0.79_16-4.2.noarch.rpm
     rpm -ivh perl-Mail-Sender-0.8.22-21.1.noarch.rpm
     rpm -ivh perl-Log-Dispatch-2.22-7.3.noarch.rpm

 

If perl-DBD-MySQL-4.013-3.el6.x86_64.rpm causes an error when checking replication (masterha_check_repl), compile and install DBD::mysql manually:

     tar -zxvf DBD-mysql-4.027.tar.gz
     cd DBD-mysql-4.027
     perl Makefile.PL
     make && make install

Node Server Installation

     rpm -ivh ncftp-debuginfo-3.2.3-1.3.x86_64.rpm
     rpm -ivh perl-DBD-MySQL-4.013-3.el6.x86_64.rpm

7. Create symbolic links

     ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
     ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql

Add the MySQL library directory to the dynamic linker's search path:

     vim /etc/ld.so.conf.d/mysql-x86_64.conf
     # add this line to the file:
     /usr/local/mysql/lib
     ldconfig

8. Install mha4mysql-node-0.54.tar.gz on every server

     tar -zxvf mha4mysql-node-0.54.tar.gz
     cd mha4mysql-node-0.54
     perl Makefile.PL
     make && make install

9. Install mha4mysql-manager-0.54.tar.gz on the management server

     tar -zxvf mha4mysql-manager-0.54.tar.gz
     cd mha4mysql-manager-0.54
     perl Makefile.PL
     make && make install
     mkdir -pv /etc/masterha
     mkdir -pv /masterha/app1
     cp samples/conf/* /etc/masterha
     cp samples/scripts/* /usr/local/bin

 

     vim /etc/masterha/app1.cnf

     [server default]
     manager_workdir=/masterha/app1
     manager_log=/masterha/app1/manager.log
     user=root
     password=root
     ssh_user=root
     repl_user=slave
     repl_password=slave
     shutdown_script=""
     #master_ip_failover_script="/usr/local/bin/masterha_ip_failover"
     master_ip_online_change_script="/usr/local/bin/masterha_ip_failover"
     report_script=""

     [server1]
     hostname=192.168.216.50
     master_binlog_dir="/data/mysql/"
     candidate_master=1

     [server2]
     hostname=192.168.216.51
     master_binlog_dir="/data/mysql/"
     candidate_master=1

     [server3]
     hostname=192.168.216.52
     master_binlog_dir="/data/mysql/"
     candidate_master=1
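The 3-second health-check interval mentioned earlier is tunable via the ping_interval parameter in [server default]. This optional fragment shows it with its default value:

```ini
[server default]
# seconds between health-check pings to the master (MHA default: 3)
ping_interval=3
```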

10. Test the ssh connection

     masterha_check_ssh --conf=/etc/masterha/app1.cnf

11. Test replication

    masterha_check_repl --conf=/etc/masterha/app1.cnf

12. Start the manager process

 

   masterha_manager --conf=/etc/masterha/app1.cnf
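masterha_manager runs in the foreground and blocks, so in practice it is usually started under nohup, and masterha_check_status reports whether the manager is monitoring. The snippet below only assembles those two commands as a dry-run sketch (conf path and log location are this tutorial's):

```shell
CONF=/etc/masterha/app1.cnf
# Start detached, logging to the manager workdir (run on the manager host):
START_CMD="nohup masterha_manager --conf=$CONF > /masterha/app1/manager.log 2>&1 &"
# Ask the running manager whether monitoring is healthy:
STATUS_CMD="masterha_check_status --conf=$CONF"
echo "$START_CMD"
echo "$STATUS_CMD"
```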

13. Test failover

Shut down MySQL on the master and check the slave's status to verify that replication has switched to the new master.
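Besides inspecting the slave, the manager log records the outcome of the switch. This sketch greps for the completion message using an illustrative log excerpt; the real file in this setup is /masterha/app1/manager.log, and the exact wording is an assumption based on MHA 0.5x logs:

```shell
LOG=/tmp/manager.log   # stand-in for /masterha/app1/manager.log
# Illustrative line of the kind MHA appends after a successful switch:
cat > "$LOG" <<'EOF'
Master failover to 192.168.216.52(192.168.216.52:3306) completed successfully.
EOF
if grep -q 'completed successfully' "$LOG"; then
  echo "failover finished"
fi
```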

14. Set the ip address for failover

     vim /etc/masterha/app1.cnf
          master_ip_failover_script="/usr/local/bin/masterha_ip_failover"
          master_ip_online_change_script="/usr/local/bin/masterha_ip_failover"

Edit the failover script and set the VIP to 192.168.216.100.

     vim /usr/local/bin/masterha_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

# my $vip = '172.16.21.119/24'; # Virtual IP
my $vip = '192.168.216.100/24'; # Virtual IP
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";

    if ( $command eq "stop" || $command eq "stopssh" ) {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage the master ip address at a global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        # all arguments are passed.
        # If you manage the master ip address at a global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

 

Test script

/usr/local/bin/masterha_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.216.51 --orig_master_ip=192.168.216.51 --orig_master_port=3306

The virtual IP must be brought up manually the first time; after that, it is transferred automatically when the master fails.

/usr/local/bin/masterha_ip_failover --command=start --ssh_user=root --orig_master_host=192.168.216.51 --orig_master_ip=192.168.216.51 --orig_master_port=3306 --new_master_host=192.168.216.51

Test ip failover:

Shut down MySQL on the master and check whether the VIP has been transferred to the new master.
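To see whether the VIP moved, look for 192.168.216.100 in `ip addr` output on the candidate master. The check below runs against a canned sample line (illustrative; on a real node replace SAMPLE with the output of `ip addr show eth0`) matching the eth0:1 label the failover script configures:

```shell
VIP=192.168.216.100
# Sample output line as it would appear once the script has run
# /sbin/ifconfig eth0:1 192.168.216.100/24 (illustrative text):
SAMPLE='inet 192.168.216.100/24 scope global secondary eth0:1'
if echo "$SAMPLE" | grep -q "$VIP"; then
  echo "VIP present on this host"
else
  echo "VIP not here - check the other nodes"
fi
```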
