MySQL Cluster MHA architecture deployment


MHA consists of two components. The Manager component acts as a supervisor for the cluster. The Node component is installed on every database node, one of which serves as the master.

When the master fails, MHA must switch the master role over automatically, so administrative privileges are essential, and SSH key-based authentication is required between all of the nodes.

Most of the MHA configuration lives on the Manager.
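A minimal way to set up that key-based trust, assuming root is used for SSH (matching the `ssh_user=root` setting used later in the MHA configuration):

```shell
# Run on each of the three nodes so every node can log in to every other one.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa        # generate a key pair without a passphrase
for host in 192.168.6.12 192.168.6.92 192.168.6.91; do
    ssh-copy-id root@$host                       # append our public key to the remote authorized_keys
done
```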




Prepare three machines and install mysql-5.7 on all of them (192.168.6.12, 192.168.6.92, 192.168.6.91)



Create the mysql user:

useradd -s /sbin/nologin -m mysql

Create the data directory:

mkdir -p /data/mysql
chown -R mysql.mysql /data/mysql

Initialize the database (with 5.7, note the temporary password printed during initialization):

/usr/local/mysql/bin/mysqld --no-defaults --initialize --user=mysql --basedir=/usr/local/mysql/ --datadir=/data/mysql/
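MySQL 5.7 generates a random temporary root password during `--initialize`; if it scrolled off the screen, it can be recovered from the error log (here /var/log/mysql.log, per the my.cnf used in this guide):

```shell
# The temporary password is recorded in the error log during initialization.
grep 'temporary password' /var/log/mysql.log
```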

Configure startup on boot:

cp support-files/mysql.server /etc/init.d/mysqld
chmod +x /etc/init.d/mysqld
chkconfig --add mysqld
chkconfig --list
/etc/init.d/mysqld start

Change the initial password:

/usr/local/mysql/bin/mysqladmin -uroot -p password '123456'

Create symbolic links:

ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog

First step: modify the /etc/my.cnf configuration file (server-id must be different on each of the three nodes)

Master node

[mysqld]
basedir=/usr/local/mysql
datadir=/data/mysql
socket=/tmp/mysql.sock
log-error=/var/log/mysql.log
log-bin=/data/mysql/mysql-bin
binlog_format=row                      # use row-based binary logging
secure-file-priv=/tmp                  # restrict mysqld import/export to the /tmp directory
server-id=12
# GTID
gtid-mode=on                           # enable GTID mode; otherwise this is a normal replication setup
enforce-gtid-consistency=true          # enforce GTID consistency
log-slave-updates=1                    # log updates applied by the slave SQL thread (required in 5.6)
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

Slave node

[mysqld]server-id=13basedir=/usr/local/mysqldatadir=/data/mysqlsocket=/tmp/mysql.socklog-error=/var/log/ Mysql.loglog-bin=/data/mysql/mysql-binbinlog_format=rowsecure-file-priv=/tmp Sql_mode=NO_ENGINE_SUBSTITUTION, Strict_trans_tablesslave_parallel_workers = 16slave_parallel_type= logical_clock#gtidgtid-mode= Onenforce-gtid-consistency=truelog-slave-updates=1

Turn on semi-synchronous replication

Install the plugin on the master node:

INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';

# Semi-synchronous replication (add to the my.cnf config file)

rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000

Install the plugin on the slave nodes (semi-sync takes effect when replication starts):

INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';

# Semi-synchronous replication (add to the my.cnf config file)

rpl_semi_sync_slave_enabled=1

Note: after modifying the configuration, restart the database on all three nodes.
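After the restart, semi-synchronous replication can be verified from the mysql client using the standard semi-sync variables and status counters:

```sql
-- On the master: plugin enabled, and number of semi-sync slaves connected
SHOW VARIABLES LIKE 'rpl_semi_sync%';
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_clients';

-- On a slave: semi-sync active for this replication connection
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_slave_status';
```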

Step two: create a replication user

Master node:

GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY '123456';

Turn on replication from the slaves:

CHANGE MASTER TO master_host='192.168.6.12', master_user='repl', master_password='123456', master_auto_position=1;
START SLAVE;
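Once START SLAVE has run, replication health can be confirmed on each slave; both replication threads should report Yes:

```sql
SHOW SLAVE STATUS\G
-- Check the following fields in the output:
--   Slave_IO_Running: Yes
--   Slave_SQL_Running: Yes
```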


Step three: turn off relay log auto-deletion

By default in MySQL master-slave replication, a slave's relay logs are deleted automatically once the SQL thread has finished applying them. In an MHA scenario, however, recovering a lagging slave can depend on the relay logs of other slaves, so automatic deletion should be disabled and the relay logs cleaned up periodically instead. When purging relay logs that have grown large, be mindful of the resource overhead, which can cause replication latency. MHA handles this with the purge_relay_logs script combined with a cron job.

SET GLOBAL relay_log_purge = 0;    -- temporary; recommended on all three nodes
relay_log_purge = 0                -- permanent, set in the configuration file; recommended on all three nodes
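A possible cron entry for the periodic cleanup, using the purge_relay_logs script that ships with the MHA node package (the mha user and password are the ones created in step four; paths are illustrative):

```
# /etc/cron.d fragment: purge applied relay logs daily at 04:00, keeping
# the SQL thread's automatic purge disabled.
0 4 * * * root /usr/bin/purge_relay_logs --user=mha --password=123456 --disable_relay_log_purge --workdir=/tmp >> /var/log/mha/purge_relay_logs.log 2>&1
```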


Fourth step: install the Node package on each node

Install the MHA Node package:

The node software depends on the perl-DBD-MySQL package; install both on all three nodes:

yum -y install perl-DBD-MySQL
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

Create the MHA administrative user in the master database:

GRANT ALL PRIVILEGES ON *.* TO 'mha'@'%' IDENTIFIED BY '123456';

Fifth step: deploy the Manager node (on the slave 192.168.6.91)

yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

Create the required directories (one working directory per replication group; MHA can manage multiple sets of master-slave replication):

mkdir -p /etc/mha
mkdir -p /var/log/mha/app1

Create the configuration file:

vim /etc/mha/app1.cnf

[server default]                   # global settings shared by all servers
manager_log=/var/log/mha/app1/manager.log
manager_workdir=/var/log/mha/app1
master_ip_failover_script=/usr/local/bin/master_ip_failover
user=mha
password=123456
ping_interval=2
repl_password=123456
repl_user=repl
ssh_user=root

[server1]
candidate_master=1
check_repl_delay=0
hostname=192.168.6.12
port=3306

[server2]
candidate_master=1                 # make this host a promotion candidate regardless of priority; useful when the hosts differ in capacity
check_repl_delay=0                 # ignore replication delay when picking the candidate master, so it is promoted even if it lags
hostname=192.168.6.92
port=3306

[server3]
hostname=192.168.6.91
port=3306

Check SSH mutual trust:

masterha_check_ssh --conf=/etc/mha/app1.cnf

Test whether replication is normal:

masterha_check_repl --conf=/etc/mha/app1.cnf

Sixth step: MHA's built-in high availability (VIP failover)

Edit the failover script and set the VIP variables:

vim /usr/local/bin/master_ip_failover

my $vip = '192.168.6.166/24';
my $key = '0';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip  = "/sbin/ifconfig eth0:$key down";

Bind the VIP on the master node:

ifconfig eth0:0 192.168.6.166/24


Start MHA:

nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

Check Status:

masterha_check_status --conf=/etc/mha/app1.cnf
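To stop the manager cleanly (for example, before planned maintenance), MHA provides a matching stop command:

```shell
masterha_stop --conf=/etc/mha/app1.cnf
```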




