In an earlier article, "Analysis of MySQL high-availability architecture", I surveyed the common HA options, and many readers then asked, in the comments and through other channels, when the in-depth configuration articles would follow. So I will be writing up each of those architectures in turn, starting today with the overall planning and configuration of the MHA architecture.
A Brief Introduction
MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability; it handles failover and master promotion in a MySQL replication environment. During failover, MHA can automatically complete the switch to a new master within 0-30 seconds, and it preserves data consistency to the greatest extent possible, achieving high availability in the true sense.
MHA consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can run on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and repoints all other slaves at it. The entire failover process is transparent to the application.
At present MHA mainly supports a one-master, multi-slave architecture. To build MHA, the replication cluster must contain at least three database servers, i.e. one master and two slaves: one server acts as the master, one as the standby master, and one as a pure slave.
Deployment planning
System Information
[root@db250 ~]# cat /etc/redhat-release
CentOS release 6.8 (Final)
[root@db250 ~]# uname -r
2.6.32-642.el6.x86_64
Database
mysql57-community-release-el6-9.noarch.rpm
Environmental planning
db250  cluster management node      192.168.4.250
db251  master                       192.168.4.251
db253  slave (standby master)       192.168.4.253
db254  slave                        192.168.4.254
The architecture diagram is as follows
Configuring the hosts file and passwordless SSH login
Configuring the Hosts File
cat >> /etc/hosts <<EOF
192.168.4.250 db250
192.168.4.251 db251
192.168.4.253 db253
192.168.4.254 db254
EOF
After the configuration is complete, verify it:
[root@db250 ~]# tail -4 /etc/hosts
192.168.4.250 db250
192.168.4.251 db251
192.168.4.253 db253
192.168.4.254 db254
Configure SSH Key Login
ssh-keygen -t rsa
ssh-copy-id 192.168.4.251
ssh-copy-id 192.168.4.253
ssh-copy-id 192.168.4.254
-------------- The configuration above must be done on all servers ----------------
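With more nodes, the repeated ssh-copy-id calls can be scripted as a loop. This is only a sketch: it prints the commands it would run (hostnames and the root account come from the planning above); remove the echo to actually execute them.

```shell
# Print (not run) the key-distribution command for every database node.
# Remove "echo" to execute for real; assumes SSH logins use the root account.
for host in 192.168.4.251 192.168.4.253 192.168.4.254; do
  echo ssh-copy-id "root@$host"
done
```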
Installing MySQL and configuring master-Slave synchronization
Installing the MySQL database service
rpm -ivh mysql57-community-release-el6-9.noarch.rpm
yum install mysql-community-server -y
/etc/init.d/mysqld start
Look up the generated default password:
[root@db251 ~]# grep 'temporary password' /var/log/mysqld.log
2017-08-30T08:08:05.956867Z 1 [Note] A temporary password is generated for root@localhost: f;2YK51qXlo/
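If you script the installation, the temporary password can be pulled straight out of that log line. A small sketch, using the log line above as canned input; in practice, feed it the matching line from /var/log/mysqld.log:

```shell
# Extract the temporary password from a mysqld.log "[Note]" line.
# "line" is the canned sample from above; in practice use:
#   line=$(grep 'temporary password' /var/log/mysqld.log)
line='2017-08-30T08:08:05.956867Z 1 [Note] A temporary password is generated for root@localhost: f;2YK51qXlo/'
pass=${line##*: }   # strip everything up to and including the last ": "
echo "$pass"
```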
Change the default password, to simplify later operations:
mysql> alter user 'root'@'localhost' identified by '123456';
-------------- The configuration above must be done on all three database servers ----------------
Configuring Master-Slave synchronization
Both db251 and db253 need the binlog enabled in their configuration files, since either can serve as master.

db251 (master):

server_id=251
log_bin=master-bin
log_bin_index=master-bin.index

db253 (slave, standby master):

server_id=253
log_bin=slave-bin
log_bin_index=slave-bin.index
log_slave_updates
relay_log_index=slave-relay-bin.index
relay_log=slave-relay-bin

db254 (slave):

server_id=254
relay_log_index=slave-relay-bin.index
relay_log=slave-relay-bin
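One easy mistake in these files is a duplicated server_id: replication misbehaves when two nodes share one. A hypothetical helper to flag duplicates among the ids collected from each node's config:

```shell
# check_ids prints any server_id that appears more than once
# in the list of ids passed as arguments.
check_ids() {
  printf '%s\n' "$@" | sort | uniq -d
}

check_ids 251 253 254   # prints nothing: all ids are unique
check_ids 251 253 253   # prints "253", the duplicated id
```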
Synchronize the master's data to the slaves; refer to the previous article for the procedure.
Check the master-slave synchronization status
mysql -uroot -p123456 -e "show slave status\G" | egrep "Slave_IO_Running|Slave_SQL_Running"
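Rather than eyeballing the two flags, they can be counted so the check works non-interactively across all slaves. A sketch with canned SHOW SLAVE STATUS output; in practice pipe in a real call such as mysql -uroot -p123456 -e 'show slave status\G' on each slave:

```shell
# repl_ok counts how many replication threads report "Yes" in the
# SHOW SLAVE STATUS output read from stdin; healthy replication prints 2.
repl_ok() {
  grep -cE 'Slave_(IO|SQL)_Running: Yes'
}

# Canned sample standing in for real "show slave status\G" output.
sample='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
printf '%s\n' "$sample" | repl_ok   # prints 2 when both threads run
```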
On all database nodes, create the MHA management user and grant its permissions
mysql> grant all privileges on *.* to 'mha'@'192.168.4.%' identified by '123456';
At this point the entire base environment has been built.
Installing and configuring MHA
https://downloads.mariadb.com/MHA/
The entire MHA service installation process is as follows:
Install dependent packages
yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch -y
yum install perl-Parallel-ForkManager perl-Time-HiRes -y
Install on the management node
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum localinstall -y mha4mysql-manager-0.52-0.noarch.rpm
yum localinstall -y mha4mysql-node-0.52-0.noarch.rpm
Install on the other nodes
yum localinstall -y mha4mysql-node-0.52-0.noarch.rpm
The installation may report errors; resolve them according to the prompts.
The configuration process is as follows:
mkdir /mha
[root@db250 ~]# vim /etc/masterha_default.cnf

[server default]
# the MySQL management user authorized earlier
user=mha
password=123456
manager_workdir=/mha
manager_log=/mha/manager.log
remote_workdir=/mha
# account used for passwordless SSH login
ssh_user=root
# MySQL replication account, used to sync binary logs between master and slaves
repl_user=slave
repl_password=123456
# ping interval, used to check whether the master is alive
ping_interval=1

[server1]
hostname=db251
master_binlog_dir=/var/lib/mysql
# when the master goes down, prefer this candidate to take over
candidate_master=1

[server2]
hostname=db253
master_binlog_dir=/var/lib/mysql/
candidate_master=1

[server3]
hostname=db254
master_binlog_dir=/var/lib/mysql/
# this node can never become master
no_master=1
masterha_check_ssh        check the SSH configuration for MHA
masterha_check_repl       check the MySQL replication status
masterha_manager          start MHA
masterha_check_status     check the current MHA running state
masterha_master_monitor   check whether the master is down
masterha_master_switch    control failover (automatic or manual)
masterha_conf_host        add or remove configured server information
Validation actions
[root@db250 ~]# masterha_check_ssh --conf=/etc/masterha_default.cnf

Check master-slave replication:

masterha_check_repl --conf=/etc/masterha_default.cnf
Start the MHA service
[root@db250 ~]# masterha_manager --conf=/etc/masterha_default.cnf &
[1] 2035
[root@db250 ~]# Fri Sep 1 01:52:25 2017 - [info] Reading default configuratoins from /etc/masterha_default.cnf..
Fri Sep 1 01:52:25 2017 - [info] Reading application default configurations from /etc/masterha_default.cnf..
Fri Sep 1 01:52:25 2017 - [info] Reading server configurations from /etc/masterha_default.cnf..
Viewing the boot log
And with that, MHA high availability is finally up and running!
Next, simulate a failure by shutting down MySQL on db251:
[root@db251 ~]# /etc/init.d/mysqld stop
Stopping mysqld:                                           [  OK  ]
View Logs
The log shows that the master has been switched over to db253; next, check the master-slave synchronization status.
Once the failed original master has been repaired, it must be manually reconfigured as a slave of the new master. MHA records the required CHANGE MASTER statement in its log:
[root@db250 ~]# grep -i "All other slaves should start" /mha/manager.log
Fri Sep 1 01:59:05 2017 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='db253 or 192.168.4.253', MASTER_PORT=3306, MASTER_LOG_FILE='slave-bin.000003', MASTER_LOG_POS=760546, MASTER_USER='slave', MASTER_PASSWORD='xxx';
So at this point, on the repaired db251, I run that CHANGE MASTER statement (substituting the real replication password for the masked 'xxx'), turning the old master into a slave of the new one.
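Rather than copying the statement by hand, it can be cut out of manager.log with sed. A sketch using a canned line modeled on the excerpt above; in practice replace the sample with the output of grep 'All other slaves should start' /mha/manager.log (note the real log prints both the hostname and IP for MASTER_HOST, so pick one, and substitute the real replication password):

```shell
# Extract everything after "Statement should be: " from an MHA log line.
# "log" is a canned sample modeled on the log excerpt above.
log="Fri Sep 1 01:59:05 2017 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.4.253', MASTER_PORT=3306, MASTER_LOG_FILE='slave-bin.000003', MASTER_LOG_POS=760546, MASTER_USER='slave', MASTER_PASSWORD='xxx';"
stmt=$(printf '%s\n' "$log" | sed -n 's/.*Statement should be: //p')
echo "$stmt"
```

The extracted statement can then be fed to mysql on the repaired node once the password is filled in.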
Next, write some data on the new master to test that it synchronizes to the slaves.
Mysql MHA High-availability cluster architecture