MySQL + MHA + keepalived + VIP installation and configuration (2) -- MHA configuration
I. Overview
1. MHA introduction
MHA (Master High Availability) is a software package for automating master failover and slave promotion. It is based on standard MySQL replication (asynchronous or semi-synchronous).
MHA consists of two parts: MHA Manager (the management node) and MHA Node (the data node).
MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on one of the slaves. MHA Manager monitors the nodes of the cluster; when the master fails, it automatically promotes the slave with the latest data to be the new master and then redirects all the other slaves to it. The entire failover process is transparent to applications.
MHA Node runs on each MySQL server (master and slaves). It speeds up failover with scripts that parse and purge the binary and relay logs.
2. how MHA works
- Save binary log events from the crashed master (if its binary log is still accessible).
- Identify the slave with the latest updates.
- Apply the differential relay logs to the other slaves.
- Apply the binary log events saved from the master.
- Promote one slave to be the new master.
- Point the other slaves at the new master and resume replication.
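The "identify the slave with the latest updates" step above can be sketched in shell. This is only an illustrative sketch, not MHA's actual code; the host/file/position triples below are made-up example data (in the real tool they come from SHOW SLAVE STATUS on each slave).

```shell
# Pick the slave whose replication position is most advanced.
# Input lines: "<host> <master_log_file> <read_master_log_pos>"
latest_slave() {
  # sort by binlog file name, then numerically by position;
  # the last line is the most up-to-date slave
  sort -k2,2 -k3,3n | tail -n 1 | awk '{print $1}'
}

# Example with made-up data: the second slave has read further.
printf '%s\n' \
  '192.168.1.232 mysql-bin.000031 112' \
  '192.168.1.233 mysql-bin.000031 420' | latest_slave
# prints 192.168.1.233
```

MHA applies the differential relay logs from this most-advanced slave to the others before promotion, which is why failover loses as little data as possible.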
3. The MHA toolkit:
(1) Manager tools:
- masterha_check_ssh: checks the SSH configuration for MHA.
- masterha_check_repl: checks the MySQL replication setup.
- masterha_manager: starts MHA Manager.
- masterha_check_status: checks the current MHA running status.
- masterha_master_monitor: monitors whether the master is down.
- masterha_master_switch: controls failover (automatic or manual).
- masterha_conf_host: adds or removes a server entry in the configuration.
(2) Node tools (these are normally invoked by MHA Manager scripts and require no manual operation):
- save_binary_logs: saves and copies the master's binary logs.
- apply_diff_relay_logs: identifies differential relay log events and applies them to the other slaves.
- filter_mysqlbinlog: removes unnecessary ROLLBACK events (MHA no longer uses this tool).
- purge_relay_logs: purges relay logs (without blocking the SQL thread).
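purge_relay_logs is not run automatically by the manager, so the MHA documentation recommends scheduling it periodically on each node. A minimal crontab sketch; the MySQL account, password, and schedule below are assumptions for this setup, not values from the original article:

```
# /etc/cron.d/purge_relay_logs -- run on each MySQL node, staggered per host
# (account/password/schedule are assumptions for this setup)
0 * * * * root /usr/bin/purge_relay_logs --user=root --password=sunney \
  --disable_relay_log_purge --workdir=/masterha/app1 >> /var/log/purge_relay_logs.log 2>&1
```

--disable_relay_log_purge keeps relay_log_purge=0 on the slave so that relay logs survive for MHA's differential recovery and are only removed by this scheduled job.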
II. Host deployment
Manager machine: 192.168.1.201 -- installs mha4mysql-manager-0.54-0.el6.noarch.rpm
Master machine: 192.168.1.231 -- installs mha4mysql-node-0.54-0.el6.noarch.rpm
Slave1 machine: 192.168.1.232 (candidate master) -- installs mha4mysql-node-0.54-0.el6.noarch.rpm
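For readability, the three machines can also be given names in /etc/hosts on every host. The hostnames below are assumptions for illustration, not part of the original setup:

```
192.168.1.201   mha-manager
192.168.1.231   mysql-master
192.168.1.232   mysql-slave1
```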
III. Use ssh-keygen to set up mutual passwordless login among the three hosts
[manager--201]
1. Generate the key pair:
shell> ssh-keygen -t rsa -b 2048   // press Enter at every prompt
shell> scp ~/.ssh/id_rsa.pub root@192.168.1.231:/root/.ssh/   // copy to host 231
shell> scp ~/.ssh/id_rsa.pub root@192.168.1.232:/root/.ssh/   // copy to host 232
2. On hosts 231 and 232, in /root/.ssh, run cat id_rsa.pub >> authorized_keys to import the public key.
3. Test passwordless login from 201 to 231 and 232:
ssh 192.168.1.231
ssh 192.168.1.232
[node--231, 232] Repeat the above steps, then use ssh to verify that passwordless login works between every pair of hosts.
Note: if passwordless login does not work between any two hosts, the subsequent steps will fail.
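The note above can be checked with a small script. This is a sketch, not part of the original article; it assumes root SSH is used, and verify_hosts takes the probe command as a parameter so the reporting logic can be exercised even without the real hosts.

```shell
# Probe one host: BatchMode=yes makes ssh fail instead of
# prompting for a password, so a password prompt counts as FAIL.
check_ssh() {
  ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$1" true 2>/dev/null
}

# Run a probe against every host and report OK/FAIL;
# returns non-zero if any host fails.
verify_hosts() {
  probe="$1"; shift
  failed=0
  for h in "$@"; do
    if "$probe" "$h"; then
      echo "OK   $h"
    else
      echo "FAIL $h"
      failed=1
    fi
  done
  return $failed
}

# On the manager, for example:
# verify_hosts check_ssh 192.168.1.231 192.168.1.232
```

Run the same check from every host against the other two before moving on.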
IV. Install the mha4mysql-node and mha4mysql-manager packages
1. Install the mha4mysql-manager package on the manager
[manager--201]
shell> yum install perl
shell> yum install cpan
shell> rpm -ivh mha4mysql-manager-0.53-0.el6.noarch.rpm
error:
perl(Config::Tiny) is needed by mha4mysql-manager-0.53-0.noarch
perl(Log::Dispatch) is needed by mha4mysql-manager-0.53-0.noarch
perl(Log::Dispatch::File) is needed by mha4mysql-manager-0.53-0.noarch
perl(Log::Dispatch::Screen) is needed by mha4mysql-manager-0.53-0.noarch
perl(Parallel::ForkManager) is needed by mha4mysql-manager-0.53-0.noarch
perl(Time::HiRes) is needed by mha4mysql-manager-0.53-0.noarch
The solution is as follows:
shell> wget ftp://ftp.muug.mb.ca/mirror/centos/5.10/os/x86_64/CentOS/perl-5.8.8-41.el5.x86_64.rpm
shell> wget ftp://ftp.muug.mb.ca/mirror/centos/6.5/os/x86_64/Packages/compat-db43-4.3.29-15.el6.x86_64.rpm
shell> wget http://downloads.naulinux.ru/pub/NauLinux/6x/i386/sites/School/RPMS/perl-Log-Dispatch-2.27-1.el6.noarch.rpm
shell> wget http://dl.fedoraproject.org/pub/epel/6/i386/perl-Parallel-ForkManager-0.7.9-1.el6.noarch.rpm
shell> wget http://dl.fedoraproject.org/pub/epel/6/i386/perl-Mail-Sender-0.8.16-3.el6.noarch.rpm
shell> wget http://dl.fedoraproject.org/pub/epel/6/i386/perl-Mail-Sendmail-0.79-12.el6.noarch.rpm
shell> wget http://mirror.centos.org/centos/6/os/x86_64/Packages/perl-Time-HiRes-1.9721-136.el6.x86_64.rpm
shell> rpm -ivh perl-Parallel-ForkManager-0.7.9-1.el6.noarch.rpm perl-Log-Dispatch-2.27-1.el6.noarch.rpm perl-Mail-Sender-0.8.16-3.el6.noarch.rpm perl-Mail-Sendmail-0.79-12.el6.noarch.rpm perl-Time-HiRes-1.9721-136.el6.x86_64.rpm
Reinstall
shell> rpm -ivh mha4mysql-manager-0.53-0.el6.noarch.rpm
2. Install the mha4mysql-node package on the nodes (231, 232)
shell> wget http://mirror.centos.org/centos/6/os/x86_64/Packages/perl-DBD-MySQL-4.013-3.el6.x86_64.rpm
shell> rpm -ivh perl-DBD-MySQL-4.013-3.el6.x86_64.rpm
shell> wget http://mysql-master-ha.googlecode.com/files/mha4mysql-node-0.54-0.el6.noarch.rpm
shell> rpm -ivh mha4mysql-node-0.54-0.el6.noarch.rpm
An error may be reported during installation (I did not record the exact message) because of missing dependency packages. Install them to resolve it:
shell> yum install perl-MIME-Lite
shell> yum install perl-Params-Validate
V. MHA configuration
1. configure the MHA file on the manager
shell> mkdir -p /masterha/app1   // create the working directory
shell> mkdir /etc/masterha       // create the config directory
shell> vi /etc/masterha/app1.cnf // create the configuration file
[server default]
user = root                      # MySQL account MHA uses to manage MySQL
password = sunney                # password of that account
manager_workdir = /masterha/app1
manager_log = /masterha/app1/manager.log
remote_workdir = /masterha/app1
ssh_user = root                  # account used for passwordless SSH login
repl_user = sunney               # MySQL replication account
repl_password = sunney           # password of the replication account
ping_interval = 1                # ping interval (seconds) used to check whether the master is alive
[server1]
hostname = 192.168.1.231
# ssh_port = 9999
master_binlog_dir = /var/lib/mysql   # binlog directory; it differs depending on how MySQL was installed
candidate_master = 1                 # after the master goes down, promote this host to the new master
[server2]
hostname = 192.168.1.232
# ssh_port = 9999
master_binlog_dir = /var/lib/mysql
candidate_master = 1
2. Use the masterha_check_ssh tool to verify that trusted SSH login works
[manager:201]shell> masterha_check_ssh --conf=/etc/masterha/app1.cnf
Note: this step succeeds only if the mutual passwordless SSH login from step III was set up correctly.
Wed Apr 23 22:10:01 2014 - [debug] ok.
Wed Apr 23 22:10:01 2014 - [info] All SSH connection tests passed successfully.
Successful!
3. Use the masterha_check_repl tool to verify that MySQL replication works
[manager:201]shell> masterha_check_repl --conf=/etc/masterha/app1.cnf
Note: this step depends on the master-slave replication set up in the previous article and on the user accounts configured in the MHA file.
Wed Apr 23 22:10:56 2014 - [info] Checking replication health on 192.168.1.232..
Wed Apr 23 22:10:56 2014 - [info] ok.
Wed Apr 23 22:10:56 2014 - [warning] master_ip_failover_script is not defined.
Wed Apr 23 22:10:56 2014 - [warning] shutdown_script is not defined.
Wed Apr 23 22:10:56 2014 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
Successful.
4. start MHA manager and monitor log files
[manager--201]
shell> nohup masterha_manager --conf=/etc/masterha/app1.cnf > /tmp/mha_manager.log 2>&1 &
shell> tail -f /masterha/app1/manager.log   // best run in a new window
Result:
Thu Apr 24 04:41:03 2014 - [info] Slaves settings check done.
Thu Apr 24 04:41:03 2014 - [info] 192.168.1.231 (current master)
 +--192.168.1.232
Thu Apr 24 04:41:03 2014 - [warning] master_ip_failover_script is not defined.
Thu Apr 24 04:41:03 2014 - [warning] shutdown_script is not defined.
Thu Apr 24 04:41:03 2014 - [info] Set master ping interval 1 seconds.
Thu Apr 24 04:41:03 2014 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Thu Apr 24 04:41:03 2014 - [info] Starting ping health check on 192.168.1.231(192.168.1.231:3306)..
Thu Apr 24 04:41:03 2014 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
5. Test whether automatic failover occurs after the master (231) goes down
[master--231]shell>service mysql stop
[manager--201]
shell> tail -f /masterha/app1/manager.log
----- the log shows the following -----
----- Failover Report -----
app1: MySQL Master failover 192.168.1.231 to 192.168.1.232 succeeded
Master 192.168.1.231 is down!
Check MHA Manager logs at localhost.localdomain:/masterha/app1/manager.log for details.
Started automated (non-interactive) failover.
The latest slave 192.168.1.232(192.168.1.232:3306) has all relay logs for recovery.
Selected 192.168.1.232 as a new master.
192.168.1.232: OK: Applying all logs succeeded.
Generating relay diff files from the latest slave succeeded.
192.168.1.232: Resetting slave info succeeded.
Master failover to 192.168.1.232(192.168.1.232:3306) completed successfully.
6. after failover, use commands to restore the original master
(1) On the old master:
shell> service mysql start   // start the database
shell> mysql -usunney -psunney
mysql> reset master;
mysql> change master to master_host='192.168.1.232', master_port=3306, master_user='sunney', master_password='sunney', master_log_file='mysql-bin.000031', master_log_pos=112;
mysql> start slave;   // temporarily make the old master a slave
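The master_log_file and master_log_pos values in the change master statement must be read from the new master (232). Below is a small sketch of extracting them from the output of SHOW MASTER STATUS\G and printing the matching statement; the function is fed captured text here so it can run anywhere, and the host and account are this article's example values:

```shell
# Read "SHOW MASTER STATUS\G" output on stdin and print the
# change master statement for the old master to execute.
build_change_master() {
  # pick the File: and Position: fields out of the \G output
  set -- $(awk '/File:/ {f=$2} /Position:/ {p=$2} END {print f, p}')
  printf "change master to master_host='192.168.1.232', master_port=3306, master_user='sunney', master_password='sunney', master_log_file='%s', master_log_pos=%s;\n" "$1" "$2"
}

# Normally: mysql -usunney -psunney -e 'SHOW MASTER STATUS\G' | build_change_master
# Example with captured output:
printf '%s\n' \
  '             File: mysql-bin.000031' \
  '         Position: 112' | build_change_master
```

Reading the position this way avoids hard-coding a stale binlog file/offset into the recovery step.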
(2) then on the manager node:
shell> masterha_master_switch --master_state=alive --conf=/etc/masterha/app1.cnf
Enter YES at every prompt during the process.
At this point the MySQL master and slave have been switched back. You can verify it by inserting data into a table on the new master (231) and checking that the data is synchronized to the corresponding table on the new slave (232).
VI. MHA has now been configured and tested. However, when a failover happens the application does not automatically switch the IP address it uses to connect to the database. How can we achieve that? See the next article on keepalived + VIP installation and configuration.