Troubleshooting the installation and use of MHA 0.56

Tags: failover

If you are not familiar with MHA, it is recommended to read the blog posts at the following links first.


http://os.51cto.com/art/201307/401702.htm // This post explains very clearly the replication setup that has to be in place before building MHA.

http://blog.itpub.net/26230597/viewspace-1570798/ // The installation steps in this post are more detailed, and it also points out that besides automatic failover, MHA can be used for manual failover.

http://www.dataguru.cn/thread-457284-1-1.html //This post explains MHA's configuration parameters and other information very clearly.

http://467754239.blog.51cto.com/4878013/1695175 // This post walks through the entire MHA switchover process. It also describes MHA's virtual-IP failover script, which as I understand it means keepalived should not be needed. However, the procedure for bringing the old master back into MHA as a new slave seems to have some problems.


If you already know something about MHA, you can read on directly.

Environment: CentOS 6.5

MySQL 5.7 (yum installation)

MHA 0.56

master: 192.168.21.10

backup: 192.168.21.11

slave: 192.168.21.12

Installing MHA with yum

1. Install the EPEL repository.

2. Download the RPM packages described on the official page: https://code.google.com/p/mysql-master-ha/ (installation is sketched below).
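
For reference, a hedged sketch of the installation commands, assuming the 0.56 noarch RPMs built for el6 (the package file names and download location may differ from the author's):

# install the EPEL repository, then the MHA packages
yum install -y epel-release
# on every MySQL host:
yum localinstall -y mha4mysql-node-0.56-0.el6.noarch.rpm
# on the manager host only:
yum localinstall -y mha4mysql-manager-0.56-0.el6.noarch.rpm
# yum localinstall resolves the required Perl dependencies (perl-DBD-MySQL,
# perl-Config-Tiny, perl-Log-Dispatch, perl-Parallel-ForkManager) from EPEL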


The MHA experiments below follow this blog post:

http://blog.csdn.net/lichangzai/article/details/50470771


Here are some of the problems I encountered during the experiment, all of which occurred while running

masterha_check_repl --conf=/etc/masterha/app1/app1.cnf

Solutions to some of them are hard to find online, so I am sharing them with everyone here.
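
For orientation, here is a minimal sketch of what such an MHA application configuration file might look like. It is not the author's actual file: the working directories, account names and passwords are illustrative placeholders, and only candidate_master on 192.168.21.11 is inferred from the check output below.

# /etc/mha/app1.conf -- illustrative sketch only
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
user=mha                     # MySQL administrative account used by MHA (assumed name)
password=mha_password
repl_user=repl               # replication account (assumed name; see Problem 3)
repl_password=repl_password
ssh_user=root
ping_interval=3

[server1]
hostname=192.168.21.10

[server2]
hostname=192.168.21.11
candidate_master=1           # "Primary candidate for the new Master" in the logs below

[server3]
hostname=192.168.21.12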

Problem 1

[email protected] ~]# masterha_check_repl --conf=/etc/mha/app1.conf
Fri Jul 09:08:54 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Jul 09:08:54 - [info] Reading application default configuration from /etc/mha/app1.conf.
Fri Jul 09:08:54 - [info] Reading server configuration from /etc/mha/app1.conf.
Fri Jul 09:08:54 - [info] MHA::MasterMonitor version 0.56.
Fri Jul 09:08:54 - [info] GTID failover mode = 0
Fri Jul 09:08:54 - [info] Dead Servers:
Fri Jul 09:08:54 - [info] Alive Servers:
Fri Jul 09:08:54 - [info] 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:08:54 - [info] 192.168.21.11 (192.168.21.11:3306)
Fri Jul 09:08:54 - [info] 192.168.21.12 (192.168.21.12:3306)
Fri Jul 09:08:54 - [info] Alive Slaves:
Fri Jul 09:08:54 - [info] 192.168.21.11 (192.168.21.11:3306) version=5.7.16 (oldest major version between slaves) log-bin:disabled
Fri Jul 09:08:54 - [info] Replicating from 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:08:54 - [info] Primary candidate for the new Master (candidate_master is set)
Fri Jul 09:08:54 - [info] 192.168.21.12 (192.168.21.12:3306) version=5.7.16 (oldest major version between slaves) log-bin:disabled
Fri Jul 09:08:54 - [info] Replicating from 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:08:54 - [info] Current Alive Master: 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:08:54 - [info] Checking slave configurations.
Fri Jul 09:08:54 - [info] read_only=1 is not set on slave 192.168.21.11 (192.168.21.11:3306).
Fri Jul 09:08:54 - [warning] relay_log_purge=0 is not set on slave 192.168.21.11 (192.168.21.11:3306).
Fri Jul 09:08:54 - [warning] log-bin is not set on slave 192.168.21.11 (192.168.21.11:3306). This host cannot be a master.
Fri Jul 09:08:54 - [info] read_only=1 is not set on slave 192.168.21.12 (192.168.21.12:3306).
Fri Jul 09:08:54 - [warning] relay_log_purge=0 is not set on slave 192.168.21.12 (192.168.21.12:3306).
Fri Jul 09:08:54 - [warning] log-bin is not set on slave 192.168.21.12 (192.168.21.12:3306). This host cannot be a master.
Fri Jul 09:08:54 - [info] Checking replication filtering settings.
Fri Jul 09:08:54 - [info] binlog_do_db= , binlog_ignore_db= mysql
Fri Jul 09:08:54 - [info] Replication filtering check OK.
Fri Jul 09:08:54 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln361] None of slaves can be master. Check failover configuration file or log-bin settings in my.cnf
Fri Jul 09:08:54 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/bin/masterha_check_repl line 48
Fri Jul 09:08:54 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Fri Jul 09:08:54 - [info] Got exit code 1 (Not master dead).


Workaround:

Simply enable the binary log on the two slaves. (This took me a whole day; I could not find a solution online and finally worked it out through my own understanding and testing. Proud of that!) The specific configuration is not posted here.
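
Since the exact configuration is not posted, here is a minimal sketch of the slave-side my.cnf changes that satisfy this check; the server-id and log file name are illustrative placeholders:

# my.cnf fragment on each slave (illustrative values)
[mysqld]
server-id       = 11           # must be unique per host
log-bin         = mysql-bin    # enable the binary log so the host can be promoted to master
read_only       = 1            # silences the read_only message from the check
relay_log_purge = 0            # keep relay logs, as MHA recommends

log-bin is not a dynamic variable, so restart mysqld after the change and re-run masterha_check_repl.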


Problem 2

[email protected] ~]# masterha_check_repl --conf=/etc/mha/app1.conf
Fri Jul 09:26:48 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Jul 09:26:48 - [info] Reading application default configuration from /etc/mha/app1.conf.
Fri Jul 09:26:48 - [info] Reading server configuration from /etc/mha/app1.conf.
Fri Jul 09:26:48 - [info] MHA::MasterMonitor version 0.56.
Fri Jul 09:26:48 - [info] GTID failover mode = 0
Fri Jul 09:26:48 - [info] Dead Servers:
Fri Jul 09:26:48 - [info] Alive Servers:
Fri Jul 09:26:48 - [info] 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:26:48 - [info] 192.168.21.11 (192.168.21.11:3306)
Fri Jul 09:26:48 - [info] 192.168.21.12 (192.168.21.12:3306)
Fri Jul 09:26:48 - [info] Alive Slaves:
Fri Jul 09:26:48 - [info] 192.168.21.11 (192.168.21.11:3306) version=5.7.16-log (oldest major version between slaves) log-bin:enabled
Fri Jul 09:26:48 - [info] Replicating from 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:26:48 - [info] Primary candidate for the new Master (candidate_master is set)
Fri Jul 09:26:48 - [info] 192.168.21.12 (192.168.21.12:3306) version=5.7.16-log (oldest major version between slaves) log-bin:enabled
Fri Jul 09:26:48 - [info] Replicating from 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:26:48 - [info] Current Alive Master: 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:26:48 - [info] Checking slave configurations.
Fri Jul 09:26:48 - [info] read_only=1 is not set on slave 192.168.21.11 (192.168.21.11:3306).
Fri Jul 09:26:48 - [warning] relay_log_purge=0 is not set on slave 192.168.21.11 (192.168.21.11:3306).
Fri Jul 09:26:48 - [info] read_only=1 is not set on slave 192.168.21.12 (192.168.21.12:3306).
Fri Jul 09:26:48 - [warning] relay_log_purge=0 is not set on slave 192.168.21.12 (192.168.21.12:3306).
Fri Jul 09:26:48 - [info] Checking replication filtering settings.
Fri Jul 09:26:48 - [info] binlog_do_db= , binlog_ignore_db= mysql
Fri Jul 09:26:48 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln443] Binlog filtering check failed on 192.168.21.11 (192.168.21.11:3306)! All log-bin enabled servers must have same binlog filtering rules (same binlog-do-db and binlog-ignore-db). Check SHOW MASTER STATUS output and set my.cnf correctly.


Workaround:

I had enabled replication filtering on the master, so the same filter rules must also be set on the slaves. Note that after modifying the configuration file a reload is not enough; MySQL has to be restarted.
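
In concrete terms, assuming the master ignores the mysql schema in its binlog (as the binlog_ignore_db= mysql line above suggests), every log-bin enabled host needs the same filter rules, for example:

# my.cnf fragment that must be identical on every log-bin enabled host (illustrative)
[mysqld]
log-bin          = mysql-bin
binlog-ignore-db = mysql

binlog-do-db and binlog-ignore-db are not dynamic options, so restart mysqld after editing, then compare the Binlog_Do_DB and Binlog_Ignore_DB columns of SHOW MASTER STATUS on each host.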


Problem 3

[email protected] ~]# masterha_check_repl --conf=/etc/mha/app1.conf
Fri Jul 09:30:04 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Jul 09:30:04 - [info] Reading application default configuration from /etc/mha/app1.conf.
Fri Jul 09:30:04 - [info] Reading server configuration from /etc/mha/app1.conf.
Fri Jul 09:30:04 - [info] MHA::MasterMonitor version 0.56.
Fri Jul 09:30:04 - [info] GTID failover mode = 0
Fri Jul 09:30:04 - [info] Dead Servers:
Fri Jul 09:30:04 - [info] Alive Servers:
Fri Jul 09:30:04 - [info] 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:30:04 - [info] 192.168.21.11 (192.168.21.11:3306)
Fri Jul 09:30:04 - [info] 192.168.21.12 (192.168.21.12:3306)
Fri Jul 09:30:04 - [info] Alive Slaves:
Fri Jul 09:30:04 - [info] 192.168.21.11 (192.168.21.11:3306) version=5.7.16-log (oldest major version between slaves) log-bin:enabled
Fri Jul 09:30:04 - [info] Replicating from 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:30:04 - [info] Primary candidate for the new Master (candidate_master is set)
Fri Jul 09:30:04 - [info] 192.168.21.12 (192.168.21.12:3306) version=5.7.16-log (oldest major version between slaves) log-bin:enabled
Fri Jul 09:30:04 - [info] Replicating from 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:30:04 - [info] Current Alive Master: 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:30:04 - [info] Checking slave configurations.
Fri Jul 09:30:04 - [info] read_only=1 is not set on slave 192.168.21.11 (192.168.21.11:3306).
Fri Jul 09:30:04 - [warning] relay_log_purge=0 is not set on slave 192.168.21.11 (192.168.21.11:3306).
Fri Jul 09:30:04 - [info] read_only=1 is not set on slave 192.168.21.12 (192.168.21.12:3306).
Fri Jul 09:30:04 - [warning] relay_log_purge=0 is not set on slave 192.168.21.12 (192.168.21.12:3306).
Fri Jul 09:30:04 - [info] Checking replication filtering settings.
Fri Jul 09:30:04 - [info] binlog_do_db= , binlog_ignore_db= mysql
Fri Jul 09:30:04 - [info] Replication filtering check OK.
Fri Jul 09:30:04 - [error][/usr/share/perl5/vendor_perl/MHA/Server.pm, ln393] 192.168.21.11 (192.168.21.11:3306): User repl does not exist or does not have REPLICATION SLAVE privilege! Other slaves can not start replication from this host.
Fri Jul 09:30:04 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/share/perl5/vendor_perl/MHA/ServerManager.pm line 1403
Fri Jul 09:30:04 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Fri Jul 09:30:04 - [info] Got exit code 1 (Not master dead).


Workaround:

A user with replication privileges, and a user with administrative privileges that MHA itself logs in with, must be created on all nodes. Many blogs on the web are not clear about these two points.
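
A minimal sketch of the two accounts, reusing the repl name from the log above; the mha account name, the passwords and the subnet are placeholders rather than the author's actual values:

-- run on every node (MySQL 5.7)
CREATE USER 'repl'@'192.168.21.%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.21.%';

-- account that MHA itself logs in with (the user/password entries in the MHA config)
CREATE USER 'mha'@'192.168.21.%' IDENTIFIED BY 'mha_password';
GRANT ALL PRIVILEGES ON *.* TO 'mha'@'192.168.21.%';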


Problem 4

[email protected] ~]# masterha_check_repl --conf=/etc/mha/app1.conf
Fri Jul 09:42:46 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Jul 09:42:46 - [info] Reading application default configuration from /etc/mha/app1.conf.
Fri Jul 09:42:46 - [info] Reading server configuration from /etc/mha/app1.conf.
Fri Jul 09:42:46 - [info] MHA::MasterMonitor version 0.56.
Fri Jul 09:42:46 - [info] GTID failover mode = 0
Fri Jul 09:42:46 - [info] Dead Servers:
Fri Jul 09:42:46 - [info] Alive Servers:
Fri Jul 09:42:46 - [info] 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:42:46 - [info] 192.168.21.11 (192.168.21.11:3306)
Fri Jul 09:42:46 - [info] 192.168.21.12 (192.168.21.12:3306)
Fri Jul 09:42:46 - [info] Alive Slaves:
Fri Jul 09:42:46 - [info] 192.168.21.11 (192.168.21.11:3306) version=5.7.16-log (oldest major version between slaves) log-bin:enabled
Fri Jul 09:42:46 - [info] Replicating from 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:42:46 - [info] Primary candidate for the new Master (candidate_master is set)
Fri Jul 09:42:46 - [info] 192.168.21.12 (192.168.21.12:3306) version=5.7.16-log (oldest major version between slaves) log-bin:enabled
Fri Jul 09:42:46 - [info] Replicating from 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:42:46 - [info] Current Alive Master: 192.168.21.10 (192.168.21.10:3306)
Fri Jul 09:42:46 - [info] Checking slave configurations.
Fri Jul 09:42:46 - [info] read_only=1 is not set on slave 192.168.21.11 (192.168.21.11:3306).
Fri Jul 09:42:46 - [warning] relay_log_purge=0 is not set on slave 192.168.21.11 (192.168.21.11:3306).
Fri Jul 09:42:46 - [info] read_only=1 is not set on slave 192.168.21.12 (192.168.21.12:3306).
Fri Jul 09:42:46 - [warning] relay_log_purge=0 is not set on slave 192.168.21.12 (192.168.21.12:3306).
Fri Jul 09:42:46 - [info] Checking replication filtering settings.
Fri Jul 09:42:46 - [info] binlog_do_db= , binlog_ignore_db= mysql
Fri Jul 09:42:46 - [info] Replication filtering check OK.
Fri Jul 09:42:47 - [info] GTID (with auto-pos) is not supported
Fri Jul 09:42:47 - [info] Starting SSH connection tests.
Fri Jul 09:42:48 - [info] All SSH connection tests passed successfully.
Fri Jul 09:42:48 - [info] Checking MHA Node version:
Fri Jul 09:42:49 - [info] Version check OK.
Fri Jul 09:42:49 - [info] Checking SSH publickey authentication settings on the current master.
Fri Jul 09:42:49 - [info] HealthCheck: SSH to 192.168.21.10 is reachable.
Fri Jul 09:42:49 - [info] Master MHA Node version is 0.56.
Fri Jul 09:42:49 - [info] Checking recovery script configurations on 192.168.21.10 (192.168.21.10:3306):
Fri Jul 09:42:49 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/logs/mysqllog/mysql-bin --output_file=/var/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000001
Fri Jul 09:42:49 - [info] Connecting to [email protected] (192.168.21.10:22).
Failed to save binary log: Binlog not found from /logs/mysqllog/mysql-bin! If you got this error at MHA Manager, please set "master_binlog_dir=/path/to/binlog_directory_of_the_master" correctly in the MHA Manager's configuration file and try again.
 at /usr/bin/save_binary_logs line 123
 eval {...} called at /usr/bin/save_binary_logs line 70
 main::main() called at /usr/bin/save_binary_logs line 66
Fri Jul 09:42:49 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln158] Binlog setting check failed!
Fri Jul 09:42:49 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln405] Master configuration failed.
Fri Jul 09:42:49 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/bin/masterha_check_repl line 48
Fri Jul 09:42:49 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Fri Jul 09:42:49 - [info] Got exit code 1 (Not master dead).


Workaround:

If you define a custom path for the master's binary log files, you must set master_binlog_dir= in the MHA configuration file to the directory where those binary log files are located.
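
For example, if the master writes its binary logs to a non-default location, the MHA configuration has to name the directory that actually contains the mysql-bin.* files (the path below is illustrative):

# on the master, in my.cnf
log_bin = /logs/mysqllog/mysql-bin

# in the MHA configuration file, [server default] section
master_binlog_dir=/logs/mysqllog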


Summary: With the MHA version used in the blog post I followed, the binary log and relay log should be enabled on all database nodes, the grants should be identical, and the configuration files should be basically the same on every node. Under these premises, the MHA installation and setup should not run into too many problems. I am not yet certain that this is the definitive solution, though.


This article is from the "bit accumulation" blog. Please keep this source when reposting: http://16769017.blog.51cto.com/700711/1878451

