MySQL mha+keepalived solution demo (i)


This article demonstrates the whole process of building an MHA cluster environment.

One. Lab Environment Description

MHA installation overview
The MHA Node package contains three scripts, all of which depend on Perl modules:
save_binary_logs: saves and copies the binary logs of the (crashed) master server
apply_diff_relay_logs: identifies differential relay log events and applies them to the other slave servers
purge_relay_logs: purges relay log files
MHA Node has to be installed on every MySQL server as well as on the MHA management server. The MHA Manager module internally depends on the MHA Node module; the manager connects to the managed MySQL servers over SSH and executes the MHA Node scripts there. MHA Node depends on Perl's DBD::mysql module (a quick way to verify that module is sketched below).
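
Since the node scripts are Perl programs, a quick way to confirm that DBD::mysql is available on a server is a one-line Perl check; a minimal sketch (the module is normally provided by the perl-DBD-MySQL package installed later in 2.5.2):

# prints the module version if DBD::mysql can be loaded, otherwise shows the missing-module error
perl -e 'use DBD::mysql; print "DBD::mysql $DBD::mysql::VERSION found\n";'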

1.1. Introduction to the Environment

1.1.1 VMware virtual machines, CentOS 6.5 x86_64 minimal installation, MySQL version 5.7.21
1.1.2 The virtual machines' SSH port was originally the default 22
1.1.3 iptables is shut down on all virtual machines
1.1.4 SELinux is shut down on all virtual machines
1.1.5 The server clocks are all kept consistent: ntpdate 0.asia.pool.ntp.org
1.1.6 All 3 machines now use SSH port 12570 (example commands for 1.1.3–1.1.5 follow this list)
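
One way to apply the settings in 1.1.3–1.1.5 on CentOS 6 (a sketch of the usual commands, to be run on all three machines):

/etc/init.d/iptables stop && chkconfig iptables off                  # 1.1.3 stop the firewall and keep it off after reboot
setenforce 0                                                         # 1.1.4 disable SELinux for the running system
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config         # ...and keep it disabled after reboot
ntpdate 0.asia.pool.ntp.org                                          # 1.1.5 synchronize the clocks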

1.2 The test uses 3 machines; the specific deployment is as follows:
Role                    IP (internal)     Hostname     Services deployed on the node                                Purpose
Master                  192.168.2.128     server02     mha4mysql-node-0.56-0.el6, keepalived                        writes
slave (backup master)   192.168.2.129     server03     mha4mysql-node-0.56-0.el6, keepalived                        reads
Slave + Monitor         192.168.2.130     server04     mha4mysql-node-0.56-0.el6, mha4mysql-manager-0.56-0.el6      reads + data backup
1.3 Description:

server03 and server04 are slaves of server02; the replication setup is demonstrated briefly below. The master provides the write service, the candidate master (actually a slave, hostname server03) provides a read service, and the other slave also provides a read service. Once the master goes down, one of the slaves is promoted to the new master and the remaining slave is pointed at the new master.
server04 also runs the Monitor (MHA Manager), whose main job is to watch whether the master of the replication cluster is healthy; once the master dies, MHA Manager automatically completes the switchover from the failed master to a slave.

Two. MHA Specific Deployment Process

2.1 Master-slave replication setup

2.1.1 Master (main library) configuration file (the 192.168.2.128 machine)
[root@server02 ~]# cat /etc/my.cnf

[client]
port = 3306
socket = /tmp/mysql.sock

[mysql]
no-auto-rehash

[mysqld]
user = mysql
port = 3306
socket = /tmp/mysql.sock
basedir = /usr/local/mysql
datadir = /data/mysql/data
back_log = 2000
open_files_limit = 1024
max_connections = 800
max_connect_errors = 3000
max_allowed_packet = 33554432
external-locking = FALSE
character_set_server = utf8

# binlog
log-slave-updates = 1
binlog_format = ROW
log-bin = /data/mysql/logs/bin-log/mysql-bin
expire_logs_days = 5
sync_binlog = 1
binlog_cache_size = 1M
max_binlog_cache_size = 1M
max_binlog_size = 2M
#replicate-ignore-db = mysql
skip-name-resolve
slave-skip-errors = 1032,1062
##skip_slave_start = 1
skip_slave_start = 0
##read_only = 1
##relay_log_purge = 0

### relay log
relay-log = /data/mysql/logs/relay-log/relay-bin
relay-log-info-file = /data/mysql/relay-log.info

### slow_log
slow_query_log = 1
slow-query-log-file = /data/mysql/logs/mysql-slow.log
log-error = /data/mysql/logs/error.log

## GTID
server_id = 1103
##gtid_mode = on
##enforce_gtid_consistency = on
event_scheduler = ON
innodb_autoinc_lock_mode = 1
innodb_buffer_pool_size = 10737418
innodb_data_file_path = ibdata1:10M:autoextend
innodb_data_home_dir = /data/mysql/data
innodb_log_group_home_dir = /data/mysql/data
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000
innodb_log_buffer_size = 8388608
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 50
innodb_open_files = 512
innodb_read_io_threads = 8
innodb_thread_concurrency = 20
innodb_write_io_threads = 8
innodb_lock_wait_timeout = 10
innodb_buffer_pool_load_at_startup = 1
innodb_buffer_pool_dump_at_shutdown = 1
key_buffer_size = 3221225472
innodb_log_file_size = 1G
local_infile = 1
log_bin_trust_function_creators = 1
log_output = FILE
long_query_time = 1
myisam_sort_buffer_size = 33554432
join_buffer_size = 8388608
tmp_table_size = 33554432
net_buffer_length = 8192
performance_schema = 1
performance_schema_max_table_instances = 200
query_cache_size = 0
query_cache_type = 0
read_buffer_size = 20971520
read_rnd_buffer_size = 16M
max_heap_table_size = 33554432
bulk_insert_buffer_size = 134217728
secure-file-priv = /data/mysql/tmp
sort_buffer_size = 2097152
table_open_cache = 128
thread_cache_size = 50
tmpdir = /data/mysql/tmp
slave-load-tmpdir = /data/mysql/tmp
wait_timeout = 120
transaction_isolation = READ-COMMITTED
innodb_flush_log_at_trx_commit = 0
lower_case_table_names = 1

[mysqldump]
quick
max_allowed_packet = 64M

[mysqld_safe]
log-error = /data/mysql/logs/error.log
pid-file = /data/mysql/mysqld.pid
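
The datadir, binlog, relay log, slow log and tmp paths referenced above must exist and be owned by the mysql user before MySQL is started. Assuming they have not been created yet, something like the following would be needed (a sketch, not part of the original article):

mkdir -p /data/mysql/data /data/mysql/tmp /data/mysql/logs/bin-log /data/mysql/logs/relay-log
chown -R mysql:mysql /data/mysql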

2.1.2 Slave configuration files
192.168.2.129 machine: when the master fails, this slave will be promoted to the new master.
Its my.cnf differs from that of the 192.168.2.128 master only in the following points.
On the 129 machine, enable these 2 parameters:
read_only=1
relay_log_purge=0
The value of server-id must not be the same as that of the 128 master.

192.168.2.130 machine
Provides MySQL read-only service and data backups, and also runs the Monitor.

Its my.cnf differs from that of the 192.168.2.128 master only in the following points.
On the 130 machine, enable the same 2 parameters:
read_only=1
relay_log_purge=0
At the same time, the value of server-id must differ from both the 128 master and the 129 slave (a sketch of the per-slave differences follows).
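
Put another way, each slave's my.cnf is identical to the master's except for its server_id (the values below are arbitrary examples; they only have to be unique in the cluster), while read_only and relay_log_purge are switched on dynamically as shown in 2.1.3 and 2.3:

# server03 (192.168.2.129)
server_id = 1104        # example value, must differ from the master's 1103
# server04 (192.168.2.130)
server_id = 1105        # example value, must differ from 1103 and 1104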

2.1.3 Detailed parameter description
1) Every slave should set relay_log_purge=0, otherwise you receive a warning message from MHA. The parameter is modified dynamically with mysql -e 'set global relay_log_purge=0', because any slave may be promoted to master at any time.

2) Every slave should set read_only=1, otherwise you receive a warning message from MHA. The parameter is modified dynamically with mysql -e 'set global read_only=1', again because any slave may be promoted to master at any time.

2.1.4 Setting up master-slave replication
1) Create the replication account and take a backup on the master
Operations on the master (server02):
mysqldump -uroot -p123456 --master-data=2 --single-transaction -R --triggers -A > /root/all.sql
Here --master-data=2 records the master's binlog file and position at backup time, --single-transaction takes a consistent snapshot, -R backs up stored procedures and functions, --triggers backs up triggers, and -A backs up all databases.
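
Because --master-data=2 writes the master's coordinates into the dump as a commented-out CHANGE MASTER statement, they can be read back from all.sql before the slaves are configured, for example:

head -n 50 /root/all.sql | grep -i "CHANGE MASTER TO"
# example output only; the actual file name and position will differ:
# -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=154;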

grant replication slave on *.* to 'repmha'@'192.168.2.%' identified by '123456';
flush privileges;
scp -rp /root/all.sql root@192.168.2.129:/root
scp -rp /root/all.sql root@192.168.2.130:/root

Operations on the slaves:
On the two slaves, server03 and server04:

mysql -uroot -p123456 < /root/all.sql
CHANGE MASTER TO MASTER_HOST='192.168.2.128', MASTER_PORT=3306, MASTER_USER='repmha', MASTER_PASSWORD='123456';
start slave;
show slave status\G
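
Note that the CHANGE MASTER TO above carries no binlog coordinates; since GTID is commented out in this my.cnf, you would normally add the file and position recorded in all.sql. A sketch with placeholder values (take the real ones from the grep example above):

CHANGE MASTER TO
  MASTER_HOST='192.168.2.128',
  MASTER_PORT=3306,
  MASTER_USER='repmha',
  MASTER_PASSWORD='123456',
  MASTER_LOG_FILE='mysql-bin.000003',   -- placeholder, use the value from all.sql
  MASTER_LOG_POS=154;                   -- placeholder, use the value from all.sql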

The master-slave replication environment is now configured.

2.2 Creating a MySQL management user monitor

2.2.1 Removing redundant users

mysql> drop user root@'localhost';
mysql> select user,host from mysql.user;
+------+--------------+
| user | host         |
+------+--------------+
| root | 127.0.0.1    |
| rep  | 192.168.10.% |
+------+--------------+
3 rows in set (0.00 sec)

2.2.2 Creating the monitor management account

mysql> grant all on *.* to monitor@'192.168.10.%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)

In an earlier MHA test, the replication check failed precisely because of this authorization issue: the database must not be managed only through a localhost account; the management account has to be granted access from the intranet segment the manager connects from, so that it can check synchronization.

2.3 Setting read_only on the two slave servers

(The slaves provide the read service. The reason read_only is not written into the configuration file is that any slave may be promoted to master at any time.)

192.168.2.129 [root ~]$ mysql -uroot -p123456 -e "set global read_only=1"
192.168.2.130 [root ~]$ mysql -uroot -p123456 -e "set global read_only=1"
2.4 Configuring passwordless SSH login

(Key-based login is commonly used in production; it is best not to disable password login entirely, because if it is disabled problems may occur.)

Operations on server02 192.168.2.128 (Master):
192.168.2.128 [root ~]$ ssh-keygen -t rsa
192.168.2.128 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.129
192.168.2.128 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.130
Operations on server03 192.168.2.129 (slave):
192.168.2.129 [root ~]$ ssh-keygen -t rsa
192.168.2.129 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.128
192.168.2.129 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.130
Operations on server04 192.168.2.130 (slave+Monitor):
192.168.2.130 [root ~]$ ssh-keygen -t rsa
192.168.2.130 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.128
192.168.2.130 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.129
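
Section 1.1.6 states that the machines listen on SSH port 12570 rather than 22, and the ssh-copy-id shipped with CentOS 6 has no port option. If the non-default port is really in use, a sketch of distributing the key manually and verifying the login (shown for server02 → server03; repeat for the other directions):

cat /root/.ssh/id_rsa.pub | ssh -p 12570 root@192.168.2.129 "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys"
ssh -p 12570 root@192.168.2.129 hostname      # should print server03 without asking for a password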
2.5 The MHA installation process

2.5.1 Creating a directory on the server to store the packages

mkdir /data/tools -p
cd /data/tools/
rz mha4mysql-node-0.56-0.el6.noarch.rpm
rz mha4mysql-manager-0.56-0.el6.noarch.rpm

Install MHA Node on all nodes (server02 is shown below as the example; remember to do the same on server03 and server04). Both MHA Node and MHA Manager can be downloaded from the official site:
https://code.google.com/p/mysql-master-ha/wiki/Downloads?tm=2 (accessing it may require a proxy)

2.5.2 Installing MHA Node
Install on all servers

yum install -y perl-DBD-MySQL
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

2.5.3 Installing MHA Manager
Install mha4mysql-manager and mha4mysql-node on the 192.168.2.130 machine:

yum -y install perl-Parallel-ForkManager perl-Log-Dispatch perl-Time-HiRes perl-Mail-Sender perl-Mail-Sendmail perl-DBD-MySQL perl-Config-Tiny perl-Config-IniFiles
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

The following demonstrates installing mha4mysql-node and mha4mysql-manager on the 192.168.2.130 machine as the example.
A. Install with RPM package

[root@server04 ~]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
After a successful installation the following files are present, which indicates the installation succeeded:
[root@server04 ~]# rpm -ql mha4mysql-node-0.56-0.el6.noarch
/usr/bin/apply_diff_relay_logs
/usr/bin/filter_mysqlbinlog
/usr/bin/purge_relay_logs
/usr/bin/save_binary_logs
/usr/share/man/man1/apply_diff_relay_logs.1.gz
/usr/share/man/man1/filter_mysqlbinlog.1.gz
/usr/share/man/man1/purge_relay_logs.1.gz
/usr/share/man/man1/save_binary_logs.1.gz
/usr/share/perl5/vendor_perl/MHA/BinlogHeaderParser.pm
/usr/share/perl5/vendor_perl/MHA/BinlogManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFindManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinder.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderElp.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderXid.pm
/usr/share/perl5/vendor_perl/MHA/NodeConst.pm
/usr/share/perl5/vendor_perl/MHA/NodeUtil.pm
/usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm

Node script description (these tools are normally invoked by the MHA Manager scripts and need no manual operation):
save_binary_logs        // saves and copies the master's binary logs
apply_diff_relay_logs   // identifies differential relay log events and applies the differences to the other slaves
filter_mysqlbinlog      // removes unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs        // purges relay logs (without blocking the SQL thread)
These are the four commands created by installing mha4mysql-node-0.56-0.el6.noarch.rpm; a typical scheduled use of purge_relay_logs is sketched below.
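
Because relay_log_purge=0 is set on the slaves (section 2.1.3), relay logs are no longer removed automatically, so purge_relay_logs is usually run from cron on each slave. A sketch of such an invocation, using this demo's credentials (the cron schedule and log path are only examples):

purge_relay_logs --user=root --password=123456 --host=127.0.0.1 --port=3306 --disable_relay_log_purge --workdir=/data/mysql/tmp
# example crontab entry: run every 4 hours and append the output to a log file
0 */4 * * * /usr/bin/purge_relay_logs --user=root --password=123456 --disable_relay_log_purge --workdir=/data/mysql/tmp >> /data/mysql/logs/purge_relay_logs.log 2>&1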

[root@server04 ~]# rpm -ql mha4mysql-manager-0.56-0.el6.noarch
package mha4mysql-manager-0.56-0.el6.noarch is not installed
[root@server04 ~]# rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm
Preparing...                ########################################### [100%]
   1:mha4mysql-manager      ########################################### [100%]

After a successful installation the following files are listed, which indicates the installation succeeded.

[root@server04 ~]# rpm -ql mha4mysql-manager-0.56-0.el6.noarch
/usr/bin/masterha_check_repl
/usr/bin/masterha_check_ssh
/usr/bin/masterha_check_status
/usr/bin/masterha_conf_host
/usr/bin/masterha_manager
/usr/bin/masterha_master_monitor
/usr/bin/masterha_master_switch
/usr/bin/masterha_secondary_check
/usr/bin/masterha_stop
/usr/share/man/man1/masterha_check_repl.1.gz
/usr/share/man/man1/masterha_check_ssh.1.gz
/usr/share/man/man1/masterha_check_status.1.gz
/usr/share/man/man1/masterha_conf_host.1.gz
/usr/share/man/man1/masterha_manager.1.gz
/usr/share/man/man1/masterha_master_monitor.1.gz
/usr/share/man/man1/masterha_master_switch.1.gz
/usr/share/man/man1/masterha_secondary_check.1.gz
/usr/share/man/man1/masterha_stop.1.gz
/usr/share/perl5/vendor_perl/MHA/Config.pm
/usr/share/perl5/vendor_perl/MHA/DBHelper.pm
/usr/share/perl5/vendor_perl/MHA/FileStatus.pm
/usr/share/perl5/vendor_perl/MHA/HealthCheck.pm
/usr/share/perl5/vendor_perl/MHA/ManagerAdmin.pm
/usr/share/perl5/vendor_perl/MHA/ManagerAdminWrapper.pm
/usr/share/perl5/vendor_perl/MHA/ManagerConst.pm
/usr/share/perl5/vendor_perl/MHA/ManagerUtil.pm
/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm
/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm
/usr/share/perl5/vendor_perl/MHA/MasterRotate.pm
/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm
/usr/share/perl5/vendor_perl/MHA/Server.pm
/usr/share/perl5/vendor_perl/MHA/ServerManager.pm
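
The masterha_* tools above are what the manager node actually runs. Once a manager configuration file has been written (that step is not covered in this part; the path /etc/masterha/app1.cnf below is only an assumed example), the environment is usually validated like this:

masterha_check_ssh --conf=/etc/masterha/app1.cnf      # verifies the passwordless SSH mesh built in 2.4
masterha_check_repl --conf=/etc/masterha/app1.cnf     # verifies the replication topology and the monitor account
masterha_check_status --conf=/etc/masterha/app1.cnf   # reports whether masterha_manager is currently running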

Copy the following scripts to the /usr/local/bin directory:

/usr/bin/masterha_check_repl
/usr/bin/masterha_check_ssh
/usr/bin/masterha_check_status
/usr/bin/masterha_conf_host
/usr/bin/masterha_manager
/usr/bin/masterha_master_monitor
/usr/bin/masterha_master_switch
/usr/bin/masterha_secondary_check
/usr/bin/masterha_stop

Also copy the related scripts (from the unpacked package) to the /usr/local/bin directory. This step is not strictly required, because these sample scripts are incomplete and have to be adapted by hand; that is something the software's authors deliberately left to the user. Be aware that if you enable one of the corresponding parameters in the MHA configuration without modifying the matching script here, MHA will throw errors, which is a nasty pitfall.

master_ip_failover        // manages the VIP during automatic failover; not required. Since we use keepalived, we can write our own script to manage the VIP: for example, monitor MySQL and simply stop keepalived when MySQL fails, so that the VIP drifts automatically
master_ip_online_change   // manages the VIP during an online switchover; not required, a simple shell script can also do it
power_manager             // shuts down the failed host after a failover; not required
send_report               // sends an alert report after a failover; not required, a simple shell script can do it
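
As described for master_ip_failover above, when keepalived owns the VIP a very small watchdog is enough: if the local MySQL stops responding, stop keepalived and the VIP drifts to the backup. A minimal sketch (the credentials and polling interval are examples, not part of the original article):

#!/bin/bash
# hypothetical watchdog: stop keepalived when the local MySQL no longer answers, so the VIP fails over
while true; do
    if ! mysqladmin -uroot -p123456 ping >/dev/null 2>&1; then
        /etc/init.d/keepalived stop
        break
    fi
    sleep 2
done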

At this point, the entire MHA cluster environment has been built.
