MySQL-MMM Solution
This article briefly introduces the MMM solution and walks through a simple verification.
I. MySQL-MMM Solution
1. Introduction to the MMM Solution
MMM (Multi-Master Replication Manager for MySQL) is a suite of scripts for monitoring, failover, and management of MySQL multi-master replication. It also load-balances read requests, providing a workable read/write-splitting architecture for MySQL.
2. Introduction to the MMM Suite
The main functionality of the MMM suite is implemented by the following three scripts:
mmm_mond: the monitoring daemon; it performs the health checks and decides on role switching
mmm_agentd: the agent daemon running on each database server; it reports status to the monitor and executes switching actions
mmm_control: a command-line utility for administering the mmm_mond process
II. Verification Environment
1. Operating System
CentOS-6.7-x86_64
2. MySQL version
MySQL 5.6.36: https://cdn.mysql.com//Downloads/MySQL-5.6/mysql-5.6.36.tar.gz
3. Topology

| Role | Host | IP | Attributes | VIP | Description |
| --- | --- | --- | --- | --- | --- |
| Master1 | master | 10.11.4.196 | Write | 10.11.4.191 | Writes go through the VIP |
| Master2 | backup | 10.11.4.197 | Write, Read | 10.11.4.192, 10.11.4.193 | While master1 is healthy, master2 only serves reads; if master1 fails, master2 takes over the write VIP |
| Slave | slave | 10.11.4.198 | Read | | |
| Monitor | mmm | 10.11.4.199 | Monitor | | |
III. Configuration
1. Operating System: configure /etc/hosts

```shell
# All four servers use the same configuration
[root@master ~]# vim /etc/hosts
10.11.4.196 master
10.11.4.197 backup
10.11.4.198 slave
10.11.4.199 mmm
```
2. Create Accounts

| User | Password | Privileges | Description |
| --- | --- | --- | --- |
| mmm_monitor | mmm_monitor | REPLICATION CLIENT | Monitors database status, including master/slave latency |
| mmm_agent | mmm_agent | SUPER, REPLICATION CLIENT, PROCESS | Toggles the read_only status of the write server and repoints replicas to the new master |
| repl | repl | REPLICATION SLAVE | Master/slave replication user (pre-configured) |
```shell
# Accounts only need to be created on the three db servers; the monitoring server does not need them
[root@master ~]# mysql -uroot -p
Enter password:
mysql> GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'10.11.4.%' IDENTIFIED BY 'mmm_monitor';
mysql> GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'10.11.4.%' IDENTIFIED BY 'mmm_agent';
mysql> FLUSH PRIVILEGES;
```
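Since the same two GRANT statements have to be run on all three db servers, it can help to generate them once and pipe them into mysql on each host. The sketch below is a hypothetical helper (not part of the mmm suite); the subnet and passwords are the ones used in this article:

```shell
# gen_mmm_grants: print the GRANT statements from the account table above,
# so they can be piped into mysql on each db server
gen_mmm_grants() {
    local net="10.11.4.%"
    cat <<SQL
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'${net}' IDENTIFIED BY 'mmm_monitor';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'${net}' IDENTIFIED BY 'mmm_agent';
FLUSH PRIVILEGES;
SQL
}

gen_mmm_grants
```

Usage on each db server would be something like `gen_mmm_grants | mysql -uroot -p`.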
3. Install MMM
1) The three database servers

```shell
# The default yum repositories do not include mmm, so install epel first
# The database servers only need mysql-mmm-agent
[root@master ~]# wget ...        # fetch epel-release-6-8.noarch.rpm
[root@master ~]# rpm -ivh epel-release-6-8.noarch.rpm
[root@master ~]# yum -y install mysql-mmm-agent
```
2) The monitoring server

```shell
# The monitoring server can install all mmm components, although only
# mysql-mmm-monitor is actually started
[root@mmm ~]# wget ...        # fetch epel-release-6-8.noarch.rpm
[root@mmm ~]# rpm -ivh epel-release-6-8.noarch.rpm
[root@mmm ~]# yum -y install mysql-mmm*
```
3) MMM file paths

| Path | Description |
| --- | --- |
| /usr/libexec/mysql-mmm/ | Scripts |
| /usr/share/perl5/vendor_perl/MMM/ | MMM Perl modules |
| /usr/sbin/ | Executable commands |
| /etc/init.d/ | Service start scripts |
| /etc/mysql-mmm/ | Configuration files |
| /var/log/mysql-mmm/ | Log files |
4. Configuration Files
1) mmm_common.conf

```shell
# mmm_common.conf must be present on both the monitoring server and the
# db servers, with identical content; below is the configuration for this plan
[root@mmm ~]# vim /etc/mysql-mmm/mmm_common.conf
# the active master role; all db servers must run with read_only enabled,
# and the monitoring agent automatically clears read_only on the writer
active_master_role writer
```
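Only the active_master_role line is shown above. For reference, a full mmm_common.conf for this topology would look roughly like the sketch below, following the standard MMM host/role section format; the interface name and the exact host/role layout are assumptions derived from the topology table:

```
active_master_role  writer

<host default>
    cluster_interface    eth0            # assumption: adjust to the actual NIC
    pid_path             /var/run/mysql-mmm/mmm_agentd.pid
    bin_path             /usr/libexec/mysql-mmm/
    replication_user     repl
    replication_password repl
    agent_user           mmm_agent
    agent_password       mmm_agent
</host>

<host master>
    ip      10.11.4.196
    mode    master
    peer    backup
</host>

<host backup>
    ip      10.11.4.197
    mode    master
    peer    master
</host>

<host slave>
    ip      10.11.4.198
    mode    slave
</host>

<role writer>
    hosts   master, backup
    ips     10.11.4.191
    mode    exclusive
</role>

<role reader>
    hosts   backup, slave
    ips     10.11.4.192, 10.11.4.193
    mode    balanced
</role>
```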
2) mmm_mon.conf

```shell
# mmm_mon.conf is configured on the monitoring server only
[root@mmm ~]# vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip          127.0.0.1
    pid_path    /var/run/mysql-mmm/mmm_mond.pid
    bin_path    /usr/libexec/mysql-mmm
    # cluster status file, the raw data behind "mmm_control show"
    status_path /var/lib/mysql-mmm/mmm_mond.status
    # IP addresses of the monitored db servers
    ping_ips    10.11.4.196, 10.11.4.197, 10.11.4.198
    # default is 60s: after a failure is repaired, a node that was offline
    # for longer stays in AWAITING_RECOVERY and must be brought back with
    # set_online manually; a node offline for less (and not in the flapping
    # state) comes back online automatically
    auto_set_online 20

    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing. See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    # kill_host_bin /usr/libexec/mysql-mmm/monitor/kill_host
</monitor>
```
3) mmm_agent.conf

```shell
# mmm_agent.conf is configured on the three db servers; the monitoring
# server does not need it
[root@master ~]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
# the parameter after "this" is the current server's host name;
# change it on each of the three db servers to match its own host name
this master
```
5. Start the Daemons
1) Monitoring/Management Server

```shell
# the monitor daemon is not set to start on boot by default;
# whenever a configuration file changes, the daemon on the monitoring
# server or the db server must be restarted
[root@mmm ~]# chkconfig --level 35 mysql-mmm-monitor on
[root@mmm ~]# service mysql-mmm-monitor start
```
2) Agents

```shell
# the same operations are run on all three db servers
[root@master ~]# chkconfig --level 35 mysql-mmm-agent on
[root@master ~]# service mysql-mmm-agent start
```
6. iptables
```shell
# mmm_agentd listens on local tcp port 9989 for the monitoring/management
# server, so the firewall must allow it; the same rule is added on all
# three db servers
[root@master ~]# vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9989 -j ACCEPT
[root@master ~]# service iptables restart
```
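Port 9989 is the agent port used throughout this article; if a different port were configured for mmm_agentd, the firewall rule would have to follow. A trivial sketch (hypothetical helper, not part of the mmm suite) that prints the rule for a given port so it can be reviewed before being appended to /etc/sysconfig/iptables:

```shell
# mmm_agent_fw_rule: print the iptables ACCEPT rule for the mmm agent port
# (9989 is the default used in this article)
mmm_agent_fw_rule() {
    local port="${1:-9989}"
    printf -- "-A INPUT -m state --state NEW -m tcp -p tcp --dport %s -j ACCEPT\n" "$port"
}

mmm_agent_fw_rule
```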
IV. Verification
1. Cluster Node Status

```shell
# check whether the ping, mysql, and replication threads of each node are normal
[root@mmm ~]# mmm_control checks all
```
2. Cluster Role Status

```shell
# each role has obtained its VIP
[root@mmm ~]# mmm_control show
```
3. Take a Node Offline/Online
1) Offline

```shell
# the backup node's status becomes "ADMIN_OFFLINE";
# the slave node picks up both read VIPs
[root@mmm ~]# mmm_control set_offline backup
[root@mmm ~]# mmm_control show
```
2) Online

```shell
# before the backup node returns to "ONLINE", it passes briefly through
# the "REPLICATION_FAIL" state
[root@mmm ~]# mmm_control set_online backup
[root@mmm ~]# mmm_control show
```
4. Switch the Writer Role
1) Check the slave's current master

```shell
# the current master is the master node
[root@slave ~]# mysql -uroot -pxxxxxx -e 'show slave status\G'
```
2) Switch the writer

```shell
# the writer role is moved to the backup node
[root@mmm ~]# mmm_control move_role writer backup
[root@mmm ~]# mmm_control show
```
3) Check whether the slave's master has switched

```shell
# the current master is now the backup node
[root@slave ~]# mysql -uroot -pxxxxxx -e 'show slave status\G'
```
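To confirm the switch from a script rather than by eye, the Master_Host field can be pulled out of the `show slave status\G` output. The sketch below runs the extraction against a mocked sample; in practice, the actual `mysql -e 'show slave status\G'` output would be piped into it:

```shell
# extract_master_host: print the Master_Host field from `show slave status\G` output
extract_master_host() {
    awk -F': ' '/ Master_Host:/ {print $2}'
}

# mocked sample of the relevant lines (real output has many more fields)
sample='             Slave_IO_State: Waiting for master to send event
                Master_Host: 10.11.4.197
                Master_User: repl'

echo "$sample" | extract_master_host    # prints 10.11.4.197
```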
V. Summary
1. Logs
Monitoring server: /var/log/mysql-mmm/mmm_mond.log
Agent: /var/log/mysql-mmm/mmm_agentd.log
2. Commands
In addition to the commonly used mmm_control commands, the mmm suite also provides the following:
mmm_backup: back up files
mmm_restore: restore files
mmm_clone: clone files
3. Notes