MMM High-Availability MySQL Cluster Solution
Introduction to MMM high availability solutions
MMM (Master-Master Replication Manager for MySQL) is a set of flexible scripts that provides monitoring, failover, and management for MySQL master-master replication configurations. An MMM deployment is typically a dual-master, multi-slave architecture: using MySQL replication, two MySQL servers act as master and slave to each other, and only one of them accepts writes at any given moment, which avoids the data conflicts that multi-node writes would cause. When the writable node fails, MMM detects the failure and automatically switches the writer role to the other master node so that service continues, achieving high availability for MySQL.
In short, MMM monitors and manages MySQL master-master replication and service state. It also monitors the replication and health of any number of slave nodes and performs automatic failover when any node goes down. MMM thereby provides a good platform for MySQL read/write-splitting architectures.
Advantages and disadvantages of MMM Kit
The MMM cluster suite offers good stability, high availability, and scalability. When the active master node fails, the standby master takes over immediately, and the remaining slave nodes automatically repoint their replication to the new master without human intervention. On the downside, the MMM architecture requires multiple nodes and multiple IP addresses, so it places demands on the number of servers, and under very heavy read/write traffic it is not especially stable: replication latency, failed switchovers, and similar issues can occur. MMM is therefore not well suited to environments with strict data-safety requirements and busy read/write workloads.
Working principle of MMM High Availability Kit
The main functions of MMM kit are implemented through the following three scripts:
1. mmm_mond: the monitoring daemon, which runs on the management node. It is responsible for monitoring all the database nodes and for deciding on and carrying out role switching.
2. mmm_agentd: the agent daemon, which runs on each MySQL server. It answers the monitor's probes and executes simple remote service operations.
3. mmm_control: A simple management script used to view and manage the cluster running status and manage the mmm_mond process.
MMM high-availability MySQL configuration Scheme
In the dual-master architecture implemented with the MMM suite, five IP addresses are needed: each master node has a fixed physical IP address, plus there are two read-only virtual IPs (reader IPs) and one write virtual IP (writer IP). The three virtual IPs are not pinned to either node; they move back and forth between the two master nodes depending on which node the monitor considers healthy. Under normal conditions, Master1 holds two virtual IPs (a reader IP and the writer IP) and Master2 holds one (a reader IP). If Master1 fails, all of its reader and writer IPs are reassigned to Master2.
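Which node currently holds which virtual IP can be checked with `ip a`: the floating addresses show up as extra /32 entries on the interface. A minimal sketch, assuming captured `ip a` output (the helper name `list_vips` is ours, and the sample is abbreviated):

```shell
# List the floating /32 virtual IPs a node currently carries,
# given `ip a` output on stdin.
list_vips() {
    grep -o 'inet [0-9.]*/32' | awk '{print $2}'
}

# Example against a captured snippet of `ip a` output:
sample='inet 192.168.1.210/24 brd 192.168.1.255 scope global eth0
inet 192.168.1.230/32 scope global eth0
inet 192.168.1.234/32 scope global eth0'
printf '%s\n' "$sample" | list_vips
```

Here the physical /24 address is skipped and only the two VIPs are printed, so a node holding the writer IP is immediately visible.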
Environment Description:
| Host name | IP address | Cluster role | MySQL version | System version |
|-----------|------------|--------------|---------------|----------------|
| Master1 | 192.168.1.210 | Master (read/write) | MySQL 5.6.28 | CentOS 6.7 |
| Master2 | 192.168.1.211 | Standby master (read/write) | MySQL 5.6.28 | CentOS 6.7 |
| Slave1 | 192.168.1.250 | Slave node (read-only) | MySQL 5.6.28 | CentOS 6.7 |
| Slave2 | 192.168.1.209 | Slave node (read-only) | MySQL 5.6.28 | CentOS 6.7 |
| Monitor | 192.168.1.21 | MMM management node | MySQL 5.6.28 | CentOS 6.7 |
Virtual IP Address:
| Role | IP address | Description |
|------|------------|-------------|
| Writer IP | 192.168.1.230 | Write VIP; only a single node can write through it at a time. |
| Reader IP | 192.168.1.231 | Read-only VIPs; each database node carries one reader VIP. The reader VIPs can be load-balanced with software such as LVS or HAProxy. |
| Reader IP | 192.168.1.232 | |
| Reader IP | 192.168.1.233 | |
| Reader IP | 192.168.1.234 | |
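As noted above, the reader VIPs can sit behind a software load balancer. A minimal HAProxy fragment is sketched below; the listen name, bind port, balance algorithm, and health-check options are illustrative assumptions, not part of the article's setup:

```
# Hypothetical HAProxy fragment balancing the four reader VIPs.
# Names, bind port, and check settings are assumptions for illustration.
listen mysql-readers
    bind 0.0.0.0:3307
    mode tcp
    balance roundrobin
    server reader1 192.168.1.231:3306 check
    server reader2 192.168.1.232:3306 check
    server reader3 192.168.1.233:3306 check
    server reader4 192.168.1.234:3306 check
```

Applications would then point their read connections at the balancer's 3307 port instead of an individual reader VIP.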
MMM installation and configuration
Step 1: Install the MMM Kit
1. Install the full MMM suite on the monitor node:
[root@monitor ~]# rpm -ivh epel-release-6-8.noarch.rpm
[root@monitor ~]# yum install mysql-mmm mysql-mmm-agent mysql-mmm-tools mysql-mmm-monitor
2. Install the mysql-mmm-agent service on every MySQL node:
[root@master1 ~]# yum install mysql-mmm-agent
[root@master2 ~]# yum install mysql-mmm-agent
[root@slave1 ~]# yum install mysql-mmm-agent
[root@slave2 ~]# yum install mysql-mmm-agent
Step 2: Configure replication from Master1 to the two slave instances (the master-master replication between Master1 and Master2 must already be configured in advance).
1. On Master1, grant a replication user for Slave1 and Slave2:
[root@master1 ~]# mysql -uroot -ppasswd
mysql> grant replication slave on *.* to 'repl'@'192.168.1.250' identified by 'replpasswd';
mysql> grant replication slave on *.* to 'repl'@'192.168.1.209' identified by 'replpasswd';
mysql> flush privileges;
2. On Slave1 and Slave2, point replication at Master1:
[root@slave1 ~]# mysql -uroot -ppasswd
mysql> change master to
    -> master_host='192.168.1.210',
    -> master_user='repl',
    -> master_password='replpasswd',
    -> master_port=3306,
    -> master_log_file='mysql-bin.000034',
    -> master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.06 sec)

[root@slave2 ~]# mysql -uroot -ppasswd
mysql> change master to
    -> master_host='192.168.1.210',
    -> master_user='repl',
    -> master_password='replpasswd',
    -> master_port=3306,
    -> master_log_file='mysql-bin.000034',
    -> master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.02 sec)
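Before moving on, it is worth confirming on each slave that both replication threads are running: `show slave status\G` should report `Slave_IO_Running: Yes` and `Slave_SQL_Running: Yes`. A small sketch that checks a captured copy of that output (the helper name is ours):

```shell
# Report OK only if both Slave_IO_Running and Slave_SQL_Running are Yes
# in the `show slave status\G` text fed on stdin.
both_threads_running() {
    [ "$(grep -cE 'Slave_(IO|SQL)_Running: Yes')" -eq 2 ] && echo OK || echo FAIL
}

# Example with a captured fragment of `show slave status\G`:
printf 'Slave_IO_Running: Yes\nSlave_SQL_Running: Yes\n' | both_threads_running
```

A node reporting FAIL here needs its replication fixed before MMM is layered on top.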
Step 3: Add the following parameter to the [mysqld] section of /etc/my.cnf on all MySQL nodes:
read_only = 1
Step 4: Create the following two users on all MySQL nodes:
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.1.%' identified by 'monitorpasswd';
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.1.%' identified by 'agentpasswd';
Step 5: Configure mmm_common.conf on the MMM monitor node
[root@monitor ~]# ls /etc/mysql-mmm/
mmm_agent.conf  mmm_common.conf  mmm_mon.conf  mmm_mon_log.conf  mmm_tools.conf
# mmm_common.conf is configured identically on all nodes; mmm_mon.conf is configured only on the MMM monitor node.
[root@monitor ~]# vim /etc/mysql-mmm/mmm_common.conf    # identical on all MMM nodes
active_master_role  writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        repl
    replication_password    replpasswd
    agent_user              mmm_agent
    agent_password          agentpasswd
</host>

<host db1>
    ip      192.168.1.210
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.1.211
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.1.209
    mode    slave
</host>

<host db4>
    ip      192.168.1.250
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.1.230
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3, db4
    ips     192.168.1.231, 192.168.1.232, 192.168.1.233, 192.168.1.234
    mode    balanced
</role>
Step 6: Configure mmm_mon.conf on the MMM management Node
[root@monitor ~]# vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            192.168.1.1, 192.168.1.2, 192.168.1.210, 192.168.1.211, 192.168.1.209, 192.168.1.250
    flap_duration       3600
    flap_count          3
    auto_set_online     8

    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing. See the section 5.10 "Kill Host
    # Functionality" in the PDF documentation.
    #
    # kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host
    #
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    monitorpasswd
</host>

debug 0
Step 7: Configure mmm_agent.conf on all MySQL nodes
[root@master1 mysql]# vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db1    # on the four MySQL nodes set this to db1, db2, db3, and db4 respectively
Step 8: Set ENABLED=1 on all nodes
cat /etc/default/mysql-mmm-agent
# mysql-mmm-agent defaults
ENABLED=1
Step 9: Start the MMM Service
Start the service on the MMM Console
[root@monitor ~]# /etc/init.d/mysql-mmm-monitor start
Starting MMM Monitor Daemon: [ OK ]
Start the service on each mysql Node
[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
Starting MMM Agent Daemon: [ OK ]
View Cluster running status
[root@monitor mysql-mmm]# mmm_control show
  db1(192.168.1.210) master/AWAITING_RECOVERY. Roles:
  db2(192.168.1.211) master/AWAITING_RECOVERY. Roles:
  db3(192.168.1.209) slave/AWAITING_RECOVERY. Roles:
  db4(192.168.1.250) slave/AWAITING_RECOVERY. Roles:
If the above AWAITING_RECOVERY status appears, you can manually set each MySQL node to the online status.
[root@monitor ~]# mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online db4
OK: State of 'db4' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control show
  db1(192.168.1.210) master/ONLINE. Roles: reader(192.168.1.234), writer(192.168.1.230)
  db2(192.168.1.211) master/ONLINE. Roles: reader(192.168.1.231)
  db3(192.168.1.209) slave/ONLINE. Roles: reader(192.168.1.232)
  db4(192.168.1.250) slave/ONLINE. Roles: reader(192.168.1.233)
Check the running status of each node
[root@monitor ~]# mmm_control checks all
db4  ping         [last change: 2016/02/27 05:13:57]  OK
db4  mysql        [last change: 2016/02/27 05:13:57]  OK
db4  rep_threads  [last change: 2016/02/27 05:13:57]  OK
db4  rep_backlog  [last change: 2016/02/27 05:13:57]  OK: Backlog is null
db2  ping         [last change: 2016/02/27 05:13:57]  OK
db2  mysql        [last change: 2016/02/27 05:13:57]  OK
db2  rep_threads  [last change: 2016/02/27 05:13:57]  OK
db2  rep_backlog  [last change: 2016/02/27 05:13:57]  OK: Backlog is null
db3  ping         [last change: 2016/02/27 05:13:57]  OK
db3  mysql        [last change: 2016/02/27 05:13:57]  OK
db3  rep_threads  [last change: 2016/02/27 05:13:57]  OK
db3  rep_backlog  [last change: 2016/02/27 05:13:57]  OK: Backlog is null
db1  ping         [last change: 2016/02/27 05:13:57]  OK
db1  mysql        [last change: 2016/02/27 05:13:57]  OK
db1  rep_threads  [last change: 2016/02/27 05:13:57]  OK
db1  rep_backlog  [last change: 2016/02/27 05:13:57]  OK: Backlog is null
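The `mmm_control checks all` output is also easy to scan by machine. A small sketch (the helper name is ours) that surfaces any non-OK lines from a captured copy of the output:

```shell
# Print only the check lines that did NOT report OK.
failing_checks() {
    grep -v 'OK' || true
}

# Example with a captured sample, one healthy line and one failure:
sample='db1 ping [last change: 2016/02/27 05:13:57] OK
db1 mysql [last change: 2016/02/27 05:13:57] ERROR: Connect error'
printf '%s\n' "$sample" | failing_checks
```

With fully healthy output the helper prints nothing, which makes it convenient to wire into a cron job or alert script.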
Step 10: view the virtual IP Address Allocation of each node
Master1
[root@master1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.210/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.234/32 scope global eth0
    inet 192.168.1.230/32 scope global eth0
Master2
[root@master2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.211/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.231/32 scope global eth0
Slave1
[root@slave1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.250/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.213/32 scope global eth0
    inet 192.168.1.233/32 scope global eth0
Slave2
[root@slave2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.209/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.232/32 scope global eth0
Step 11: Test MMM for MySQL High Availability
1. Create a user that can log in to the cluster remotely through the VIP:
mysql> grant all on *.* to 'hm'@'192.168.1.%' identified by '741616710';
2. Log in through the writer VIP 192.168.1.230 and run a few statements to verify that the data is synchronized on every node.
[root@monitor ~]# mysql -uhm -p741616710 -h 192.168.1.230
mysql> show variables like 'hostname%';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| hostname      | master1 |
+---------------+---------+
1 row in set (0.01 sec)

mysql> create database test1;
Query OK, 1 row affected (0.00 sec)

mysql> use test1
Database changed
mysql> create table tt1 (id int, name varchar(20));
Query OK, 0 rows affected (0.13 sec)

mysql> insert into tt1 (id,name) values (1,'july'),(2,'dime');
Query OK, 2 rows affected (0.04 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> select * from tt1;
+------+------+
| id   | name |
+------+------+
|    1 | july |
|    2 | dime |
+------+------+
2 rows in set (0.00 sec)
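Besides spot-checking rows through the VIP, replication delay on each slave can be read from `Seconds_Behind_Master` in `show slave status\G`. A small parsing sketch (the helper name is ours):

```shell
# Extract the Seconds_Behind_Master value from `show slave status\G` output.
replication_lag() {
    awk -F': *' '/Seconds_Behind_Master/ {print $2}'
}

# Example with a captured line of `show slave status\G` output:
printf '        Seconds_Behind_Master: 0\n' | replication_lag
```

A value of 0 means the slave has applied everything it has received; a steadily growing value is the replication-latency symptom mentioned in the drawbacks section above.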
Step 12: test the MMM failover Function
1. Disable the MySQL service on Master1 and check the status.
[root@monitor ~]# mmm_control show
  db1(192.168.1.210) master/HARD_OFFLINE. Roles:
  db2(192.168.1.211) master/ONLINE. Roles: reader(192.168.1.231), writer(192.168.1.230)
  db3(192.168.1.209) slave/ONLINE. Roles: reader(192.168.1.232), reader(192.168.1.234)
  db4(192.168.1.250) slave/ONLINE. Roles: reader(192.168.1.233)
[root@monitor ~]# mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control show
  db1(192.168.1.210) master/ONLINE. Roles: reader(192.168.1.232)
  db2(192.168.1.211) master/ONLINE. Roles: reader(192.168.1.231), writer(192.168.1.230)
  db3(192.168.1.209) slave/ONLINE. Roles: reader(192.168.1.234)
  db4(192.168.1.250) slave/ONLINE. Roles: reader(192.168.1.233)
2. After Master1 recovers, if you want the writer VIP to move back to Master1, you can switch it manually:
[root@monitor ~]# mmm_control move_role writer db1
OK: Role 'writer' has been moved from 'db2' to 'db1'. Now you can wait some time and check new roles info!
[root@monitor ~]# mmm_control show
  db1(192.168.1.210) master/ONLINE. Roles: reader(192.168.1.232), writer(192.168.1.230)
  db2(192.168.1.211) master/ONLINE. Roles: reader(192.168.1.231)
  db3(192.168.1.209) slave/ONLINE. Roles: reader(192.168.1.234)
  db4(192.168.1.250) slave/ONLINE. Roles: reader(192.168.1.233)
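For scripted checks like the failover test above, the current writer can be parsed out of `mmm_control show` output. A small sketch (the helper name is ours):

```shell
# Print the host entry that currently carries the writer role,
# given `mmm_control show` output on stdin.
writer_host() {
    awk '/writer\(/ {print $1}'
}

# Example with a captured sample of `mmm_control show` output:
sample='db1(192.168.1.210) master/ONLINE. Roles: reader(192.168.1.232), writer(192.168.1.230)
db2(192.168.1.211) master/ONLINE. Roles: reader(192.168.1.231)'
printf '%s\n' "$sample" | writer_host
```

Comparing this value before and after a failover drill confirms that the writer role actually moved.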
MMM high-availability MySQL combined with Amoeba for read/write splitting
Step 1: Install Amoeba
Prepare a sixth server, outside the MMM cluster, to act as the Amoeba server.
1. Amoeba is written in Java, so a Java runtime must be installed first:
[root@amoeba ~]# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
2. Download amoeba-mysql-3.0.5-RC-distribution.zip:
http://nchc.dl.sourceforge.net/project/amoeba/Amoeba%20for%20mysql/3.x/amoeba-mysql-3.0.5-RC-distribution.zip
3. Unpack the package into the /usr/local/ directory:
[root@amoeba src]# unzip amoeba-mysql-3.0.5-RC-distribution.zip
[root@amoeba src]# mv amoeba-mysql-3.0.5-RC /usr/local/amoeba
[root@amoeba ~]# ls /usr/local/amoeba/
benchmark  bin  conf  jvm.properties  lib
Step 2: Configure Amoeba
[root@amoeba ~]# vim /usr/local/amoeba/conf/dbServers.xml
(The dbServers.xml listing was lost during extraction; only its `<?xml version="1.0" encoding="gbk"?>` declaration survives.)
[root@amoeba ~]# vim /usr/local/amoeba/conf/amoeba.xml
(The amoeba.xml listing was also lost during extraction. The surviving fragments show that it includes ${amoeba.home}/conf/dbServers.xml, rule.xml, ruleFunctionMap.xml, and functionMap.xml, and routes the defaultPool and writePool to writedb and the readPool to myslaves.)
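Since the original XML listings were destroyed, the sketch below reconstructs what the surviving fragments (a writedb pool, a myslaves pool, the read/write pool routing) suggest, based on Amoeba 3.x's stock configuration files. Every name, address, and credential here is an assumption drawn from the rest of the article, not the author's exact configuration:

```xml
<!-- dbServers.xml (reconstruction, not the original listing):
     a shared abstractServer, the writer VIP as the write pool,
     and a virtual pool over the four reader VIPs. -->
<dbServer name="abstractServer" abstractive="true">
  <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">
    <property name="port">3306</property>
    <property name="user">hm</property>
    <property name="password">741616710</property>
  </factoryConfig>
</dbServer>

<dbServer name="writedb" parent="abstractServer">
  <factoryConfig>
    <property name="ipAddress">192.168.1.230</property>
  </factoryConfig>
</dbServer>

<dbServer name="reader1" parent="abstractServer">
  <factoryConfig>
    <property name="ipAddress">192.168.1.231</property>
  </factoryConfig>
</dbServer>
<!-- reader2/reader3/reader4 follow the same pattern for .232/.233/.234 -->

<dbServer name="myslaves" virtual="true">
  <poolConfig class="com.meidusa.toolkit.common.poolable.MultipleLoadBalanceObjectPool">
    <property name="loadbalance">1</property>
    <property name="poolNames">reader1,reader2,reader3,reader4</property>
  </poolConfig>
</dbServer>
```

In amoeba.xml, the queryRouter section would then point writes at writedb and reads at myslaves, matching the surviving fragments: `defaultPool` and `writePool` set to writedb, `readPool` set to myslaves.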
Start the Amoeba service
[root@amoeba local]# /usr/local/amoeba/bin/launcher &
    at com.meidusa.toolkit.net.ServerableConnectionManager.willStart(ServerableConnectionManager.java:144)
    at com.meidusa.toolkit.net.util.LoopingThread.run(LoopingThread.java:59)
2015-10-29 21:00:44 [INFO] Project Name=Amoeba-MySQL, PID=25948, System shutdown....
2015-10-29 21:01:34 [INFO] Project Name=Amoeba-MySQL, PID=25996, starting...
log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml
2015-10-29 21:01:34,715 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA
log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf
2015-10-29 21:01:35,065 INFO  net.ServerableConnectionManager - Server listening on 0.0.0.0/0.0.0.0:8066.
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0
2015-10-29 21:11:40 [INFO] Project Name=Amoeba-MySQL, PID=26119, starting...
log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml
2015-10-29 21:11:41,446 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA
log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf
2015-10-29 21:11:41,843 INFO  net.ServerableConnectionManager - Server listening on 0.0.0.0/0.0.0.0:8066.
View java Processes
[root@amoeba ~]# netstat -ntlp | grep java
tcp    0    0 :::8066    :::*    LISTEN    26119/java
Test the load balancing function of Amoeba.
[root@monitor ~]# mysql -uroot -p741616710 -h 192.168.1.31 -P8066
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 102888364
Server version: 5.1.45-mysql-amoeba-proxy-3.0.4-BETA Source distribution

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test1
Database changed
mysql> select * from tt1;
+------+---------+
| id   | name    |
+------+---------+
|  210 | master1 |
|  211 | master2 |
+------+---------+
2 rows in set (0.00 sec)

mysql> select * from tt1;
+------+---------+
| id   | name    |
+------+---------+
|  210 | master1 |
|  250 | slave1  |
+------+---------+
2 rows in set (0.00 sec)

mysql> select * from tt1;
+------+---------+
| id   | name    |
+------+---------+
|  210 | master1 |
|  209 | slave2  |
+------+---------+
2 rows in set (0.01 sec)

mysql> select * from tt1;
+------+---------+
| id   | name    |
+------+---------+
|  210 | master1 |
|  211 | master2 |
+------+---------+
2 rows in set (0.01 sec)

mysql> select * from tt1;
+------+---------+
| id   | name    |
+------+---------+
|  210 | master1 |
|  211 | master2 |
+------+---------+
2 rows in set (0.01 sec)

mysql> select * from tt1;
+------+---------+
| id   | name    |
+------+---------+
|  210 | master1 |
|  250 | slave1  |
+------+---------+
2 rows in set (0.01 sec)