MMM high-availability MySQL solution


MMM is a mature MySQL high-availability cluster solution written in Perl and based on MySQL master-slave replication. It consists of a management side (monitor) and multiple agent-side (agent) components. With MMM we can monitor and manage the state of MySQL master-master replication and its services, monitor the replication and running state of multiple slave nodes, and switch automatically when any node fails. In the entire cluster, only one master is writable at any given time.
The main functions of the MMM suite are implemented by three scripts:
1. mmm_mond
This is the monitoring process that runs on the management node. It is responsible for monitoring all databases and for deciding and carrying out role switches for all nodes.
2. mmm_agentd
This is the agent process that runs on each MySQL server. It performs the probes required by the monitor and executes simple remote service settings.
3. mmm_control
This is a simple administrative script used to view and manage the running state of the cluster, and to manage the mmm_mond process.

Disadvantages:
The MMM architecture requires multiple nodes and multiple IP addresses, so the number of servers needed is relatively high. In addition, MMM is not very stable under read/write-heavy workloads and may run into problems such as replication delay and failed switchover. Therefore, MMM is not well suited to environments with very high data-safety requirements and frequent reads and writes.

MMM supports a variety of application architectures, the simplest of which is a two-node environment, as shown below:

The dual-master architecture implemented with the MMM suite requires five IP addresses: the fixed physical IPs of the two master nodes, two read-only VIPs, and one writable VIP. The three virtual IPs are not fixed to any one node; instead they switch back and forth between the two master nodes, and how they switch depends on node availability.
Under normal circumstances (system and network normal, MySQL service normal, master-master replication normal, no replication delay, etc.), Master1 holds two virtual IPs (a reader IP and the writer IP) and Master2 holds one virtual IP (a reader IP). If Master1 fails, all reader and writer virtual IPs are reassigned to Master2.
On top of the two master nodes, multiple slave nodes can be added to form a dual-master, multi-slave application architecture, as shown below:

The dual-master, multi-slave MySQL architecture suits business environments with a very large read-query volume. It uses two masters to achieve high availability for MySQL write operations, then adds multiple slave nodes behind the masters. All slave nodes serve read queries only, and read traffic across the slave nodes can be load balanced with software such as LVS or HAProxy.

This architecture requires five separate servers: one MMM management node, two MySQL master nodes, and two MySQL slave nodes.
Dual-master, dual-slave cluster configuration environment:

hostname   physical IP      cluster role                server_id
Monitor    192.168.88.11    MMM management node         none
Master1    192.168.88.20    Master node (read/write)    1
Master2    192.168.88.21    Master node (read/write)    2
Slave1     192.168.88.22    Slave node (read-only)      3
Slave2     192.168.88.23    Slave node (read-only)      4
Dual-master, dual-slave architecture read/write separation IP list:

Virtual IP       IP role     Description
192.168.88.30    writer IP   Write VIP used in the application configuration; supports single-point write only
192.168.88.31    reader IP   Read-query VIPs used in the application configuration, one per database node;
192.168.88.32    reader IP   the reader VIPs can be load balanced with software such as LVS or HAProxy
192.168.88.33    reader IP
192.168.88.34    reader IP
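The reader VIPs listed above can be load balanced with HAProxy, for example. The sketch below is a minimal illustration, not from the original article; the frontend address 192.168.88.40:3306 is an assumption and should be replaced with whatever address the application connects to:

```
# Minimal HAProxy TCP load-balancing sketch for the four reader VIPs
# (the bind address 192.168.88.40:3306 is an assumed placeholder)
listen mysql-readers
    bind 192.168.88.40:3306
    mode tcp
    balance roundrobin
    server reader1 192.168.88.31:3306 check
    server reader2 192.168.88.32:3306 check
    server reader3 192.168.88.33:3306 check
    server reader4 192.168.88.34:3306 check
```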

Installation and configuration of MMM
1. MMM suite installation

# Install the EPEL repository: https://fedoraproject.org/wiki/EPEL/zh-cn
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# Run the following command on the Monitor node
yum -y install mysql-mmm*
# Each MySQL DB node only needs mysql-mmm-agent
yum -y install mysql-mmm-agent
2. MMM cluster suite configuration
Before configuring the MMM suite, you need to set up master-master synchronization between Master1 and Master2, and master-slave synchronization from Master1 to Slave1 and Slave2. It is important to note that when configuring Slave1 and Slave2 for master-slave synchronization, the master_host address must be the physical IP of the Master1 node, not a virtual IP.
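The replication setup described above can be sketched with statements like the following. The repl_user/123456 credentials match the replication account configured later in mmm_common.conf; the binary log file name and position are placeholders and must be read from SHOW MASTER STATUS on the actual master:

```sql
-- On Master2, Slave1 and Slave2: point replication at Master1's physical IP
-- (MASTER_LOG_FILE / MASTER_LOG_POS are placeholders; take the real values
--  from SHOW MASTER STATUS on Master1)
CHANGE MASTER TO
    MASTER_HOST='192.168.88.20',
    MASTER_USER='repl_user',
    MASTER_PASSWORD='123456',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
-- On Master1, run the same statement with MASTER_HOST='192.168.88.21'
-- to complete the master-master pair.
```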
Because the MMM cluster suite exerts strict control over reads and writes of the data, the MMM management mechanism requires the read_only parameter to be set on all MySQL hosts first, i.e. add to the mysqld section of /etc/my.cnf:
read_only = 1
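In /etc/my.cnf this looks roughly as follows (a minimal sketch: server-id must be unique per node, as in the table above, and binary logging must be enabled for replication to work; the log-bin name is an assumed placeholder):

```ini
[mysqld]
server-id = 1         # Unique per node: 1-4 as listed in the table above
log-bin   = mysql-bin # Binary logging, required for replication
read_only = 1         # MMM toggles this to 0 on the current writer node
```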

In addition to the replication account, create two more accounts on all MySQL nodes: a monitor user and a monitor agent user:

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.88.%' IDENTIFIED BY '123456';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.88.%' IDENTIFIED BY '123456';

Four configuration files are of main concern: mmm_mon.conf, mmm_common.conf, mysql-mmm-agent, and mmm_agent.conf. Among them, mmm_mon.conf is configured only on the MMM management node; mmm_common.conf must be configured on all MMM cluster nodes with exactly the same content; mmm_agent.conf must also be configured on all MySQL nodes.
(1) Configuring the mmm_common.conf file

active_master_role writer    # When this parameter is set, all MySQL nodes in the cluster should have read_only = 1 set; MMM then judges each node's role dynamically and automatically runs "set global read_only = 0" on the node holding the writer role (enabling writes), while the read-only roles keep read_only = 1

cluster_interface eth0                 # Network interface to use; this cannot be a sub-interface such as eth0:0
pid_path /run/mysql-mmm-agent.pid      # PID file location
bin_path /usr/libexec/mysql-mmm/       # Path to the MMM executables
replication_user repl_user             # Replication user name
replication_password 123456            # Replication password
agent_user mmm_agent                   # User that changes the read-only setting
agent_password 123456                  # Password for the agent user

<host db1>
    ip 192.168.88.22 is wrong here; use the physical IP of db1:
    ip 192.168.88.20    # Physical IP of db1
    mode master         # Role of db1 is master
    peer db2            # Peer host of db1; db1 and db2 are both masters
</host>

<host db2>
    ip 192.168.88.21    # Physical IP of db2
    mode master         # Role of db2 is master
    peer db1            # Peer host of db2; db1 and db2 are both masters
</host>

<host db3>
    ip 192.168.88.22    # Physical IP of db3
    mode slave          # Role of db3 is slave
</host>

<host db4>
    ip 192.168.88.23    # Physical IP of db4
    mode slave          # Role of db4 is slave
</host>

<role writer>                   # Writer role
    hosts db1, db2              # Hosts that may take the writer role
    ips 192.168.88.30           # Writable virtual IP
    mode exclusive              # Exclusive mode: the role has only one IP, which can be assigned to only one host at a time; the writer role is usually exclusive
</role>

<role reader>                   # Reader role
    hosts db1, db2, db3, db4    # Hosts that may serve reads
    ips 192.168.88.31, 192.168.88.32, 192.168.88.33, 192.168.88.34    # Readable virtual IPs
    mode balanced               # Balanced mode: the role can have multiple IPs, which are distributed dynamically across multiple MySQL hosts; the reader role is usually balanced
</role>

(2) Configuring the mmm_agent.conf file

include mmm_common.conf
this db1    # Hostname of this node; on Master2 the corresponding value is db2

(3) Configuring the mmm_mon.conf file
include mmm_common.conf

<monitor>
    ip 127.0.0.1        # For security, listen on the local host only; the default port is 9988
    pid_path /run/mysql-mmm-monitor.pid
    bin_path /usr/libexec/mysql-mmm
    status_path /var/lib/mysql-mmm/mmm_mond.status
    ping_ips 192.168.88.1,192.168.88.20,192.168.88.21,192.168.88.22,192.168.88.23    # IP addresses used to test network availability; the network is considered normal as long as one of them answers ping, so listing the local-network IPs is enough
    flap_duration 3600  # Flapping time window, default 3600s
    flap_count 3        # Maximum number of state flaps allowed within the window
    auto_set_online 0   # Whether to bring flapping hosts back online automatically after the window
</monitor>

monitor_user mmm_monitor     # Monitor user account
monitor_password 123456      # Monitor user password
debug 0                      # Run mode of the MMM manager: 1 for debug mode, 0 for normal mode

(4) Configuring the mysql-mmm-agent file
The /etc/default/mysql-mmm-agent file must be set on all MySQL nodes of the MMM cluster; it contains only one line: enabled=1

The permissions of all configuration files in the MMM cluster are best set to 640; otherwise errors may occur when starting the MMM services.

# Start the mysql-mmm-monitor service on the MMM cluster management node
systemctl start mysql-mmm-monitor
# Start the agent service on each agent node in turn
systemctl start mysql-mmm-agent

MMM Basic maintenance:

View running status
There are two ways: one is the mmm_control command provided by the MMM cluster (available only on the management node); the other is to read the MMM cluster's run logs, which are located under /var/log/mysql-mmm on each cluster node.

mmm_control help                      # Show help information
mmm_control ping                      # Test the monitor's running state
mmm_control show                      # Show the status of all nodes in the MMM cluster
mmm_control checks                    # Show detailed check status for all nodes
mmm_control set_online <host>         # Bring a host online
mmm_control set_offline <host>        # Take a host offline
mmm_control mode                      # Show the current run mode of the MMM cluster; there are three modes: active, manual, and passive, with active the default
mmm_control set_active                # Switch to active mode
mmm_control set_manual                # Switch to manual mode
mmm_control set_passive               # Switch to passive mode
mmm_control move_role [--force] <role> <host>    # Move a role, e.g. move the writer role from db1 to db2
mmm_control set_ip <ip> <host>        # Assign a virtual IP to a host
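A typical maintenance drill on the management node might look like the following sketch, using the db1/db2 hostnames configured above:

```
# Check the overall cluster state first
mmm_control show
# Manually move the writer role from db1 to db2 (e.g. before maintenance on db1)
mmm_control move_role writer db2
# Take db1 offline, perform maintenance, then bring it back online
mmm_control set_offline db1
mmm_control set_online db1
```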

MMM integrated with the Amoeba application architecture

# Download amoeba-mysql-3.0.5-RC-distribution.zip
wget http://jaist.dl.sourceforge.net/project/amoeba/Amoeba%20for%20mysql/3.x/amoeba-mysql-3.0.5-RC-distribution.zip
# Download the JDK and configure the environment variables
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_171
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH

Amoeba has quite a few configuration files, but if you only need the read/write separation function, you only have to configure dbservers.xml and amoeba.xml; if you also need IP access control, you must modify the access_list.conf file as well.

