Brief introduction
MMM (Master-Master Replication Manager for MySQL) is a set of scripts, written in Perl, for failover and day-to-day management of MySQL master-master (dual-master) replication. Although it is called dual-master replication, only one master accepts writes at any given time; the other serves part of the read traffic, which keeps the standby master warm and speeds up the moment of switchover. In short, the MMM scripts implement failover, and the additional tool scripts that ship with it can also load-balance reads across multiple slaves.
MMM can automatically or manually remove the virtual IP of a server whose replication lag is too high, and it also helps with backing up data and keeping the two nodes synchronized. Because MMM cannot fully guarantee data consistency, it suits scenarios where consistency requirements are modest but business availability should be maximized. For businesses with strict data-consistency requirements, MMM is not recommended as a high-availability architecture.
The MMM project is hosted on Google Code: http://code.google.com/p/mysql-master-master
The official website is: http://mysql-mmm.org
Below we walk through a practical case to understand how MMM fits together; the topology is summarized in the tables below.
The specific configuration information is as follows:
Role        IP address     Host name   server-id
monitoring  192.168.0.30   db2         -
master1     192.168.0.60   db1         1
master2     192.168.0.50   db2         2
slave1      192.168.0.40   db3         3
The service IPs used by the application are as follows:
IP address     Role   Description
192.168.0.108  write  The application connects to this IP for write requests to the primary
192.168.0.88   read   The application connects to this IP for read requests
192.168.0.98   read   The application connects to this IP for read requests
The specific configuration steps are as follows:
(1) Host Configuration
Configure /etc/hosts on every host, adding entries for all hosts:
# cat /etc/hosts
192.168.0.60    db1
192.168.0.50    db2
192.168.0.40    db3
(2) First install MySQL on the 3 database hosts and set up replication (192.168.0.60 and 192.168.0.50 are masters of each other; 192.168.0.40 is a slave of 192.168.0.60). The replication setup itself is omitted here; if that part is new to you, this article will be hard to follow. Then add the following to each MySQL configuration file, making sure server-id is not duplicated.
On db1 (192.168.0.60):
server-id                = 1
log_slave_updates        = 1
auto-increment-increment = 2
auto-increment-offset    = 1
On db2 (192.168.0.50):
server-id                = 2
log_slave_updates        = 1
auto-increment-increment = 2
auto-increment-offset    = 2
On db3 (192.168.0.40):
server-id         = 3
log_slave_updates = 1
The server-id values do not have to be sequential, as long as they are unique. On the two masters, auto-increment-increment and auto-increment-offset make the AUTO_INCREMENT sequences interleave, so writes on both masters cannot collide on a primary key; a sketch follows.
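To see how those two settings cooperate, here is a minimal sketch (the table name and values are hypothetical, not from the original setup): with auto-increment-increment = 2 and offsets 1 and 2, db1 hands out odd ids and db2 even ids.

mysql> CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(10));
mysql> INSERT INTO t (v) VALUES ('a'), ('b');
-- executed on db1 (offset 1): the rows get id 1 and 3
-- the same inserts on db2 (offset 2) would get id 2 and 4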
(3) Install the Perl modules required by MMM on all servers, either with the script below, or by installing the EPEL repository and then installing MMM with yum -y install mysql-mmm*:
# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# yum -y install mysql-mmm*
# cat install.sh
#!/bin/bash
# Fetch the cpanminus installer, then install MMM's Perl dependencies with it.
wget http://xrl.us/cpanm --no-check-certificate
mv cpanm /usr/bin
chmod 755 /usr/bin/cpanm
cat > /root/list << EOF
Algorithm::Diff
Class::Singleton
DBI
DBD::mysql
File::Basename
File::stat
File::Temp
Log::Dispatch
Log::Log4perl
Mail::Send
Net::ARP
Net::Ping
Proc::Daemon
Thread::Queue
Time::HiRes
EOF
for package in `cat /root/list`
do
    cpanm $package
done
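After the script finishes, a quick sanity check (my suggestion, not a step from the original article) is to make sure a few of the key modules actually load:

# perl -MDBI -e 'print "DBI ok\n"'
# perl -MDBD::mysql -e 'print "DBD::mysql ok\n"'
# perl -MNet::ARP -e 'print "Net::ARP ok\n"'

Each command should print its ok line; a "Can't locate ..." error means that module still needs installing.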
(4) Download the mysql-mmm software and install it on all servers:
# wget http://mysql-mmm.org/_media/:mmm2:mysql-mmm-2.2.1.tar.gz
# mv :mmm2:mysql-mmm-2.2.1.tar.gz mysql-mmm-2.2.1.tar.gz
# tar xf mysql-mmm-2.2.1.tar.gz
# cd mysql-mmm-2.2.1
# make install
The main directory layout of mysql-mmm after installation is as follows (note: the paths differ between a yum install and a source install):
Directory                                 Description
/usr/lib/perl5/vendor_perl/5.8.8/MMM      The main Perl modules used by MMM
/usr/lib/mysql-mmm                        The main scripts used by MMM
/usr/sbin                                 The main commands used by MMM
/etc/init.d/                              Init scripts for the MMM agent and monitor services
/etc/mysql-mmm                            MMM configuration files; all config files live here by default
/var/log/mysql-mmm                        Default location of the MMM logs
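With a yum install, the three configuration files should already exist under /etc/mysql-mmm, which you can confirm before editing anything; the listing should look roughly like:

# ls /etc/mysql-mmm/
mmm_agent.conf  mmm_common.conf  mmm_mon.conf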
This completes the basic installation of MMM. Next, the configuration files need to be filled in: mmm_common.conf and mmm_agent.conf are the agent-side configuration files, and mmm_mon.conf is the monitor-side configuration file.
(5) Configure the agent-side configuration files. These must be set up separately on db1, db2, and db3.
Configure the agent configuration file on the db1 host:

# cd /etc/mysql-mmm/
# cat mmm_common.conf
active_master_role      writer

<host default>
    cluster_interface       eth0
    replication_user        repl
    replication_password    123456
    agent_user              mmm_agent
    agent_password          mmm_agent
</host>

<host db1>
    ip      192.168.0.60
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.0.50
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.0.40
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.0.108
    mode    exclusive
</role>

<role reader>
    hosts   db2, db3
    ips     192.168.0.88, 192.168.0.98
    mode    balanced
</role>
Here replication_user is the account used for replication, agent_user is the account the agent uses, and mode marks each host as a master, a standby master, or a slave. In <role writer>, mode exclusive means only one host can hold the writer role at a time; hosts lists the real host names (or IPs) of the current master and the standby master, and ips is the virtual IP exposed to applications for writes. In <role reader>, hosts lists the real slave hosts, and ips are the virtual IPs exposed for reads.
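To make the virtual IPs concrete: applications never connect to a real server address, only to the VIPs defined in the roles above. A hypothetical smoke test from an application host might look like this (the appuser credentials are placeholders, not accounts created in this article):

# mysql -h 192.168.0.108 -u appuser -papppass -e "SELECT @@hostname"    # writes land on the current writer
# mysql -h 192.168.0.88  -u appuser -papppass -e "SELECT @@hostname"    # reads via one reader VIP
# mysql -h 192.168.0.98  -u appuser -papppass -e "SELECT @@hostname"    # reads via the other reader VIP

SELECT @@hostname conveniently shows which real server answered.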
Since db2 and db3 also need the agent configuration, copy mmm_common.conf from db1 directly to those two hosts.
(Note: the monitor host needs this file as well.)
# scp /etc/mysql-mmm/mmm_common.conf db2:/etc/mysql-mmm/
# scp /etc/mysql-mmm/mmm_common.conf db3:/etc/mysql-mmm/
Then configure mmm_agent.conf under /etc/mysql-mmm on each of the three hosts db1, db2, and db3. The file is identical except for the this line, which names the local host: in this environment db1 is configured with this db1, db2 with this db2, and db3 with this db3.
On db1 (192.168.0.60):
include mmm_common.conf
this db1
On db2 (192.168.0.50):
include mmm_common.conf
this db2
On db3 (192.168.0.40):
include mmm_common.conf
this db3
Configure the monitor's configuration file on db2 (192.168.0.30), the monitoring host:
# cat /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            192.168.0.40, 192.168.0.50, 192.168.0.60
    auto_set_online     60
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    mmm_monitor
</host>
Compared with the stock configuration file, the only changes are ping_ips, which lists the IP addresses of the hosts monitored by this architecture, and auto_set_online; the monitoring user is configured in the <host default> section.
(6) Create the MySQL users. Three users are needed, described below:
User name     Description                                                        Privileges
monitor user  Used by the MMM monitor to check the health of all MySQL servers   REPLICATION CLIENT
agent user    Used by the MMM agent to change read_only on the masters, etc.     SUPER, REPLICATION CLIENT, PROCESS
repl          Used for replication                                               REPLICATION SLAVE
Grant the privileges on the 3 servers (db1, db2, db3). Because the master-master and master-slave replication is already in place, running the grants on one server is enough (they replicate to the rest). The replication account already exists, so only two accounts are granted here:
mysql> GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%' IDENTIFIED BY 'mmm_agent';
mysql> GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'mmm_monitor';
If you are building everything from scratch, add the replication account as well (then all 3 statements need to run on each of the 3 servers):

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.0.%' IDENTIFIED BY '123456';
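Before starting the daemons, it is worth verifying from the monitoring host (192.168.0.30) that the monitor account can actually reach all three servers. This check is my addition, not a step from the original article, and assumes the mmm_monitor account granted above:

# mysql -h 192.168.0.60 -u mmm_monitor -pmmm_monitor -e "SELECT 1"
# mysql -h 192.168.0.50 -u mmm_monitor -pmmm_monitor -e "SELECT 1"
# mysql -h 192.168.0.40 -u mmm_monitor -pmmm_monitor -e "SELECT 1"

If any of these fails, fix the grants or the network before going further.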
(7) Start the agent service.
Finally, start the agents on db1, db2, and db3, and then start the monitor program on the monitoring host (192.168.0.30):
# /etc/init.d/mysql-mmm-agent start
Starting MMM Agent Daemon:                                 [  OK  ]

Because I installed with yum, the startup output looks slightly different from a source install. ^_^
To start Monitor:
# /etc/init.d/mysql-mmm-monitor start
Starting MMM Monitor Daemon:                               [  OK  ]
The agent log is written to /var/log/mysql-mmm/mmm_agentd.log and the monitor log to /var/log/mysql-mmm/mmm_mond.log; if anything goes wrong during startup, the logs usually contain a detailed record.
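If you want the daemons to come back after a reboot (assuming the yum-installed init scripts used above), you can enable them with chkconfig:

# chkconfig mysql-mmm-agent on      # on db1, db2 and db3
# chkconfig mysql-mmm-monitor on    # on the monitoring host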
(8) Check the status of the cluster hosts on the monitor host:
# mmm_control checks all
db2  ping         [last change: 2014/04/18 00:29:01]  OK
db2  mysql        [last change: 2014/04/18 00:29:01]  OK
db2  rep_threads  [last change: 2014/04/18 00:29:01]  OK
db2  rep_backlog  [last change: 2014/04/18 00:29:01]  OK: Backlog is null
db3  ping         [last change: 2014/04/18 00:29:01]  OK
db3  mysql        [last change: 2014/04/18 00:29:01]  OK
db3  rep_threads  [last change: 2014/04/18 00:29:01]  OK
db3  rep_backlog  [last change: 2014/04/18 00:29:01]  OK: Backlog is null
db1  ping         [last change: 2014/04/18 00:29:01]  OK
db1  mysql        [last change: 2014/04/18 00:29:01]  OK
db1  rep_threads  [last change: 2014/04/18 00:29:01]  OK
db1  rep_backlog  [last change: 2014/04/18 00:29:01]  OK: Backlog is null
(9) Check the online status of the cluster on the monitor host:
# mmm_control show
  db1(192.168.0.60) master/ONLINE. Roles: writer(192.168.0.108)
  db2(192.168.0.50) master/ONLINE. Roles: reader(192.168.0.88)
  db3(192.168.0.40) slave/ONLINE. Roles: reader(192.168.0.98)
(10) Bring all hosts online:
My hosts here are already online; if a host is not online, you can bring it online with the following command (shown for db1):

# mmm_control set_online db1

Since the host is already ONLINE, the command simply reports as much and skips execution.
At this point the whole cluster is configured. From the output we can see that virtual IP 192.168.0.108 has been attached to host 192.168.0.60 to serve writes, virtual IP 192.168.0.88 has been attached to host 192.168.0.50 to serve reads, and virtual IP 192.168.0.98 has been attached to 192.168.0.40, also serving reads.
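You can double-check the VIP placement on each server with ip addr, since MMM attaches the virtual IPs as secondary addresses on the cluster interface (eth0 here is an assumption; use your actual interface). Note that plain ifconfig may not show them:

# ip addr show eth0 | grep 192.168.0

On db1, for example, you would expect to see both its real address 192.168.0.60 and the writer VIP 192.168.0.108.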
MMM high-availability test
Having built the high-availability environment, we can now put MMM through a failover test. First look at the status of the whole cluster; everything is normal.
# mmm_control show
  db1(192.168.0.60) master/ONLINE. Roles: writer(192.168.0.108)
  db2(192.168.0.50) master/ONLINE. Roles: reader(192.168.0.88)
  db3(192.168.0.40) slave/ONLINE. Roles: reader(192.168.0.98)
Simulate db2 (192.168.0.50) going down by manually stopping its MySQL service (a sketch of the stop command follows), and then observe the monitor log:
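The stop itself might be as simple as the following on db2 (the init script name depends on how MySQL was installed; mysqld is typical for a CentOS RPM install):

# /etc/init.d/mysqld stop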
# tail -f /var/log/mysql-mmm/mmm_mond.log
2014/04/18 00:55:53 FATAL State of host 'db2' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
From the log we can see that db2's state changed from ONLINE to HARD_OFFLINE.
Check the latest status of the cluster again:
# mmm_control show
  db1(192.168.0.60) master/ONLINE. Roles: writer(192.168.0.108)
  db2(192.168.0.50) master/HARD_OFFLINE. Roles:
  db3(192.168.0.40) slave/ONLINE. Roles: reader(192.168.0.88), reader(192.168.0.98)

(db2's reader VIP migrates to db3 while db2 is offline.)
Restart MySQL on db2: db2 first goes from HARD_OFFLINE to AWAITING_RECOVERY, and after the auto_set_online interval (60 seconds in our mmm_mon.conf) it is brought back ONLINE. At that point db2 takes over read requests again.
# mmm_control show
  db1(192.168.0.60) master/ONLINE. Roles: writer(192.168.0.108)
  db2(192.168.0.50) master/ONLINE. Roles: reader(192.168.0.88)
  db3(192.168.0.40) slave/ONLINE. Roles: reader(192.168.0.98)
Now simulate an outage of the primary, db1:
To view cluster status:
# mmm_control show
  db1(192.168.0.60) master/HARD_OFFLINE. Roles:
  db2(192.168.0.50) master/ONLINE. Roles: reader(192.168.0.88), writer(192.168.0.108)
  db3(192.168.0.40) slave/ONLINE. Roles: reader(192.168.0.98)
To view MMM logs:
# tail -f /var/log/mysql-mmm/mmm_mond.log
2014/04/18 01:09:20 FATAL State of host 'db1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
From the above we can see that db1 went from ONLINE to HARD_OFFLINE and lost the writer role. Because db2 is the standby master, it took over the writer role, and db3 was re-pointed at the new primary db2. Concretely, db3 obtains db2's current binary log coordinates (the values SHOW MASTER STATUS returns on db2) and then runs CHANGE MASTER TO against db2 directly on db3.
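Under the hood, that is roughly the following sequence; the log file name and position below are placeholders that in reality come from the SHOW MASTER STATUS output on db2, and the repl credentials are the ones from mmm_common.conf:

mysql> SHOW MASTER STATUS;      -- run on db2, the new primary
mysql> STOP SLAVE;              -- the rest runs on db3
mysql> CHANGE MASTER TO
    ->     MASTER_HOST='192.168.0.50',
    ->     MASTER_USER='repl',
    ->     MASTER_PASSWORD='123456',
    ->     MASTER_LOG_FILE='mysql-bin.000001',
    ->     MASTER_LOG_POS=4;
mysql> START SLAVE;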
db1, db2, and db3 form a one-master, two-slaves replication topology. Suppose db2 and db3 are lagging behind db1 and db1's MySQL dies at that moment: db3 will first wait until it has applied everything it already received from db1, and only then re-point to the new primary with CHANGE MASTER TO db2. However, if db2 is still behind db1 when the switchover happens, db2 becomes writable anyway, and data consistency can no longer be guaranteed.
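One practical habit is to keep an eye on replication lag so you know how exposed you are before a failover occurs; Seconds_Behind_Master is only an approximation, but it is the quickest signal:

mysql> SHOW SLAVE STATUS\G
-- fields worth watching:
--   Seconds_Behind_Master : 0 means the SQL thread has applied everything received so far
--   Master_Log_File / Read_Master_Log_Pos : how far the IO thread has read from the master

If the slaves regularly lag, the consistency risk described above grows accordingly.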
Summary:
MMM is not suitable for environments that demand strict data consistency, but it delivers high availability well.
Resources:
http://mysql-mmm.org/mmm2:guide
MySQL high-availability architecture: MMM