High availability for MySQL based on MMM


I. Preliminary planning:

Lab environment: CentOS 6.5

Database: MySQL 5.6.19

Virtualization: VMware Workstation 10


Network topology:

There are three nodes: master1, master2, and slave. master1 and master2 form a MySQL dual-master (master-master) pair, and the slave node replicates from master1.

Because we are limited to three nodes, the slave node also serves as the monitoring host.


IP address planning (the virtual IPs are managed by MMM and float between nodes on failure):

master1: real IP 10.43.2.81; initially holds the writer VIP 10.43.2.101, through which applications connect for writes

master2: real IP 10.43.2.93; initially holds the reader VIP 10.43.2.99, through which applications connect for reads

slave: real IP 10.43.2.83; initially holds the reader VIP 10.43.2.100, through which applications connect for reads


Division of privileges:

master1 and master2 are masters for each other, so a replication user is created on both of them: user repl, password repl.

The slave replicates from master1 through the replication user created above. Because this is a test environment, the same replication credentials are reused everywhere for convenience; in a production environment this should be avoided.

II. MySQL-related configuration

Install MySQL on all three nodes; the installation procedure itself is not covered here and can be looked up separately.

1. Set up dual-master replication between master1 and master2:

Modify the master1 configuration file as follows:

[mysqld]
character-set-server = utf8
server-id = 1
datadir = /mydata/data
log-bin = /mydata/binglogs/master-bin
relay_log = /mydata/relaylogs/relay
binlog_format = mixed
thread_concurrency = 4
log-slave-updates
sync_binlog = 1
auto_increment_increment = 2
auto_increment_offset = 1
sql_mode = no_engine_substitution,strict_trans_tables

[client]
default-character-set = utf8

In MySQL on master1, create a user that master2 can use for replication: user repl, password repl.

Likewise, in MySQL on master2, create the same replication user for master1 to use:

mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'%' IDENTIFIED BY 'repl';
mysql> FLUSH PRIVILEGES;

The '%' here means the repl user may log in from any remote host to replicate the master's data; this should be avoided in a production environment.
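For reference, a tighter grant that a production setup might use instead; a minimal sketch, assuming replication traffic only ever comes from the 10.43.2.0/24 subnet used in this lab:

mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'10.43.2.%' IDENTIFIED BY 'repl';
mysql> FLUSH PRIVILEGES;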

2. On master1:

mysql> SHOW MASTER STATUS;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |      663 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

3. Modify the master2 configuration file:

[mysqld]
character-set-server = utf8
server-id = 2        # the server-id must be unique among all replicating servers
datadir = /mydata/data
log-bin = /mydata/binglogs/master-bin
relay_log = /mydata/relaylogs/relay
binlog_format = mixed
thread_concurrency = 4
log-slave-updates
sync_binlog = 1
sql_mode = no_engine_substitution,strict_trans_tables

[client]
default-character-set = utf8
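Note that this file omits the auto_increment settings that master1 carries. In a dual-master setup both masters usually set them, with distinct offsets, so that auto-generated keys never collide. A sketch of what master2 would add, assuming master1 keeps increment 2 / offset 1 as configured above:

auto_increment_increment = 2    # both masters step auto-increment IDs by 2
auto_increment_offset = 2       # master2 takes even IDs, master1 (offset 1) takes odd IDs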

4. On master2:

mysql> SHOW MASTER STATUS;
+-------------------+----------+--------------+------------------+-------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+-------------------+----------+--------------+------------------+-------------------+
| master-bin.000001 |      663 |              |                  |                   |
+-------------------+----------+--------------+------------------+-------------------+
1 row in set (0.01 sec)

5. Connect master2 to master1:

mysql> CHANGE MASTER TO MASTER_HOST='10.43.2.81', MASTER_USER='repl', MASTER_PASSWORD='repl';

In production you should also specify the master's binary log file and the position at which replication should start. Here the coordinates are left at their defaults, so replication starts from the beginning, which is acceptable because there is little data.
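As an illustration, a sketch of that production-style command with explicit coordinates, reusing the values reported by SHOW MASTER STATUS on master1 above (file master-bin.000001, position 663):

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='10.43.2.81',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='repl',
    ->   MASTER_LOG_FILE='master-bin.000001',
    ->   MASTER_LOG_POS=663;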

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
    ...
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    ...
    Seconds_Behind_Master: 0

If these three fields show the values above, replication is working.

6. Likewise, connect master1 to master2:

mysql> CHANGE MASTER TO MASTER_HOST='10.43.2.93', MASTER_USER='repl', MASTER_PASSWORD='repl';

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
    ...
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    ...
    Seconds_Behind_Master: 0

Again, if these three fields show the values above, replication is working.

7. Configuration file for the slave:

[mysqld]
character-set-server = utf8
server-id = 3
datadir = /mydata/data
relay_log = /mydata/relaylogs/relay
binlog_format = mixed
thread_concurrency = 4
sql_mode = no_engine_substitution,strict_trans_tables

[client]
default-character-set = utf8

The slave does not need the binary log enabled, only the relay log.
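A quick way to confirm this on the slave (with no log-bin setting, MySQL reports the binary log as off):

mysql> SHOW VARIABLES LIKE 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | OFF   |
+---------------+-------+
1 row in set (0.00 sec)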

8. Connect the slave to master1:

mysql> CHANGE MASTER TO MASTER_HOST='10.43.2.81', MASTER_USER='repl', MASTER_PASSWORD='repl';

mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
    ...
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    ...
    Seconds_Behind_Master: 0

9. Create some data on master2 to test:

Create a database named sanhong on master2:

mysql> CREATE DATABASE sanhong;

Then run SHOW DATABASES on master1:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sanhong            |
| test               |
+--------------------+
5 rows in set (0.32 sec)

The appearance of sanhong shows that replication is working.

10. Run SHOW DATABASES on the slave:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sanhong            |
| test               |
+--------------------+
5 rows in set (0.32 sec)

The appearance of sanhong shows that replication is working here as well.

III. High-availability configuration

The main functions of MMM are provided by the following three programs:

mmm_mond: the monitoring daemon, responsible for all monitoring work; it decides, among other things, when a node should be removed.

mmm_agentd: the agent daemon that runs on each MySQL server and provides a simple remote service to the monitoring node; by default it listens on TCP port 9989.

mmm_control: a command-line tool for managing the mmm_mond process, which by default listens on TCP port 9988.

    1. Install and configure mysql-mmm:

First download the EPEL release package (pick the one matching your operating system version; CentOS 6 here) on all three nodes:

      wget http://mirrors.yun-idc.com/epel/6/i386/epel-release-6-8.noarch.rpm

Install the EPEL repository:

      yum install -y epel-release-6-8.noarch.rpm

Install mysql-mmm-agent (on all three nodes):

      yum -y install mysql-mmm-agent

    2. Edit mmm_common.conf (edit it once, then copy it to all three nodes). It sets the active master role:

active_master_role      writer
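The article does not reproduce the rest of mmm_common.conf, which names the hosts and the reader/writer roles. The following is only a sketch consistent with the IP plan above; the eth0 interface name and the mmm_agent credentials in <host default> are assumptions, not values from the original:

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        repl
    replication_password    repl
    agent_user              mmm_agent
    agent_password          mmm_agent
</host>

<host db1>
    ip      10.43.2.81
    mode    master
    peer    db2
</host>

<host db2>
    ip      10.43.2.93
    mode    master
    peer    db1
</host>

<host db3>
    ip      10.43.2.83
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     10.43.2.101
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3
    ips     10.43.2.99, 10.43.2.100
    mode    balanced
</role>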

Then modify the mmm_agent.conf configuration file on each node:

include mmm_common.conf

# The 'this' variable refers to this server. Proper operation requires
# that 'this' server (db1 by default), as well as all other servers,
# have the proper IP addresses set in mmm_common.conf.
this db3

Make sure the name after 'this' matches the node the file lives on; on master1, for example, it should be changed to db1 (corresponding to the host names in mmm_common.conf).

3. Since we use the slave as the monitoring host, install the full monitoring package on it as well:

yum install mysql-mmm* -y


Edit mmm_mon.conf:

vim /etc/mysql-mmm/mmm_mon.conf

include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            10.43.2.81, 10.43.2.83, 10.43.2.93
    auto_set_online     60

    # The kill_host_bin does not exist by default, though the monitor will
    # throw a warning about it missing. See the section 5.10 "Kill Host
    # functionality" in the PDF documentation.
    #
    # kill_host_bin     /usr/libexec/mysql-mmm/monitor/kill_host
    #
</monitor>
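Note that a stock mmm_mon.conf also carries the credentials the monitor uses to log in to the databases, which this article does not show. A minimal sketch, assuming a monitoring account named mmm_monitor has been created and granted REPLICATION CLIENT on all three nodes:

<host default>
    monitor_user        mmm_monitor
    monitor_password    mmm_monitor
</host>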


4. Start MMM and test:

The agent must be started on all three nodes:

[root@<node> mysql-mmm]# service mysql-mmm-agent start
Starting MMM Agent Daemon:                                 [  OK  ]


Start the monitoring service on the monitoring host:

[root@<monitor> mysql-mmm]# service mysql-mmm-monitor start
Starting MMM Monitor Daemon:                               [  OK  ]


View the status of each node's database from the monitoring host:

[root@<monitor> mysql-mmm]# mmm_control show
  db1(10.43.2.81) master/ONLINE. Roles: writer(10.43.2.101)
  db2(10.43.2.93) master/ONLINE. Roles: reader(10.43.2.99)
  db3(10.43.2.83) slave/ONLINE. Roles: reader(10.43.2.100)


The results match our plan above. Now let us take one database offline:

[root@<monitor> mysql-mmm]# mmm_control set_offline db1
OK: State of 'db1' changed to ADMIN_OFFLINE. Now you can wait some time and check all roles!
[root@<monitor> mysql-mmm]# mmm_control show
  db1(10.43.2.81) master/ADMIN_OFFLINE. Roles:
  db2(10.43.2.93) master/ONLINE. Roles: reader(10.43.2.99), writer(10.43.2.101)
  db3(10.43.2.83) slave/ONLINE. Roles: reader(10.43.2.100)

db1 has now gone offline, and its writer VIP has flowed to master2, i.e. db2.

At this point we create a database named jin on master2 and observe what happens on the slave:

master2:
mysql> CREATE DATABASE jin;
Query OK, 1 row affected (0.02 sec)

slave:
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| jin                |
| mysql              |
| performance_schema |
| sanhong            |
| test               |
+--------------------+
6 rows in set (0.00 sec)

The appearance of jin shows that although the slave was replicating from master1, once master1 went offline the slave automatically switched to synchronizing data from master2.
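To finish the test, the offlined node can be brought back; a sketch of the expected interaction (the confirmation message mirrors the set_offline output above and may differ slightly):

[root@<monitor> mysql-mmm]# mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!

The writer role does not fail back automatically; if it should return to db1, it can be handed over explicitly:

[root@<monitor> mysql-mmm]# mmm_control move_role writer db1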


IV. Summary

With the steps above, implementing MMM-based MySQL high availability turns out to be simple. Along the way we also found ways in which MMM improves on keepalived:

    1. MMM monitors not only the running state of the two master nodes but also that of any number of slave nodes; when a node fails, it automatically switches the failed node's virtual IP to a healthy node, preserving the continuity and high availability of the read and write services.

    2. MMM does more than transfer virtual IPs automatically: more importantly, if the active master fails, it automatically repoints the slaves behind it to the standby master and lets replication continue, with no manual changes to the replication configuration. This is a capability that many other MySQL high-availability schemes lack.

In fact, simply killing master1's MySQL process is enough to make the VIP flow to master2; that demonstration is omitted here.

This article is from the Linux OPS blog; please retain the source: http://germanygu.blog.51cto.com/3574209/1637119
