Dual-Master Replication Architecture and MMM for MySQL

Introduction

Common replication architectures in MySQL include "one master, one slave", "one master, multiple slaves", "dual masters", "multi-level (chained) replication", and "multi-master ring";

The "one master, multiple slaves" architecture is the most common and flexible, and can satisfy a variety of requirements, such as:

  • Use different slave databases for different roles (for example, adding different indexes or using different storage engines);

  • Use one slave database as a standby master that receives no traffic other than replication;

  • Place a slave database in a remote data center for disaster recovery;

  • Delay one or more slave databases for disaster recovery;

  • Use one of the slave databases as backup, training, development, or test servers;

The "dual-master" replication architecture is used in special scenarios, such as two offices in different geographic locations that each require a writable copy of the data;

The biggest problem with this architecture is resolving data conflicts and inconsistencies, especially when the two servers modify the same row at the same time, or both insert into a table that has an AUTO_INCREMENT column;
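AUTO_INCREMENT collisions can be avoided by interleaving the two masters' id sequences: set auto_increment_increment = 2 on both, and give them auto_increment_offset = 1 and 2 respectively (these directives appear in the full configuration later in this article). A small shell sketch of the resulting (hypothetical) id sequences:

```shell
#!/bin/sh
# With auto_increment_increment=2, each master allocates every second id;
# auto_increment_offset (1 on one master, 2 on the other) decides which
# half of the sequence it uses, so the two masters never hand out the same id.
m1=$(seq 1 2 9 | paste -sd' ' -)     # first five ids allocated on master 1
m2=$(seq 2 2 10 | paste -sd' ' -)    # first five ids allocated on master 2
echo "master1 ids: $m1"    # master1 ids: 1 3 5 7 9
echo "master2 ids: $m2"    # master2 ids: 2 4 6 8 10
```

The two sequences are disjoint by construction, which sidesteps duplicate-key conflicts on replicated inserts.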

By configuring one server as a read-only passive server, write conflicts can be avoided. In this active-passive mode, the master-master replication architecture makes it easy to swap the active and passive roles back and forth, so you can perform maintenance, table optimization, operating system upgrades, and other tasks without taking the service offline;
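In this active-passive mode, swapping roles comes down to toggling read_only on the two masters (a minimal sketch using standard MySQL statements; the application's connections, or the writable VIP, must be moved at the same time):

```sql
-- On the currently active master (about to become passive):
SET GLOBAL read_only = 1;

-- On the currently passive master (about to become active):
SET GLOBAL read_only = 0;
```

Note that read_only does not restrict users holding the SUPER privilege, which is one reason application accounts should not be granted SUPER.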

General process for configuring Active-Passive mode master-master replication architecture:

  • Ensure that the two servers have the same data;

  • Enable binary logs, select a unique server ID, and create a replication account;

  • Enable logging of updates received from replication (log_slave_updates), which is the key to failover and failback;

  • Configure the passive server as read-only to prevent conflicts with updates on the active server;

  • Start the MySQL instance of each server;

  • Configure each master as a slave of the other, starting replication from the newly created binary logs;
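The steps above map to a handful of my.cnf directives; a minimal sketch (values illustrative, the full configuration used in this deployment appears later):

```ini
server-id         = 1           # unique on every server
log-bin           = mysql-bin   # enable the binary log
log_slave_updates = 1           # also log updates received via replication
read_only         = 1           # on the passive server only
```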

In addition, to eliminate single points of failure (SPOF) at sites in different geographic locations, redundancy can be added to each master, that is, a slave database can be attached to each master;

MMM (Master-Master Replication Manager for MySQL) is a set of scripts for monitoring and managing the dual-master replication architecture. By configuring one writable VIP and several read-only VIPs, it provides automatic failover, read load balancing, and related functions;

Architecture Design

Server planning

  • Host1 (192.168.0.45): mysqld instances on port 3406 (db1, master1) and port 3506 (db3, slave);
  • Host2 (192.168.0.46): mysqld instances on port 3406 (db2, master2) and port 3506 (db4, slave);

Virtual IP planning

  • Writable VIP: 192.168.0.11 (exclusive; held by either db1 or db2);
  • Read-only VIPs: 192.168.0.12, 192.168.0.13, 192.168.0.14, 192.168.0.15 (balanced across db1-db4);

Configure and deploy

Dual-master replication architecture deployment

For MySQL or MariaDB installation and initialization, refer to the blog "MySQL (Part 1) - Architecture - Installation - Initialization - Connection - Management Tools - Data Files"

Use mysqld_multi to start multiple mysqld instances on one host

Database initialization

# cd /usr/local/mysql
# scripts/mysql_install_db --user=mysql --datadir=/data/mariadb_data_3406/
# scripts/mysql_install_db --user=mysql --datadir=/data/mariadb_data_3506/

Database configuration

# vi /etc/my.cnf on host Host1
[mysqld_multi]
mysqld     = /usr/local/mysql/bin/mysqld_safe
mysqladmin = /usr/local/mysql/bin/mysqladmin

[mysqld1]
port    = 3406
socket  = /tmp/mysql3406.sock
skip-external-locking
datadir = /data/mariadb_data_3406/
default_storage_engine = InnoDB
log-bin   = mysql-bin
relay-log = /data/relaylogs_3406/relay-bin   # specify the relay log path
log_slave_updates = 1          # write updates received via replication to this server's binary log
auto_increment_increment = 2   # step size of the auto-increment field in dual-master replication
auto_increment_offset = 1      # starting value of the auto-increment field; 1 on this master
sync_binlog = 1                # ensure the binary log is flushed to disk at commit
binlog_format = row
server-id = 11                 # note: server-id must be unique

[mysqld2]
port    = 3506
socket  = /tmp/mysql3506.sock
skip-external-locking
datadir = /data/mariadb_data_3506/
default_storage_engine = InnoDB
log-bin   = mysql-bin
relay-log = /data/relaylogs_3506/relay-bin
log_slave_updates = 1
sync_binlog = 1
binlog_format = row
server-id = 12

# vi /etc/my.cnf on host Host2 (identical, except for the directives below)
[mysqld1]
auto_increment_offset = 2      # starting value of the auto-increment field; 2 on this master
server-id = 21

[mysqld2]
server-id = 22

Start a database instance

# On both Host1 and Host2
/etc/init.d/mysqld_multi start 1    # the corresponding stop operation is /etc/init.d/mysqld_multi stop 1
/etc/init.d/mysqld_multi start 2    # the corresponding stop operation is /etc/init.d/mysqld_multi stop 2

Log on to the database

# On both Host1 and Host2
mysql -S /tmp/mysql3406.sock    # log on to master1 (Host1) or master2 (Host2)
mysql -S /tmp/mysql3506.sock    # log on to slave1 (Host1) or slave2 (Host2)

Create accounts (on the master1 instance; the grants replicate to the other instances)

GRANT REPLICATION CLIENT ON *.* TO '3m_moni'@'192.168.0.%' IDENTIFIED BY '3m_12345';                  # MMM monitoring account
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO '3m_agen'@'192.168.0.%' IDENTIFIED BY '3m_12345';  # MMM agent account
GRANT REPLICATION SLAVE ON *.* TO '3m_repl'@'192.168.0.%' IDENTIFIED BY '3m_12345';                   # replication account

Configure Data Synchronization

# Before pointing an instance at its master, first query that master's current
# binary log file and event position, i.e. execute SHOW MASTER STATUS on the
# master; the MASTER_LOG_FILE and MASTER_LOG_POS values below are determined accordingly.

# Instances replicating from db1 (192.168.0.45:3406):
CHANGE MASTER TO MASTER_HOST='192.168.0.45', MASTER_PORT=3406, MASTER_USER='3m_repl', MASTER_PASSWORD='3m_12345', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=2448;
CHANGE MASTER TO MASTER_HOST='192.168.0.45', MASTER_PORT=3406, MASTER_USER='3m_repl', MASTER_PASSWORD='3m_12345', MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=365;

# Instances replicating from db2 (192.168.0.46:3406):
CHANGE MASTER TO MASTER_HOST='192.168.0.46', MASTER_PORT=3406, MASTER_USER='3m_repl', MASTER_PASSWORD='3m_12345', MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=342;
CHANGE MASTER TO MASTER_HOST='192.168.0.46', MASTER_PORT=3406, MASTER_USER='3m_repl', MASTER_PASSWORD='3m_12345', MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=342;
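After CHANGE MASTER TO, replication still has to be started on each instance with the standard statement (not shown in the original listing):

```sql
START SLAVE;
-- Slave_IO_Running and Slave_SQL_Running should then both report Yes:
SHOW SLAVE STATUS\G
```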

View synchronization status

# Check on a slave instance
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.45
                  Master_User: 3m_repl
                  Master_Port: 3406
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: ...
               Relay_Log_File: relay-bin.000010
                Relay_Log_Pos: ...
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
             Master_Server_Id: 11

MMM installation and deployment

On Host1: deploy both the agent and the monitor

yum -y install mysql-mmm-*

# Common settings: vi /etc/mysql-mmm/mmm_common.conf
active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        3m_repl      # replication account
    replication_password    3m_12345     # replication account password
    agent_user              3m_agen      # agent account
    agent_password          3m_12345     # agent account password
</host>

<host db1>
    ip          192.168.0.45
    mysql_port  3406         # port of the mysqld instance to connect to
    mode        master
    peer        db2          # peer means db1 and db2 are equals
</host>

<host db2>
    ip          192.168.0.46
    mysql_port  3406
    mode        master
    peer        db1
</host>

<host db3>
    ip          192.168.0.45
    mysql_port  3506
    mode        slave
</host>

<host db4>
    ip          192.168.0.46
    mysql_port  3506
    mode        slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.0.11     # only one writable VIP can be configured
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3, db4
    ips     192.168.0.12, 192.168.0.13, 192.168.0.14, 192.168.0.15   # multiple read-only VIPs can be configured
    mode    balanced
</role>

# Copy the common configuration file to the other host
scp mmm_common.conf 192.168.0.46:/etc/mysql-mmm/

# Monitor settings: vi /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip          127.0.0.1
    pid_path    /var/run/mysql-mmm/mmm_mond.pid
    bin_path    /usr/libexec/mysql-mmm
    status_path /var/lib/mysql-mmm/mmm_mond.status
    ping_ips    192.168.0.45, 192.168.0.46   # real host IPs pinged during health monitoring, not VIPs
</monitor>

<host default>
    monitor_user        3m_moni      # monitor account
    monitor_password    3m_12345     # monitor account password
</host>

debug 0

# Agent settings: vi /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db1    # two mysqld instances run on this host, so two "this" entries are configured
this db3

On Host2: only the agent needs to be deployed

yum -y install mysql-mmm-agent

# Agent settings: vi /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db2
this db4

Service startup

# On Host1: start the agent and the monitoring program
service mysql-mmm-agent start
service mysql-mmm-monitor start

# On Host2: start the agent
service mysql-mmm-agent start

Test and verification

View MMM-based status information in the dual-master replication architecture:

# Run on Host1
[root@mysql-mmm ~]# mmm_control show
  db1(192.168.0.45) master/ONLINE. Roles: reader(192.168.0.14), writer(192.168.0.11)
  db2(192.168.0.46) master/ONLINE. Roles: reader(192.168.0.13)
  db3(192.168.0.45) slave/ONLINE. Roles: reader(192.168.0.15)
  db4(192.168.0.46) slave/ONLINE. Roles: reader(192.168.0.12)

Manual health monitoring for each node

[root@mysql-mmm ~]# mmm_control checks
  db4  ping         [last change: 2014/05/06 22:38:27]  OK
  db4  mysql        [last change: 2014/05/06 22:38:27]  OK
  db4  rep_threads  [last change: 2014/05/06 22:38:27]  OK
  db4  rep_backlog  [last change: 2014/05/06 22:38:27]  OK: Backlog is null
  db2  ping         [last change: 2014/05/06 22:38:27]  OK
  db2  mysql        [last change: 2014/05/06 22:38:27]  OK
  db2  rep_threads  [last change: 2014/05/06 22:38:27]  OK
  db2  rep_backlog  [last change: 2014/05/06 22:38:27]  OK: Backlog is null
  db3  ping         [last change: 2014/05/06 22:38:27]  OK
  db3  mysql        [last change: 2014/05/06 22:38:27]  OK
  db3  rep_threads  [last change: 2014/05/06 22:38:27]  OK
  db3  rep_backlog  [last change: 2014/05/06 22:38:27]  OK: Backlog is null
  db1  ping         [last change: 2014/05/06 22:38:27]  OK
  db1  mysql        [last change: 2014/05/06 22:38:27]  OK
  db1  rep_threads  [last change: 2014/05/06 22:38:27]  OK
  db1  rep_backlog  [last change: 2014/05/06 22:38:27]  OK: Backlog is null

Additional instructions

  • In this demonstration, a front-end program must connect to 192.168.0.11:3406 for writes; for reads it can connect to one or more of 192.168.0.12-15, where the port may be 3406 or 3506;

  • When a read-only VIP drifts to another host, the mysqld port behind it may change from the front-end program's point of view; in a production environment it is therefore recommended to run every instance on the standard port 3306;

  • Even after MMM provides automatic failover for the dual-master replication architecture, the front-end program should not communicate with MySQL directly; a read/write splitter (such as Amoeba) is still needed to present a unified external connection address and distribute reads and writes to the back ends;
