MySQL Group Replication


MySQL Group Replication (MGR) is built on native replication plus the Paxos protocol. It provides strong data-consistency guarantees and can be used to build highly available, disaster-tolerant environments with multi-point reads and writes and load balancing. MGR became a GA plugin in MySQL 5.7.17.




I. Features supported:

1. Data Consistency

2. Multi-primary topology: data can be read and written on any node

3. Failure detection: faulty nodes are automatically evicted from the group, and new nodes can be added automatically

II. Data operations (DML)

1. Multi-point write conflicts

When the same rows are updated on different nodes at the same time, the order of the transactions decides the outcome: the transaction that commits first is allowed to apply, and the others are rolled back (a sketch follows this list).

2. In multi-primary mode, MGR can accept operations on any node, but when a transaction commits it is broadcast to all nodes, which serves two purposes:

Check whether the data conflicts with transactions on other nodes

Deliver the transaction's changes to all nodes for execution
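A minimal sketch of such a conflict in multi-primary mode, assuming a hypothetical InnoDB table t1 with primary key column id (the node labels and statements are illustrative, not from the original article):

# On node A:
BEGIN;
UPDATE t1 SET c = 1 WHERE id = 1;
# On node B, before node A commits:
BEGIN;
UPDATE t1 SET c = 2 WHERE id = 1;
# Both sessions now run COMMIT. The transaction certified first is applied on every node;
# the other fails certification at commit time and is rolled back by the plugin.
COMMIT;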

III. Structure operations (DDL)

1. DDL statements are not transactional: they are not atomic and cannot be conflict-checked or rolled back, so MGR cannot guarantee their consistency. After running DDL you have to verify consistency across nodes yourself.

IV. Fundamentals of MGR replication

1. When a new node joins the group, it randomly picks an existing node (donor) in the cluster and synchronizes data from it. While the donor sends its binlog to the joiner, the group caches the transactions executed during that period, and the joiner keeps applying the incoming changes until its lag reaches zero. The method is not fundamentally different from traditional master-slave replication and is implemented with binlog and GTID, but it is coordinated by the Paxos protocol over dedicated replication channels (group_replication_recovery, group_replication_applier). If the donor goes down, the joiner automatically connects to another node and the replication link to the failed donor is closed. MGR therefore has no single point of failure and does not need traditional link-switching (failover) tooling.
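The two dedicated channels can be inspected through performance_schema; a minimal sketch, assuming the replication status tables introduced in MySQL 5.7:

SELECT channel_name, service_state FROM performance_schema.replication_connection_status WHERE channel_name LIKE 'group_replication%';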

2. Transaction compression: when the binlog data generated by a single transaction exceeds the threshold set by group_replication_compression_threshold, MGR compresses the payload. When network I/O is the bottleneck this can cut network usage by roughly 30%-40%, at the cost of extra CPU, so decide whether to enable it based on your actual workload. By default, payloads larger than about 1 MB are compressed; setting the parameter to 0 turns compression off.
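For example, the threshold can be adjusted at runtime (a sketch; the value is in bytes, and the 2 MB figure is only an illustration):

SET GLOBAL group_replication_compression_threshold = 2097152;   # compress transactions larger than ~2 MB
SET GLOBAL group_replication_compression_threshold = 0;   # turn compression off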

3. MGR adds a view_id to the binlog. A view_id identifies one view, and a view corresponds to one group membership configuration; it is what determines whether a node currently belongs to the group. The view_id changes only when a node joins or leaves the group. It is made up of a random part and an incrementing counter (for example view_id=14816973865257164:35): the random part is generated when the group is formed and then stays fixed, while the counter is incremented by one every time a node is added or removed. The view_id is written to the binlog of every node.
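The current view_id can be checked on any member, for example (a sketch, assuming the 5.7 performance_schema tables):

SELECT member_id, view_id FROM performance_schema.replication_group_member_stats;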

V. Requirements and limitations of MGR

1. Requirements:

The storage engine must be InnoDB, because transaction support is needed for the conflict check performed on every node at commit time

Every table must have a primary key, because primary key values are compared during transaction conflict detection (see the example table after this list)

The binary log must be enabled and use the ROW format

GTIDs must be enabled, the master and slave status information must be stored in tables (--master-info-repository=TABLE, --relay-log-info-repository=TABLE), and --log-slave-updates must be on

Write-set extraction for conflict detection must be configured: --transaction-write-set-extraction=XXHASH64
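A table that satisfies these requirements might look like the following (a hypothetical example table, not from the original article):

CREATE TABLE t_demo (
  id BIGINT NOT NULL PRIMARY KEY,
  val VARCHAR(64)
) ENGINE=InnoDB;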

2. Restrictions:

MGR cannot be used together with binlog checksums; set --binlog-checksum=NONE

Gap locks are not supported in conflict detection, so the isolation level should be set to READ COMMITTED

Table lock statements (LOCK TABLES / UNLOCK TABLES) are not replicated to the other nodes, which affects operations that need to lock tables, such as full backup and restore with mysqldump

Serializable isolation level is not supported

DDL statements are not atomic and cannot be conflict-checked; after running them you should verify yourself that the result is consistent

Foreign keys are not supported

Supports a maximum of 9 nodes

VI. Failure detection and recovery

1. MGR's failure detection works like this: if a node fails to send or receive messages within the detection period, all nodes in the cluster vote on it; if the majority of nodes cannot reach that node, it is judged to have failed and is automatically evicted from the cluster.

2. After the failed node is evicted, the remaining nodes automatically re-form a new group. When the failed node recovers it does not rejoin the cluster automatically; you have to run the join command on it manually, as shown below.
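A minimal sketch of rejoining a recovered node (assuming the group settings from section VII are already in its configuration file):

START GROUP_REPLICATION;
# the node should reappear as ONLINE in performance_schema.replication_group_members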

VII. Installation and Deployment

1. Download the latest 5.7.17 and initialize the instances. For this test I deploy three instances on different ports of the same server, each with a different server_id. Auto-increment columns in the cluster grow using server_id as the offset and 7 as the increment, so I keep server_id to a single digit. The per-instance differences are sketched after the configuration block below.

2. Configuration file

gtid_mode=ON

enforce_gtid_consistency=ON

master_info_repository=TABLE

relay_log_info_repository=TABLE

binlog_checksum=NONE

log_slave_updates=ON

log_bin=binlog

binlog_format=ROW

transaction_write_set_extraction=XXHASH64

loose-group_replication_group_name="AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA"   # group name (a UUID)

loose-group_replication_start_on_boot=off   # whether to start group replication when the server starts

loose-group_replication_local_address="127.0.0.1:3011"   # this node's group-communication endpoint, different from the service port

loose-group_replication_group_seeds="127.0.0.1:3013,127.0.0.1:3012,127.0.0.1:3011"   # all nodes contained in the group

loose-group_replication_bootstrap_group=off   # whether this node bootstraps the group; when the group is created, the other nodes join through this node
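The values above are for one instance; the other two differ only in their per-instance settings. A minimal sketch (the server_id values 1-3 and the mapping of service ports 3001-3003 to group ports 3011-3013 are my assumptions; the article only requires distinct single-digit server_id values and distinct ports):

# instance on service port 3001: server_id=1, loose-group_replication_local_address="127.0.0.1:3011"
# instance on service port 3002: server_id=2, loose-group_replication_local_address="127.0.0.1:3012"
# instance on service port 3003: server_id=3, loose-group_replication_local_address="127.0.0.1:3013"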

3. Start the three instances and install the group replication plugin on each

INSTALL PLUGIN group_replication SONAME 'group_replication.so';

SET sql_log_bin=0;   # do not record the user creation in the binlog, to avoid conflicts when joining the group

GRANT REPLICATION SLAVE ON *.* TO 'rpl_test'@'%' IDENTIFIED BY 'rpl_test';

FLUSH PRIVILEGES;

CHANGE MASTER TO MASTER_USER='rpl_test', MASTER_PASSWORD='rpl_test' FOR CHANNEL 'group_replication_recovery';   # MGR's dedicated recovery replication channel

Restart the MySQL service so that the MGR settings in the configuration file take effect; the group replication plugin was not installed when the instance was initialized, so those configuration entries did not take effect at that point.
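After the restart, a quick check that the plugin is active (a sketch):

SELECT plugin_name, plugin_status FROM information_schema.plugins WHERE plugin_name = 'group_replication';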

4. Creating the group

mysql -h 127.0.0.1 -P 3001   # connect to the first instance (port 3001)

mysql> SET GLOBAL group_replication_bootstrap_group=1;

mysql> START GROUP_REPLICATION;

mysql> SET GLOBAL group_replication_bootstrap_group=0;

On the 3002 and 3003 nodes, simply run START GROUP_REPLICATION; directly. The prerequisite is that step 3 has already been completed on them.

5. With that, a simple single-primary cluster is complete. 5.7.17 adds several tables whose names begin with replication_ under performance_schema that record the running state of MGR; the replication_group_members table shows the current state of each cluster node, and SELECT * FROM performance_schema.global_status WHERE variable_name LIKE '%group%' shows which node is the current primary.
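For example (a sketch of the two checks; table and status-variable names as in MySQL 5.7.17):

SELECT member_host, member_port, member_state FROM performance_schema.replication_group_members;
SELECT variable_name, variable_value FROM performance_schema.global_status WHERE variable_name = 'group_replication_primary_member';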

VIII. Multi-primary mode:

1. Enabling this mode requires setting two parameters:

group_replication_single_primary_mode=0   # easy to understand: turns off single-primary mode

group_replication_enforce_update_everywhere_checks=1   # enables checks that the settings affecting MGR are consistent on every node

2. By default the group starts in single-primary mode, and the other nodes have read_only and super_read_only enabled; these two settings need to be changed (see the check after this list).

3. Once the above configuration is done, multi-point writes are possible. Multi-point writes come with conflict detection, which has a significant performance cost; the official recommendation is to partition the traffic so that the same business data is always routed to the same node, minimizing the probability of conflicts.
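A quick way to verify the mode and the read-only flags on a node (a sketch; in multi-primary mode every node should report OFF for both flags):

SHOW VARIABLES LIKE 'group_replication_single_primary_mode';
SELECT @@GLOBAL.read_only, @@GLOBAL.super_read_only;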

IX. Backup and recovery

I will test Percona's backup tools against MGR later; you can also test on your own with a plain file copy (cp), as long as the corresponding GTID set is recorded.
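The executed GTID set can be recorded with a query like this (a sketch):

SELECT @@GLOBAL.gtid_executed;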

X. Summary

Although MGR follows the principles of MySQL's native logical replication, it adds Paxos-based coordination, binlog caching and compression, caching of in-flight transactions while a node is recovering, and a broadcast to all nodes for consistency checking when a transaction commits, which guarantees the safety of data synchronization. There is still lag: the data is guaranteed to reach all nodes, but the nodes do not apply it at the same pace. The automatic fault tolerance makes building high availability much easier. The relay log is split across two channels: group_replication_recovery stores the recovery data and group_replication_applier stores the incoming group transactions. The plugin has only just reached GA and still needs time to prove itself.


This article is from the "D-Tone de silent" blog; please keep this source: http://xiaozhong991.blog.51cto.com/2354914/1883620
