In this article, I demonstrate how to configure MySQL group replication in the multi-master (multi-primary) model. The configuration differs very little from that of the single-master model.
This article is only a brief walkthrough of building and maintaining multi-master model group replication. For the terms and theory involved, see:
- Extended coverage of the single-master model: Configure group replication for single-master models.
- Group replication theory: the translation of the official MySQL group replication manual.
Before using group replication, it is important to understand its requirements and limitations. See: Requirements and limitations of group replication.
1. Group replication: single-master and multi-master models
MySQL group replication supports both the single-master and the multi-master model, both of which provide high availability for MySQL databases.
- Single Master model:
- There is only one primary node, which handles all writes; the other nodes act as slaves and serve reads (they are automatically set to read-only).
- When the primary node fails, the single-master model automatically elects a new primary, and the remaining nodes then point to it. However, some client requests may still be routed to the failed node, so the problem must be solved elsewhere. This is not a problem for MySQL to solve, but for the client application or a database middleware (common ones: ProxySQL, MySQL Router, Mycat, Amoeba, Cobar, etc.); a query for locating the current primary from SQL is sketched after this list.
- As long as the nodes that fail involuntarily (for voluntary versus involuntary leaving, see Configuring a single master model for group replication) do not break the "majority" requirement, group replication is not blocked.
- Under the multi-master model:
- There is no concept of master and slave: all nodes can both read and write data.
- Because all nodes can serve both reads and writes, performance is better than in the single-master model.
- Configuring the multi-master model involves more requirements and restrictions than the single-master model.
- After a node fails involuntarily, group replication is unaffected apart from a slight performance hit, unless so many nodes fail that the remaining online nodes no longer satisfy the "majority" requirement.
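For single-primary groups, middleware or applications can locate the current primary with a simple query. A minimal sketch is below; in multi-primary mode the status variable is empty, since there is no primary:

select b.member_host, b.member_port
  from performance_schema.global_status a
  join performance_schema.replication_group_members b
    on a.variable_value = b.member_id
 where a.variable_name = 'group_replication_primary_member';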
2. Differences between single and multi-master model configuration files
The following is the configuration file for single-master model group replication:
[mysqld]
datadir=/data
socket=/data/mysql.sock
server-id=100                     # required
gtid_mode=on                      # required
enforce_gtid_consistency=on       # required
log-bin=/data/master-bin          # required
binlog_format=row                 # required
binlog_checksum=none              # required
master_info_repository=TABLE      # required
relay_log_info_repository=TABLE   # required
relay_log=/data/relay-log         # required; the default value is used if omitted
log_slave_updates=ON              # required
sync-binlog=1                     # recommended
log-error=/data/error.log
pid-file=/data/mysqld.pid
transaction_write_set_extraction=XXHASH64   # required
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"   # required
loose-group_replication_start_on_boot=off   # recommended: OFF
loose-group_replication_member_weight=40    # optional; supported since MySQL 5.7.20
loose-group_replication_local_address="ABCDEFGHIJK"   # required; the next line is also required
loose-group_replication_group_seeds="abcdefg"
I explained in detail what each line means in the single-master model group replication article.
The configuration file for the multi-master model is basically the same as for the single-master model, except that it additionally needs:
group_replication_enforce_update_everywhere_checks=ON   # not strictly required, but strongly recommended
group_replication_single_primary_mode=OFF               # required: turns off the single-primary mode, i.e. uses multi-primary
The member-weight line must be commented out, because there is no concept of a master under the multi-master model and therefore no election weight is needed:
# loose-group_replication_member_weight = 40
In addition, the read committed transaction isolation level is recommended, and the serializable level must not be used (this is mandatory). So, unless the business relies on the default repeatable read level, you can also add, if allowed:
tx_isolation = 'read-committed'   # recommended
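Once an instance has been started with these options (as done in the next section), the settings can be verified from any client session (an optional check):

mysql> show variables like 'group_replication_single_primary_mode';              -- expect OFF
mysql> show variables like 'group_replication_enforce_update_everywhere_checks'; -- expect ON
mysql> select @@global.tx_isolation;                                             -- expect READ-COMMITTED if set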
3. Configuring a multi-master model
This article configures a multi-master model replication group with 5 nodes.
Details of the specific environment are as follows:
| Node name | System version | MySQL version | Client interface (eth0) | Intra-group communication interface (eth0) | Data state |
| --------- | -------------- | ------------- | ------------------------ | ------------------------------------------- | ---------- |
| S1 | CentOS 7 | MySQL 5.7.22 | 192.168.100.21 | 192.168.100.21 | fresh instance |
| S2 | CentOS 7 | MySQL 5.7.22 | 192.168.100.22 | 192.168.100.22 | fresh instance |
| S3 | CentOS 7 | MySQL 5.7.22 | 192.168.100.23 | 192.168.100.23 | fresh instance |
| S4 | CentOS 7 | MySQL 5.7.22 | 192.168.100.24 | 192.168.100.24 | fresh instance |
| S5 | CentOS 7 | MySQL 5.7.22 | 192.168.100.25 | 192.168.100.25 | fresh instance |
Each node uses the same interface (eth0) both for providing the external MySQL service and for intra-group communication.
1. Modify the host names and add name resolution.
Because each member of the group resolves the other members' addresses by host name, every node must have its host name configured and must be able to resolve the host names of all the other nodes correctly.
Execute on S1:
hostnamectl set-hostname s1.longshuai.com
hostnamectl -H 192.168.100.22 set-hostname s2.longshuai.com
hostnamectl -H 192.168.100.23 set-hostname s3.longshuai.com
hostnamectl -H 192.168.100.24 set-hostname s4.longshuai.com
hostnamectl -H 192.168.100.25 set-hostname s5.longshuai.com
cat >>/etc/hosts<<eof
192.168.100.21 s1.longshuai.com
192.168.100.22 s2.longshuai.com
192.168.100.23 s3.longshuai.com
192.168.100.24 s4.longshuai.com
192.168.100.25 s5.longshuai.com
eof
scp /etc/hosts 192.168.100.22:/etc
scp /etc/hosts 192.168.100.23:/etc
scp /etc/hosts 192.168.100.24:/etc
scp /etc/hosts 192.168.100.25:/etc
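Before continuing, it is worth confirming on every node that each host name resolves (a minimal check):

for h in s1 s2 s3 s4 s5; do
    getent hosts $h.longshuai.com   # should print the matching 192.168.100.x address
done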
2. Provide the configuration file.
The following is the S1 node configuration file.
[mysqld]
datadir=/data
socket=/data/mysql.sock
server-id=100                     # required
gtid_mode=on                      # required
enforce_gtid_consistency=on       # required
log-bin=/data/master-bin          # required
binlog_format=row                 # required
binlog_checksum=none              # required
master_info_repository=TABLE      # required
relay_log_info_repository=TABLE   # required
relay_log=/data/relay-log         # required; the default value is used if omitted
log_slave_updates=ON              # required
sync-binlog=1                     # recommended
log-error=/data/error.log
pid-file=/data/mysqld.pid
tx_isolation='read-committed'     # recommended
transaction_write_set_extraction=XXHASH64   # required
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"   # required
loose-group_replication_enforce_update_everywhere_checks=ON   # not strictly required, but strongly recommended
loose-group_replication_single_primary_mode=OFF   # required: turn off the single-primary mode, i.e. use multi-primary
loose-group_replication_start_on_boot=off   # recommended: OFF
loose-group_replication_local_address="192.168.100.21:20001"   # required
# the next line is also required; here I put all the nodes in the seed list
loose-group_replication_group_seeds="192.168.100.21:20001,192.168.100.22:20002,192.168.100.23:20003,192.168.100.24:20004,192.168.100.25:20005"
The configuration files for the S2, S3, S4, and S5 nodes are almost identical to S1's; only server_id and loose-group_replication_local_address must be changed to each node's own values.
S2 configuration (not including the same configuration as S1):
server_id=110
loose-group_replication_local_address="192.168.100.22:20002"
S3 configuration (not including the same configuration as S1):
server_id=120
loose-group_replication_local_address="192.168.100.23:20003"
S4 configuration (not including the same configuration as S1):
server_id=130
loose-group_replication_local_address="192.168.100.24:20004"
S5 configuration (not including the same configuration as S1):
server_id=140
loose-group_replication_local_address="192.168.100.25:20005"
After the configuration files are in place, start the mysqld instance on every node.
systemctl start mysqld
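A quick sanity check after startup (optional) confirms that each instance picked up the critical settings:

mysql> select @@server_id, @@gtid_mode, @@enforce_gtid_consistency, @@binlog_format;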
3. Create the replication user and set the recovery channel "group_replication_recovery".
I will use S1 as the first node in the group, so the replication user only needs to be created on S1; when the other nodes join the group, this operation is replicated to them.
Execute on S1:
mysql> create user repl@'192.168.100.%' identified by 'P@ssword1!';
mysql> grant replication slave on *.* to repl@'192.168.100.%';
To set the asynchronous replication channel for the recovery phase:
Execute on S1:
mysql> change master to master_user='repl', master_password='P@ssword1!' for channel 'group_replication_recovery';
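Since master_info_repository=TABLE, the channel definition can be confirmed from the mysql.slave_master_info table (an optional check):

mysql> select Channel_name, User_name, Host from mysql.slave_master_info\G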
Note: in the operations that follow, everything is performed on S1 unless S2, S3, S4, and S5 are explicitly named. Some operations must not be repeated on multiple nodes.
4. Install the group replication plug-in on S1 and boot to create the replication group.
Install the group replication plug-in, performed on S1:
mysql> install plugin group_replication soname 'group_replication.so';
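To confirm the plugin loaded successfully (an optional check):

mysql> select plugin_name, plugin_status from information_schema.plugins where plugin_name = 'group_replication';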
Then make S1 the bootstrap node of the group, executing on S1:
mysql> set @@global.group_replication_bootstrap_group=on;
mysql> start group_replication;
mysql> set @@global.group_replication_bootstrap_group=off;
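Note that only the very first node of a brand-new group should have group_replication_bootstrap_group=ON when starting group replication; bootstrapping a second node would create a separate, independent group. A quick check that the flag was switched back off:

mysql> show variables like 'group_replication_bootstrap_group';   -- expect OFF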
At this point, the replication group required for this experiment has been created by node S1. The S2 through S5 nodes can now be added to the group one after another.
Before adding the other nodes to the group, check whether the S1 node is ONLINE in the group:
mysql> select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: a659234f-6aea-11e8-a361-000c29ed4cf4
 MEMBER_HOST: s1.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
5. Add a new node to the group: S2, S3, S4, S5.
Performed on S2, S3, S4, and S5:
change master to master_user='repl', master_password='P@ssword1!' for channel 'group_replication_recovery';
install plugin group_replication soname 'group_replication.so';
Then execute the following statement on S2, S3, S4, and S5 to start group replication; each node is added to the group automatically when it starts. Note, however, that this must be done sequentially: wait until the start statement returns successfully on one node before moving on to the next (a one-shell sketch follows the statement):
start group_replication;
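If you prefer to drive the joins from a single shell, here is a minimal sketch. The root credentials and remote access are assumptions for illustration, not part of the original setup:

for h in 192.168.100.22 192.168.100.23 192.168.100.24 192.168.100.25; do
    # join the nodes strictly one after another; stop if any node fails to start
    mysql -h $h -u root -p'YourRootPass' -e "start group_replication;" || break
done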
6. See if the members S1, S2, S3, S4, and S5 in the group are all online.
Execute on any of the nodes:
mysql> select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 22e55db0-7604-11e8-b72d-000c29b06c3c
 MEMBER_HOST: s5.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: a5165443-6aec-11e8-a8f6-000c29827955
 MEMBER_HOST: s2.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: a659234f-6aea-11e8-a361-000c29ed4cf4
 MEMBER_HOST: s1.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 4. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: ba505889-6aec-11e8-a864-000c29b0bec4
 MEMBER_HOST: s3.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
*************************** 5. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: bf12fe97-6aec-11e8-a909-000c29e55287
 MEMBER_HOST: s4.longshuai.com
 MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
5 rows in set (0.00 sec)
4. Test the write load of the multi-master model
Under the multi-master model, all nodes can serve both read and write operations. Note, however, that group replication imposes several requirements: tables must use the InnoDB engine (creating a MyISAM table does not raise an error, but writing data to it will), every table must have a primary key, cascading foreign keys are not allowed, and so on.
Perform the following write operation on any node to test:
create database mut_gr;create table mut_gr.t1(id int primary key);insert into mut_gr.t1 values(1),(2),(3),(4);
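As an aside, the primary-key requirement mentioned above can be demonstrated directly (a sketch; t_nopk is a made-up table name, and the exact error message depends on the version):

create table mut_gr.t_nopk(id int);   -- the DDL itself succeeds
insert into mut_gr.t_nopk values(1);  -- expected to fail: group replication requires a primary key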
Continue the write operation on any node:
insert into mut_gr.t1 values(5);
Then check whether the data has been synchronized to every node.
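For example, on any of the other nodes:

mysql> select * from mut_gr.t1;   -- expect rows 1 through 5 on every member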
5. More operations for Group replication maintenance
For more group replication maintenance operations, such as how to restart the group, how to take a node out of the group safely, and how to re-add it to the group, refer to the single-master model group replication article; maintenance is similar for the two models, so this article does not repeat that content.