1. Group Replication server settings
[mysqld]
# server configuration
datadir=<full_path_to_data>/data/s1
basedir=<full_path_to_bin>/mysql-8.0/
port=24801
socket=<full_path_to_sock_dir>/s1.sock
Attention
The non-default port 24801 is used because, in this tutorial, the three server instances share the same host name. This is not required when the instances are deployed on three different machines.
2. Replication framework configuration
Under [mysqld] in my.cnf, add the following settings:
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
3. Group Replication settings
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="188.102.17.179:2525"
loose-group_replication_group_seeds="188.102.17.179:2525,188.102.17.180:2525,188.102.4.131:2525"
loose-group_replication_bootstrap_group=off
Attention:
The loose- prefix on the group_replication_* variables above instructs the server to continue booting even if the Group Replication plugin has not yet been loaded when the server starts; without the prefix, the unknown variables would prevent startup.
Configure transaction_write_set_extraction
Instructs the server to collect the write set for each transaction and encode it as a hash using the XXHASH64 algorithm. As of MySQL 8.0.2 this is the default, so the line can be omitted.
Configure group_replication_group_name
Must be a valid UUID. This UUID is used internally when setting GTIDs for Group Replication events in the binary log. Use SELECT UUID() to generate one.
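For example, a suitable value can be generated on any running MySQL server (the returned UUID will of course differ on each call):

```sql
-- Generate a UUID to paste into group_replication_group_name.
SELECT UUID();
```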
Configure Group_replication_start_on_boot
Indicates that the plug-in does not start the operation automatically when the server starts. This is important when setting up group replication, because it ensures that you can configure the server before manually starting the plug-in.
After you configure a member, you can set Group_replication_start_on_boot to on to automatically start the group replication when the server boots.
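On MySQL 8.0 this change can also be made without editing my.cnf by hand; as a sketch, SET PERSIST writes the value to mysqld-auto.cnf so it survives restarts:

```sql
-- Run once the member is fully configured; the setting persists across restarts.
SET PERSIST group_replication_start_on_boot = ON;
```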
Configure group_replication_local_address
Tells the plugin which network address and port this member uses for internal communication with the other members of the group; in this deployment, the first server uses 188.102.17.179:2525.
Important
Group Replication uses this address for internal member-to-member connections over XCom. It must be different from the host name and port used for SQL, and it must not be used by client applications.
While Group Replication is running, this address must be reserved for internal communication between members of the group.
The network address configured in group_replication_local_address must be resolvable by all group members. For example, if each server instance runs on a different machine with a fixed network address, you can use that machine's IP, such as 10.0.0.1.
If you use a host name, it must be a fully qualified name that resolves through DNS, a correctly configured /etc/hosts file, or another name-resolution mechanism.
The recommended port for group_replication_local_address is 33061. When three server instances run on a single computer, as in this tutorial, distinct ports such as 24901 through 24903 must be used for the internal communication addresses.
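If host names were used instead of the raw IPs in this deployment, each machine's /etc/hosts would need matching entries on every member; a sketch (the names gr-node1..3 are hypothetical):

```
# /etc/hosts on every group member (hypothetical host names)
188.102.17.179  gr-node1
188.102.17.180  gr-node2
188.102.4.131   gr-node3
```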
Configure group_replication_group_seeds
Sets the host name and port of the group members that a new member uses to establish its connection to the group; these members are called seed members. Once the connection is established, the group membership information is listed in performance_schema.replication_group_members.
Typically, the group_replication_group_seeds list contains the hostname:port of each group member's group_replication_local_address, but this is not mandatory; you can select a subset of the group members as seeds.
Important: the hostname:port entries in group_replication_group_seeds are the seed members' internal network addresses, as configured by group_replication_local_address,
not the SQL hostname:port used for client connections and shown, for example, in the performance_schema.replication_group_members table.
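To see the distinction, the membership table reports each member's SQL address rather than its XCom address; a sketch of a query against the standard performance_schema table:

```sql
-- MEMBER_HOST/MEMBER_PORT show the SQL address (e.g. port 24801),
-- not the internal address from group_replication_local_address (port 2525).
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
FROM performance_schema.replication_group_members;
```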
4. Deployment steps
1) Create the replication account
(The account can be created before the server joins the group; in that case binary logging does not need to be disabled.)
SET SQL_LOG_BIN=0; (optional, see above)
CREATE USER 'repli'@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repli'@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN=1; (optional, see above)
CHANGE MASTER TO MASTER_USER='repli', MASTER_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';
2) Install the Group Replication plugin
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
3) Check the installation status and set up the whitelist
SHOW PLUGINS;
SHOW GLOBAL VARIABLES LIKE 'group_replication_ip_whitelist';
SET GLOBAL group_replication_ip_whitelist="188.102.17.179,188.102.17.180,188.102.4.131";
(After this succeeds, add the setting to the configuration file to make it permanent.)
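As a sketch, the corresponding permanent setting is a single line under [mysqld] in my.cnf on each member (same value as the SET GLOBAL above):

```ini
loose-group_replication_ip_whitelist="188.102.17.179,188.102.17.180,188.102.4.131"
```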
4) Start group replication
SET GLOBAL group_replication_bootstrap_group=ON; (execute on the bootstrap node only; the other nodes must not run this)
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF; (again, bootstrap node only)
To view the cluster members:
SELECT * FROM performance_schema.replication_group_members;
To view transactional replication information:
SELECT * FROM performance_schema.replication_connection_status\G
5. Test
Primary node:
CREATE DATABASE test;
USE test;
CREATE TABLE t1 (c1 INT PRIMARY KEY, c2 TEXT NOT NULL);
INSERT INTO t1 VALUES (1, 'Luis');
SELECT * FROM t1;
SHOW BINLOG EVENTS;
Secondary node:
SHOW DATABASES;
USE test;
SHOW TABLES;
SELECT * FROM t1;