Deploying a MariaDB Galera Cluster on CentOS 7.4
MariaDB Galera Cluster installation:
Operating System: CentOS 7.4
Number of cluster nodes: 3
Host information: 192.168.153.142 node1 (selinux disabled, firewalld disabled)
192.168.153.143 node2 (selinux disabled, firewalld disabled)
192.168.153.144 node3 (selinux disabled, firewalld disabled)
Procedure
1. Host name resolution between the nodes (must be configured on all three nodes):
vim /etc/hosts
192.168.153.142 node1
192.168.153.143 node2
192.168.153.144 node3
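To confirm that resolution works, a quick check from any node (getent reads /etc/hosts; the output shown is illustrative):
getent hosts node2
# 192.168.153.143   node2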
2. Install the software packages
Method 1: yum installation (yum install -y MariaDB-server MariaDB-client galera)
Configure the yum installation source and the MariaDB Galera installation source.
Mount the ISO as a local yum source.
Set up the MariaDB yum source (required on all nodes).
Modify the yum source file:
vi /etc/yum.repos.d/mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.3.5/centos74-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
enabled = 0
When installing galera, you also need to resolve its dependency boost-program-options.x86_64 (install it directly from the local yum source).
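Note that the repository above is defined with enabled = 0, so it stays disabled by default; one way to install from it explicitly (a sketch, assuming the [mariadb] repo id above):
yum install -y --enablerepo=mariadb MariaDB-server MariaDB-client galera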
Method 2: rpm package installation (perform on all three nodes)
Download the rpm packages from the Internet: galera-25.3.23-1.rhel7.el7.centos.x86_64.rpm
MariaDB-10.3.5-centos74-x86_64-client.rpm
MariaDB-10.3.5-centos74-x86_64-compat.rpm
MariaDB-10.3.5-centos74-x86_64-common.rpm
MariaDB-10.3.5-centos74-x86_64-server.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-compat.rpm --nodeps
rpm -ivh MariaDB-10.3.5-centos74-x86_64-common.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-client.rpm
yum install -y boost-program-options.x86_64 (resolves the dependency for installing galera)
rpm -ivh galera-25.3.23-1.rhel7.el7.centos.x86_64.rpm
rpm -ivh MariaDB-10.3.5-centos74-x86_64-server.rpm
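To confirm that all five packages landed, a quick check (output varies by version; shown only as a sketch):
rpm -qa | grep -iE 'mariadb|galera'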
3. MariaDB initialization (must be executed on all three nodes)
After the installation is complete, initialize MariaDB and set the root password:
systemctl start mariadb
mysql_secure_installation (set the MySQL root password as prompted)
systemctl stop mariadb
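At this point mariadb should be stopped on every node before Galera is configured; a quick confirmation (illustrative):
systemctl is-active mariadb
# inactive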
4. Configure Galera
Edit the configuration file server.cnf on the master node:
vim /etc/my.cnf.d/server.cnf
[galera]
wsrep_on = ON
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address = "gcomm://192.168.153.142,192.168.153.143,192.168.153.144"
wsrep_node_name = node1
wsrep_node_address = 192.168.153.142
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_slave_threads = 1
innodb_flush_log_at_trx_commit = 0
innodb_buffer_pool_size = 120M
wsrep_sst_method = rsync
wsrep_causal_reads = ON
Copy this file to node2 and node3, making sure to change wsrep_node_name and wsrep_node_address to the hostname and IP address of the corresponding node, as in the sketch below.
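For example, on node2 the only lines that differ from the node1 file shown above are:
wsrep_node_name = node2
wsrep_node_address = 192.168.153.143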
5. Start the cluster service:
Bootstrap the MariaDB Galera Cluster service on the first node:
[root@node1 ~]# /bin/galera_new_cluster
On the remaining two nodes, start the service with:
[root@node2 ~]# systemctl start mariadb
View the cluster status (the cluster service listens on ports 4567 and 3306):
[root@node1 ~]# netstat -tulpn | grep -e 4567 -e 3306
tcp        0      0 0.0.0.0:4567       0.0.0.0:*       LISTEN      3557/mysqld
tcp6       0      0 :::3306            :::*            LISTEN      3557/mysqld
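On a minimal install without net-tools, ss reports the same listening sockets (a sketch):
ss -tlnp | grep -E '4567|3306'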
6. Verify the cluster status:
Run the following command on node1:
[root@node1 ~]# mysql -uroot -p    # enter the database
Connect to MariaDB and check whether the galera plug-in is enabled:
MariaDB [(none)]> show status like "wsrep_ready";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready   | ON    |
+---------------+-------+
1 row in set (0.004 sec)
Current number of machines in the cluster:
MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)
View the cluster status:
MariaDB [(none)]> show status like "wsrep%";
+------------------------------+----------------------------------------------------------------+
| Variable_name                | Value                                                          |
+------------------------------+----------------------------------------------------------------+
| wsrep_apply_oooe             | 0.000000                                                       |
| wsrep_apply_oool             | 0.000000                                                       |
| wsrep_apply_window           | 1.000000                                                       |
| wsrep_causal_reads           | 14                                                             |
| wsrep_cert_deps_distance     | 1.200000                                                       |
| wsrep_cert_index_size        | 3                                                              |
| wsrep_cert_interval          | 0.000000                                                       |
| wsrep_cluster_conf_id        | 22                                                             |
| wsrep_cluster_size           | 3                                                              | # number of cluster members
| wsrep_cluster_state_uuid     | b8ecf355-233a-11e8-825e-bb38179b0eb4                           | # unique cluster UUID
| wsrep_cluster_status         | Primary                                                        | # node is in the primary component
| wsrep_commit_oooe            | 0.000000                                                       |
| wsrep_commit_oool            | 0.000000                                                       |
| wsrep_commit_window          | 1.000000                                                       |
| wsrep_connected              | ON                                                             | # whether the node is connected to the cluster
| wsrep_desync_count           | 0                                                              |
| wsrep_evs_delayed            |                                                                |
| wsrep_evs_evict_list         |                                                                |
| wsrep_evs_repl_latency       | 0/0/0/0/0                                                      |
| wsrep_evs_state              | OPERATIONAL                                                    |
| wsrep_flow_control_paused    | 0.000000                                                       |
| wsrep_flow_control_paused_ns | 0                                                              |
| wsrep_flow_control_recv      | 0                                                              |
| wsrep_flow_control_sent      | 0                                                              |
| wsrep_gcomm_uuid             | 0eba3aff-2341-11e8-b45a-f277db2349d5                           |
| wsrep_incoming_addresses     | 192.168.153.142:3306,192.168.153.143:3306,192.168.153.144:3306 | # addresses of the connected nodes
| wsrep_last_committed         | 9                                                              | # last committed transaction sequence number
| wsrep_local_bf_aborts        | 0                                                              | # local transactions aborted by replication conflicts
| wsrep_local_cached_downto    | 5                                                              |
| wsrep_local_cert_failures    | 0                                                              | # local certification failures
| wsrep_local_commits          | 4                                                              | # transactions committed locally
| wsrep_local_index            | 0                                                              |
| wsrep_local_recv_queue       | 0                                                              |
| wsrep_local_recv_queue_avg   | 0.057143                                                       |
| wsrep_local_recv_queue_max   | 2                                                              |
| wsrep_local_recv_queue_min   | 0                                                              |
| wsrep_local_replays          | 0                                                              |
| wsrep_local_send_queue       | 0                                                              | # local send queue length
| wsrep_local_send_queue_avg   | 0.000000                                                       | # average send queue length
| wsrep_local_send_queue_max   | 1                                                              |
| wsrep_local_send_queue_min   | 0                                                              |
| wsrep_local_state            | 4                                                              |
| wsrep_local_state_comment    | Synced                                                         |
| wsrep_local_state_uuid       | b8ecf355-233a-11e8-825e-bb38179b0eb4                           | # cluster state UUID
| wsrep_protocol_version       | 8                                                              |
| wsrep_provider_name          | Galera                                                         |
| wsrep_provider_vendor        | Codership Oy <info@codership.com>                              |
| wsrep_provider_version       | 25.3.23 (r3789)                                                |
| wsrep_ready                  | ON                                                             | # whether the wsrep provider is ready
| wsrep_received               | 35                                                             | # number of write-sets received
| wsrep_received_bytes         | 5050                                                           |
| wsrep_repl_data_bytes        | 1022                                                           |
| wsrep_repl_keys              | 14                                                             |
| wsrep_repl_keys_bytes        | 232                                                            |
| wsrep_repl_other_bytes       | 0                                                              |
| wsrep_replicated             | 5                                                              | # number of write-sets replicated
| wsrep_replicated_bytes       | 1600                                                           | # bytes replicated
| wsrep_thread_count           | 2                                                              |
+------------------------------+----------------------------------------------------------------+
58 rows in set (0.003 sec)
View the connected hosts:
MariaDB [(none)]> show status like "wsrep_incoming_addresses";
+--------------------------+-----------------------------------------------------------------+
| Variable_name            | Value                                                           |
+--------------------------+-----------------------------------------------------------------+
| wsrep_incoming_addresses | 192.168.153.142:3306,192.168.153.143:3306,192.168.153.144:3306 |
+--------------------------+-----------------------------------------------------------------+
1 row in set (0.002 sec)
7. Test whether the cluster's MariaDB data is synchronized.
MariaDB [(none)]> create database lizk;
Query OK, 1 row affected (0.010 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| china              |
| hello              |
| hi                 |
| information_schema |
| lizk               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
8 rows in set (0.001 sec)
On the other two nodes you can see that the lizk database has been synchronized.
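A one-line check from the shell on node2 (illustrative):
mysql -uroot -p -e 'show databases like "lizk";'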
8. Simulated split-brain handling
The following simulates the split-brain that occurs when two nodes are disconnected by packet loss during network jitter. Run the following commands on 192.168.153.143 and 192.168.153.144:
iptables -A INPUT -p tcp --sport 4567 -j DROP
iptables -A INPUT -p tcp --dport 4567 -j DROP
These commands block port 4567, which wsrep uses for its synchronous replication traffic.
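To confirm the rules are in place (and to find them again for precise removal later), something like:
iptables -L INPUT -n --line-numbers | grep 4567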
On the 192.168.153.142 node, view:
MariaDB [(none)]> show status like "ws%";
+------------------------------+--------------------------------------+
| Variable_name                | Value                                |
+------------------------------+--------------------------------------+
| wsrep_apply_oooe             | 0.000000                             |
| wsrep_apply_oool             | 0.000000                             |
| wsrep_apply_window           | 1.000000                             |
| wsrep_causal_reads           | 16                                   |
| wsrep_cert_deps_distance     | 1.125000                             |
| wsrep_cert_index_size        | 3                                    |
| wsrep_cert_interval          | 0.000000                             |
| wsrep_cluster_conf_id        | 18446744073709551615                 |
| wsrep_cluster_size           | 1                                    |
| wsrep_cluster_state_uuid     | b8ecf355-233a-11e8-825e-bb38179b0eb4 |
| wsrep_cluster_status         | non-Primary                          |
Split-brain has now occurred, and the cluster cannot execute any commands.
To solve this problem, run:
set global wsrep_provider_options="pc.bootstrap=true";
This command forcibly restores the split-brain node to a working primary component.
Verification:
MariaDB [(none)]> set global wsrep_provider_options="pc.bootstrap=true";
Query OK, 0 rows affected (0.015 sec)
MariaDB [(none)]> select @@wsrep_node_name;
+-------------------+
| @@wsrep_node_name |
+-------------------+
| node1             |
+-------------------+
1 row in set (0.478 sec)
Finally, recover the nodes 192.168.153.143 and 192.168.153.144 by clearing the iptables rules (flushing everything is acceptable here only because this is a test environment; in production, delete just the rules added above):
[root@node3 mysql]# iptables -F
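In production, removing only the two rules added earlier would look like this (run on each of the two blocked nodes):
iptables -D INPUT -p tcp --sport 4567 -j DROP
iptables -D INPUT -p tcp --dport 4567 -j DROP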
Verify after recovery:
MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)
9. Check whether the two nodes can resynchronize with the cluster after their service is restarted following a fault.
Stop mariadb on 192.168.153.143 and 192.168.153.144:
[root@node2 mysql]# systemctl stop mariadb
Insert data on 192.168.153.142, then view it:
MariaDB [test]> select * from test1;
+------+
| id   |
+------+
|    2 |
|    2 |
|    1 |
|    3 |
+------+
4 rows in set (0.007 sec)
Now restart mariadb on the other two nodes and check the data: it is consistent with the data on the first node.
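A quick consistency check on node2 after the restart (illustrative; it should return the four rows shown above):
systemctl start mariadb
mysql -uroot -p -e 'select * from test.test1;'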
10. Exception handling: the data center suddenly loses power and all Galera hosts shut down abnormally; after power is restored, the Galera cluster service cannot start normally. What should you do?
Step 1: Start the mariadb service on the Galera cluster's master (bootstrap) host.
Step 2: Start the mariadb service on the Galera cluster's member hosts.
Exception handling: the mysql service cannot be started on either the master host or the member hosts of the Galera cluster. How can this be solved?
Solution 1:
Step 1: Delete the /var/lib/mysql/grastate.dat state file on the Galera master host, then start the service with /bin/galera_new_cluster. Startup is normal; log in and check the wsrep status.
Step 2: Delete the /var/lib/mysql/grastate.dat state file on each Galera member host, then restart the service with systemctl restart mariadb. Startup is normal; log in and check the wsrep status.
Solution 2:
Step 1: In the /var/lib/mysql/grastate.dat state file on the Galera master host, change the value 0 (the safe_to_bootstrap field; see the sketch below) to 1, then start the service with /bin/galera_new_cluster. Startup is normal; log in and check the wsrep status.
Step 2: In the /var/lib/mysql/grastate.dat state file on each Galera member host, change the value 0 to 1, then restart the service with systemctl restart mariadb. Startup is normal; log in and check the wsrep status.
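For reference, a grastate.dat file typically looks like the sketch below (the uuid and seqno values here are illustrative); the 0/1 value mentioned above is the safe_to_bootstrap field:
# GALERA saved state
version: 2.1
uuid: b8ecf355-233a-11e8-825e-bb38179b0eb4
seqno: -1
safe_to_bootstrap: 1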