MHA (Master High Availability) is an open-source MySQL high-availability program that automates master failover for MySQL master/slave replication architectures. When MHA detects that the master node has failed, it promotes the slave node with the most recent data to be the new master, and during the switch it avoids consistency problems by obtaining additional log information from the slave nodes. MHA also provides an online master switching function, i.e. switching the master/slave roles on demand.
The MHA service has two roles: the management node (MHA Manager) and the data nodes (MHA Node).
MHA Manager: typically deployed on a standalone server to manage multiple master/slave clusters; each master/slave cluster is called an application.
MHA Node: runs on each MySQL server (master/slave/manager); it speeds up failover with scripts that can parse and purge logs.
Galera Cluster
Galera Cluster implements replication globally based on the WSREP protocol: any node can handle both read and write operations, replication happens without delay and without data loss, and when one server goes down a surviving node automatically takes over.
Part I: MySQL high availability based on Galera Cluster
Preparation environment: CentOS 7
Node 1: ip 172.18.42.200
Node 2: ip 172.18.42.201
Node 3: ip 172.18.42.202
1. Deploy Node 1
(1) Install the Galera service
[root@node1 ~]# yum install mariadb-galera-server -y
(2) Edit its configuration file
[root@node1 ~]# rpm -ql galera    ## view the package's files
/usr/lib64/galera/libgalera_smm.so
[root@node1 ~]# vim /etc/my.cnf.d/server.cnf
[galera]
## Mandatory settings
wsrep_provider=/usr/lib64/galera/libgalera_smm.so    ## the wsrep provider, generally a plug-in; the path may differ between installed versions
wsrep_cluster_address="gcomm://172.18.42.200,172.18.42.201,172.18.42.202"    ## lists every node of the Galera cluster
binlog_format=row    ## binary log format; row is the default and changing it is not recommended
default_storage_engine=innodb    ## storage engine to use
innodb_autoinc_lock_mode=2    ## auto-increment lock mode
bind-address=0.0.0.0    ## address wsrep listens on while working
wsrep_cluster_name='mycluster'    ## name of the Galera cluster
(3) The cluster must be initialized at first boot
[root@node1 ~]# /etc/rc.d/init.d/mysql start --wsrep-new-cluster    ## start the mysql service on one node to bootstrap the cluster
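To confirm the bootstrap worked, you can query Galera's standard wsrep status variables; a minimal check (the value should read 1 at this point and grow to 3 as the other nodes join):
[root@node1 ~]# mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"    ## number of nodes currently in the cluster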
2. Deploy Node 2
(1) Install the mariadb-galera-server package
[root@node2 ~]# yum install mariadb-galera-server -y
(2) Copy the same [galera] settings as node 1 into /etc/my.cnf.d/server.cnf, then start the service
[root@node2 ~]# service mysql start
Starting MySQL.... SST in progress, setting sleep higher. SUCCESS!
3. Deploy Node 3
(1) Install Mariadb-galera-server service
[email protected] ~]# Yum install mariadb-galera-server-y
(2) Copy the same [galera] settings as node 1 into /etc/my.cnf.d/server.cnf, then start the service
[root@node3 ~]# service mysql start
Starting MySQL.... SST in progress, setting sleep higher. SUCCESS!
4. Connect to the MySQL service on all three nodes, then create a database on one node and check whether the other two replicate it
[root@node1 ~]# mysql    ## node 1 connects to the mysql service
MariaDB [(none)]> CREATE DATABASE MaGeRepo;    ## create the database "MaGeRepo" on node 1 and view it
MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| MaGeRepo           |
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@node2 ~]# mysql    ## node 2 connects to the mysql service
MariaDB [(none)]> SHOW DATABASES;    ## view the databases; the same list, including MaGeRepo, appears
[root@node3 ~]# mysql    ## node 3 connects to the mysql service
MariaDB [(none)]> SHOW DATABASES;    ## the same list appears here as well
## the database has been synchronized to every node
5. Take node 2 down, then create a table "MaGe" in the database "MaGeRepo"
MariaDB [MaGeRepo]> CREATE TABLE MaGe (ID INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY, Name CHAR(10));    ## create the table "MaGe" on node 1
MariaDB [MaGeRepo]> INSERT INTO MaGe (Name) VALUES ('MaGe'),('Lweim');    ## insert "MaGe" and "Lweim" on node 1
MariaDB [MaGeRepo]> INSERT INTO MaGe (Name) VALUES ('WTC'),('wzx');    ## insert "WTC" and "wzx" on node 3
6. Start the MySQL service on node 2 again and view its data
[root@node2 ~]# service mysql start    ## start node 2
Starting MySQL.... SST in progress, setting sleep higher. SUCCESS!
[root@node2 ~]# mysql    ## connect to the mysql service
MariaDB [MaGeRepo]> SELECT * FROM MaGe;    ## view the data in the table
+----+-------+
| ID | Name  |
+----+-------+
|  1 | MaGe  |
|  3 | Lweim |
|  4 | WTC   |
|  6 | WZX   |
+----+-------+
This shows that when one of the servers goes down, the data that changed in the meantime is synchronized to it as soon as it comes back online. Note, however, that the IDs do not form a continuous auto-increment sequence, because Galera staggers the auto-increment offset across nodes; the workarounds are:
A: Set up a global ID generator to hand out IDs, which resolves the problem of out-of-order IDs when inserting data
B: Specify the ID number manually so that it is not auto-generated (see the sketch below)
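A minimal sketch of option B, continuing the table above (the inserted names and IDs are placeholders); the second statement shows the per-node stagger, which Galera manages through the wsrep_auto_increment_control setting:
MariaDB [MaGeRepo]> INSERT INTO MaGe (ID, Name) VALUES (7, 'tom'),(8, 'jerry');    ## IDs chosen by the application instead of auto-generated
MariaDB [MaGeRepo]> SHOW VARIABLES LIKE 'auto_increment%';    ## auto_increment_increment/auto_increment_offset differ per node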
Part II: MySQL high availability based on MHA
Preparation environment: CentOS 7
MHA Manager node: ip 172.18.42.200
MHA Node, master: ip 172.18.42.201
MHA Node, slave1: ip 172.18.42.202
MHA Node, slave2: ip 172.18.42.203
Prerequisite: for MHA-based MySQL high availability, the nodes must be able to communicate with each other using SSH key authentication
[root@manager ~]# ssh-keygen -t rsa -P ''    ## generate an RSA key pair on the Manager node with an empty passphrase
[root@manager ~]# cat .ssh/id_rsa.pub >> .ssh/authorized_keys    ## first make sure the local node can connect to itself
[root@manager ~]# chmod 600 .ssh/id_rsa .ssh/authorized_keys    ## tighten the key permissions
[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@172.18.42.201:/root/.ssh/    ## the Manager node sends the private and public key to every node so that all nodes can authenticate to each other (this first copy still needs a password; later connections will not)
[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@172.18.42.202:/root/.ssh/
[root@manager ~]# scp -p .ssh/id_rsa .ssh/authorized_keys root@172.18.42.203:/root/.ssh/
[root@manager ~]# ssh 172.18.42.201 'ifconfig'    ## commands like this can be used between nodes to test connectivity
Are you sure you want to continue connecting (yes/no)? yes    ## the first connection must confirm the host key
root@172.18.42.201's password:
1. Deploy the MHA Manager node
(1) Install mha4mysql-manager and mha4mysql-node
[root@manager ~]# yum install mha4mysql-node-0.56-0.el6.noarch.rpm mha4mysql-manager-0.56-0.el6.noarch.rpm -y
(2) Create the configuration file and edit it
[root@manager ~]# vim /etc/masterha/app1.cnf
[server default]
user=wtc    ## management account able to connect remotely to every mysql node; it must be created on the mysql nodes
password=wtc
manager_workdir=/data/masterha/app1    ## working directory of the manager node
manager_log=/data/masterha/app1/manager.log    ## log file path of the manager node
remote_workdir=/data/masterha/app1    ## working directory each mysql node provides for the manager node; created automatically
ssh_user=root    ## manage remotely as root; with key-based authentication no password is needed, otherwise a password must be specified
repl_user=repluser    ## account with replication privileges
repl_password=replpass
ping_interval=1    ## how often to probe whether the Master node is online
[server1]    ## define the mysql nodes
hostname=172.18.42.201
candidate_master=1    ## whether this node may become the new master after the master goes down
#no_master=1    ## if set, this node never becomes the new master after the master goes down
[server2]
hostname=172.18.42.202
candidate_master=1
[server3]
hostname=172.18.42.203
candidate_master=1
2. Deploy Master Node
(1) Install the mariadb-server and mha4mysql-node packages
[root@master ~]# yum install mariadb-server mha4mysql-node-0.56-0.el6.noarch.rpm -y
(2) Edit its configuration file
[root@master ~]# vim /etc/my.cnf
[mysqld]
innodb_file_per_table = ON
skip_name_resolve = ON
log_bin=mysql-bin
relay-log=relay-log
server-id=1
[root@master ~]# systemctl start mariadb.service
MariaDB [(none)]> SHOW MASTER STATUS;
mysql-bin.000003  245
MariaDB [(none)]> GRANT REPLICATION SLAVE,REPLICATION CLIENT ON *.* TO 'repluser'@'172.18.%.%' IDENTIFIED BY 'replpass';    ## create the user with replication privileges
3. Deploy SLAVE1
(1) Install the mariadb-server and mha4mysql-node packages
[root@slave1 ~]# yum install mariadb-server mha4mysql-node-0.56-0.el6.noarch.rpm -y
(2) Edit its configuration file
[root@slave1 ~]# vim /etc/my.cnf
[mysqld]
innodb_file_per_table = ON
skip_name_resolve = ON
log_bin=mysql-bin
relay-log=relay-log
server-id=2
read_only=1    ## MHA uses this to tell which node is the mysql master
relay_log_purge=0
[root@slave1 ~]# systemctl start mariadb.service
MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='172.18.42.201', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=245;    ## connect to the Master node
MariaDB [(none)]> START SLAVE;
MariaDB [(none)]> SHOW SLAVE STATUS\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
4. Deploy Slave2
(1) Install the mariadb-server and mha4mysql-node packages
[root@slave2 ~]# yum install mariadb-server mha4mysql-node-0.56-0.el6.noarch.rpm -y
(2) Edit its configuration file
[root@slave2 ~]# vim /etc/my.cnf
[mysqld]
innodb_file_per_table = ON
skip_name_resolve = ON
log_bin=mysql-bin
relay-log=relay-log
server-id=3
read_only=1
relay_log_purge=0
[root@slave2 ~]# systemctl start mariadb.service
MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='172.18.42.201', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=245;    ## connect to the Master node
MariaDB [(none)]> START SLAVE;
MariaDB [(none)]> SHOW SLAVE STATUS\G
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
5. Create the administrative user account on the master node, then run the connectivity checks on the manager node
MariaDB [(none)]> GRANT ALL ON *.* TO 'wtc'@'172.18.%.%' IDENTIFIED BY 'wtc';    ## since master/slave replication is in place, creating the user on the Master node also creates it on the slave nodes
[root@manager ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf    ## test whether SSH communication works between all nodes
Wed Apr 20 10:17:40 2016 - [info] All SSH connection tests passed successfully.
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf    ## check whether master/slave replication between the nodes is OK
172.18.42.201(172.18.42.201:3306) (current master)    ## 201 is the master node; 202 and 203 are slave nodes
 +--172.18.42.202(172.18.42.202:3306)
 +--172.18.42.203(172.18.42.203:3306)
MySQL Replication Health is OK.
6. Start the Manager service, then take the master node down to see whether the failover happens
[root@manager ~]# masterha_manager --conf=/etc/masterha/app1.cnf    ## start the Manager service
Wed Apr 20 10:27:35 2016 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Apr 20 10:27:35 2016 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Apr 20 10:27:35 2016 - [info] Reading server configuration from /etc/masterha/app1.cnf..
[root@master ~]# systemctl stop mariadb.service    ## shut down the MariaDB service on the master node
MariaDB [(none)]> SHOW SLAVE STATUS\G    ## view the replication thread state on the Slave2 node
    Master_Host: 172.18.42.202    ## the master role has been transferred to the Slave1 node
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
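The failover steps are also written to the manager log configured in app1.cnf, which is worth checking after any switch:
[root@manager ~]# tail /data/masterha/app1/manager.log    ## records which slave was promoted and whether the failover completed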
7. Now bring the old master back online and make it a slave node
[root@master ~]# systemctl start mariadb.service    ## start the MariaDB service on the old master node
MariaDB [(none)]> CHANGE MASTER TO MASTER_HOST='172.18.42.202', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=245;    ## "Master_Host" must now point at Slave1's IP; the binary log file and position should come from a backup made before rejoining master/slave replication
MariaDB [(none)]> START SLAVE;    ## start the replication threads
MariaDB [(none)]> SHOW SLAVE STATUS\G
    Master_Host: 172.18.42.202    ## the IP is now Slave1's
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
[root@manager ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf    ## check once more on the Manager node whether master/slave replication is OK
172.18.42.202(172.18.42.202:3306) (current master)    ## the master node is now the former Slave1
 +--172.18.42.201(172.18.42.201:3306)
 +--172.18.42.203(172.18.42.203:3306)
MySQL Replication Health is OK.
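The online switching function mentioned at the beginning works along the same lines, without waiting for a failure. A hedged sketch (flag behavior may vary slightly by mha4mysql-manager version) that would move the master role back to 172.18.42.201 while everything is alive:
[root@manager ~]# masterha_stop --conf=/etc/masterha/app1.cnf    ## monitoring must be stopped before a manual switch
[root@manager ~]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=172.18.42.201 --orig_master_is_new_slave    ## demote the current master to a slave of the new one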
Summary of issues:
1. masterha_manager works in the foreground and must not be disconnected, and once a master/slave switch happens it stops working and has to be turned back on manually, so a script is needed to make it self-monitoring; a minimal sketch follows.
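A minimal self-monitoring sketch, assuming the config path used in this article (the script name is arbitrary; a systemd unit or daemontools would do the same job more robustly):
[root@manager ~]# vim /root/mha_watch.sh
#!/bin/bash
## restart masterha_manager whenever it exits, e.g. after completing a failover
while true; do
    masterha_manager --conf=/etc/masterha/app1.cnf
    sleep 30    ## pause before restarting so a broken config does not busy-loop
done
[root@manager ~]# nohup bash /root/mha_watch.sh &>/data/masterha/app1/watch.log &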
2. When using MHA, starting the manager service may fail; one fix is to first create the manager node's working and log directory "/data/masterha/app1" (see below).
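For example, using the path from app1.cnf:
[root@manager ~]# mkdir -p /data/masterha/app1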
3. When writing the configuration file of each MySQL node, both master and slaves need the relay log and the binary log enabled:
The master node enables the relay log as well as the binary log, because after a failover the old master may come back as a slave.
The slave nodes enable the binary log as well as the relay log, because after the master goes down, any one of the slave nodes can be promoted to be the new master.
This article is from the "WTC" blog; please keep this source: http://wangtianci.blog.51cto.com/11265133/1790229