MySQL's high-availability architecture: MHA


Mature MySQL high-availability options currently include Heartbeat+DRBD, MHA, and plain MySQL replication. Heartbeat+DRBD scales poorly: the primary server handles both reads and writes while the standby serves no reads at all, so it suits environments with modest data growth but strict consistency requirements, such as banking and finance. This article focuses on the MHA high-availability architecture.

MHA is a mature high-availability tool for failover and slave promotion. When a MySQL master fails, MHA can complete the database failover automatically within 0-30 seconds, and during the switch it preserves data consistency as far as possible, achieving high availability in a practical sense. MHA is built on top of MySQL master-slave replication, so first review the two most common replication modes:

    • Asynchronous replication: the master acknowledges a commit to the client as soon as the transaction is written to its binary log. The slaves therefore lag the master by some amount, which creates a hidden risk: if the master commits a transaction, writes it to the binary log, and then crashes before any slave has fetched that event, the master and slave data become inconsistent.

    • Semi-synchronous replication: the master does not acknowledge each commit to the client immediately; instead it waits until at least one slave has received the binary log event and written it to its relay log before returning success.
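For concreteness (this is separate from the MHA setup below): MySQL 5.5+ ships semi-synchronous replication as a plugin, and enabling it looks roughly like the following sketch. The 1000 ms timeout is illustrative; when it expires, the master falls back to asynchronous replication.

```sql
-- On the master (plugin ships with MySQL 5.5+):
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms; fall back to async on timeout

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;  -- reconnect so semi-sync takes effect
```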

MHA components:

    • MHA Manager: the management node. It can be deployed on a dedicated server managing several master-slave clusters, or on one of the slaves.

    • MHA Node: the data-node component, which runs on every MySQL server.

MHA Manager periodically probes the master node in the cluster. When the master fails, it automatically promotes the slave holding the most recent data to be the new master, then repoints all other slaves at it. The entire failover process is transparent to the application.
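The probing just described can be sketched as a small loop. This is a toy model, not MHA's actual code; the `ping` callback, the three-failure threshold, and every name below are illustrative.

```python
# Toy sketch of a manager's health-check loop (illustrative, not MHA code):
# keep pinging the master; after max_failures consecutive failed pings,
# declare the master dead so failover can begin.

def monitor(ping, max_failures=3):
    failures = 0
    while True:
        if ping():
            failures = 0           # master answered; reset the streak
        else:
            failures += 1          # one more consecutive miss
            if failures >= max_failures:
                return "failover"  # master considered dead

# Simulate a master that answers twice and then goes down for good.
responses = iter([True, True, False, False, False])
print(monitor(lambda: next(responses)))  # -> failover
```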

MHA Working principle:

1) Save the binary log events from the crashed master

2) Identify the slave with the most recent updates

3) Apply the differential relay log to the other slaves

4) Apply the binary log events saved from the master

5) Promote one slave to be the new master

6) Point the other slaves at the new master and resume replication
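Step 2 above can be sketched in a few lines: each slave reports `(Master_Log_File, Read_Master_Log_Pos)` in `SHOW SLAVE STATUS`, and comparing those pairs picks the slave that read furthest into the dead master's binary log. This is a toy model with made-up hosts and positions, not MHA's code.

```python
# Toy sketch of step 2 (illustrative, not MHA code). Binlog file names
# sort lexicographically (000001 < 000002 < ...), so comparing the
# (file, position) tuples orders slaves by how much of the master's
# binary log they have read.

slaves = {
    "192.168.154.130": ("master-bin.000003", 1200),
    "192.168.154.154": ("master-bin.000003", 954),
}

def latest_slave(slaves):
    # The slave whose (file, position) tuple compares greatest is the
    # most up to date and becomes the promotion candidate.
    return max(slaves, key=lambda host: slaves[host])

print(latest_slave(slaves))  # -> 192.168.154.130
```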


Example: an MHA high-availability deployment (on an intranet the firewall can simply be turned off; otherwise open the required ports)

manager:node1:192.168.154.128

master:node2:192.168.154.156

slave:node3:192.168.154.130

slave:node4:192.168.154.154


1. Configure master-slave replication:

1) Master node (node2):

[root@node2 ~]# vim /etc/my.cnf

innodb_file_per_table=1 # use a separate tablespace per table
skip_name_resolve # disable DNS host-name resolution
log-bin=master-bin
relay-log=relay-bin
server-id=1

[root@node2 ~]# service mysqld restart

View the binary log position:

mysql> show master status;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-bin.000001 |      106 |              |                  |
+-------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Create the replication user:

mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'192.168.154.%' IDENTIFIED BY 'slave';

Query OK, 0 rows affected (0.06 sec)

mysql> FLUSH PRIVILEGES;

Query OK, 0 rows affected (0.00 sec)


2) Slave nodes (node3 and node4):

[root@node3 ~]# vim /etc/my.cnf

innodb_file_per_table=1
skip_name_resolve
log-bin=slave-bin
relay-log=relay-bin
server_id=2
read_only=1
relay_log_purge=0

[root@node3 ~]# service mysqld restart


[root@node4 ~]# vim /etc/my.cnf

innodb_file_per_table=1
skip_name_resolve
log-bin=slave-bin
relay-log=relay-bin
server_id=3
read_only=1 # enable read-only mode
relay_log_purge=0 # do not auto-purge relay logs (MHA needs them)

[root@node4 ~]# service mysqld restart
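Because relay_log_purge=0 stops MySQL from cleaning up relay logs automatically, they must be purged out of band. MHA Node ships a purge_relay_logs tool for exactly this; a common approach is a cron entry on each slave. The schedule, tool path, credentials, and log path below are illustrative.

```shell
# /etc/cron.d entry (illustrative): purge relay logs nightly at 04:00
# without re-enabling MySQL's automatic relay_log_purge.
0 4 * * * root /usr/bin/purge_relay_logs --user=zwj --password=zwj \
    --disable_relay_log_purge >> /var/log/purge_relay_logs.log 2>&1
```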


Point each slave at the master and start replication:

mysql> CHANGE MASTER TO MASTER_HOST='192.168.154.156', MASTER_USER='slave', MASTER_PASSWORD='slave', MASTER_LOG_FILE='master-bin.000001', MASTER_LOG_POS=106;

Query OK, 0 rows affected (0.03 sec)

mysql> START SLAVE;

Query OK, 0 rows affected (0.01 sec)


mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.154.156
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master-bin.000001
          Read_Master_Log_Pos: 354
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 500
        Relay_Master_Log_File: master-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 354
              Relay_Log_Space: 649
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)


3) On the master node, create a user with administrative privileges (MHA uses it to manage the MySQL nodes):

mysql> GRANT ALL ON *.* TO 'zwj'@'192.168.154.%' IDENTIFIED BY 'zwj';

Query OK, 0 rows affected (0.00 sec)


2. Configure SSH key-based login between the cluster nodes

On node1:

[root@node1 ~]# ssh-keygen -t rsa

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.156

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.130

[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.154

[root@node1 ~]# ssh 192.168.154.154 'ifconfig' # verify

eth0      Link encap:Ethernet  HWaddr 00:0C:29:67:65:ED
          inet addr:192.168.154.154  Bcast:192.168.154.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe67:65ed/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:26253 errors:0 dropped:0 overruns:0 frame:0
          TX packets:42416 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23453164 (22.3 MiB)  TX bytes:2514457 (2.3 MiB)
          Interrupt:19 Base address:0x2024


On node2:

[root@node2 ~]# ssh-keygen -t rsa

[root@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.128

[root@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.130

[root@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.154


On node3:

[root@node3 ~]# ssh-keygen -t rsa

[root@node3 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.128

[root@node3 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.156

[root@node3 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.154


On node4:

[root@node4 ~]# ssh-keygen -t rsa

[root@node4 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.128

[root@node4 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.156

[root@node4 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.154.130


3. Install MHA Manager on node1 (the manager host needs the Node package as well):

[root@node1 ~]# yum install perl-DBD-MySQL -y

[root@node1 ~]# tar -zxf mha4mysql-node-0.56.tar.gz

[root@node1 ~]# cd mha4mysql-node-0.56

[root@node1 mha4mysql-node-0.56]# perl Makefile.PL

[root@node1 mha4mysql-node-0.56]# make

[root@node1 mha4mysql-node-0.56]# make install


[root@node1 ~]# yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y # Perl modules MHA Manager depends on

[root@node1 ~]# tar -zxf mha4mysql-manager-0.56.tar.gz

[root@node1 ~]# cd mha4mysql-manager-0.56

[root@node1 mha4mysql-manager-0.56]# perl Makefile.PL

[root@node1 mha4mysql-manager-0.56]# make

[root@node1 mha4mysql-manager-0.56]# make install


4. Install MHA Node (on all MySQL servers; node2 shown, repeat on node3 and node4):

[root@node2 ~]# yum install perl-DBD-MySQL -y

[root@node2 ~]# cd mha4mysql-node-0.56/

[root@node2 mha4mysql-node-0.56]# perl Makefile.PL

[root@node2 mha4mysql-node-0.56]# make

[root@node2 mha4mysql-node-0.56]# make install


5. Create the working directory and configure MHA:

[root@node1 ~]# mkdir -pv /etc/masterha

[root@node1 ~]# vim /etc/masterha/appl.cnf

[server default]
user=zwj
password=zwj
manager_workdir=/etc/masterha/appl
manager_log=/etc/masterha/appl/manager.log
remote_workdir=/etc/masterha/appl
ssh_user=root
repl_user=slave
repl_password=slave
ping_interval=1

[server1]
hostname=192.168.154.156

[server2]
hostname=192.168.154.130
candidate_master=1 # preferred candidate to be promoted to master

[server3]
hostname=192.168.154.154
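The replication check later warns that master_ip_failover_script and shutdown_script are not defined. They are optional, but a production setup that uses a virtual IP would add entries like the following to [server default]. The script path is a placeholder for a site-specific VIP-switching script you must write yourself; masterha_secondary_check ships with MHA Manager.

```ini
# Optional [server default] entries (illustrative):
master_ip_failover_script=/usr/local/bin/master_ip_failover
# Double-check master health through other nodes before failing over:
secondary_check_script=masterha_secondary_check -s 192.168.154.130 -s 192.168.154.154
```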


6. Check SSH connectivity:

[root@node1 ~]# masterha_check_ssh --conf=/etc/masterha/appl.cnf

Wed 00:12:58 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed 00:12:58 - [info] Reading application default configuration from /etc/masterha/appl.cnf..
Wed 00:12:58 - [info] Reading server configuration from /etc/masterha/appl.cnf..
Wed 00:12:58 - [info] Starting SSH connection tests..
Wed 00:13:15 - [debug]
Wed 00:12:59 - [debug]  Connecting via SSH from root@192.168.154.154(192.168.154.154:22) to root@192.168.154.156(192.168.154.156:22)..
Wed 00:13:05 - [debug]   ok.
Wed 00:13:05 - [debug]  Connecting via SSH from root@192.168.154.154(192.168.154.154:22) to root@192.168.154.130(192.168.154.130:22)..
Wed 00:13:15 - [debug]   ok.
Wed 00:13:20 - [debug]
Wed 00:12:58 - [debug]  Connecting via SSH from root@192.168.154.130(192.168.154.130:22) to root@192.168.154.156(192.168.154.156:22)..
Wed 00:13:11 - [debug]   ok.
Wed 00:13:11 - [debug]  Connecting via SSH from root@192.168.154.130(192.168.154.130:22) to root@192.168.154.154(192.168.154.154:22)..
Wed 00:13:20 - [debug]   ok.
Wed 00:13:35 - [debug]
Wed 00:12:58 - [debug]  Connecting via SSH from root@192.168.154.156(192.168.154.156:22) to root@192.168.154.130(192.168.154.130:22)..
Wed 00:13:15 - [debug]   ok.
Wed 00:13:15 - [debug]  Connecting via SSH from root@192.168.154.156(192.168.154.156:22) to root@192.168.154.154(192.168.154.154:22)..
Wed 00:13:35 - [debug]   ok.
Wed 00:13:35 - [info] All SSH connection tests passed successfully.


7. Check the whole replication environment:

[root@node1 ~]# masterha_check_repl --conf=/etc/masterha/appl.cnf

...

192.168.154.156 (192.168.154.156:3306) (current master)

+--192.168.154.130 (192.168.154.130:3306)

+--192.168.154.154 (192.168.154.154:3306)


Wed 00:33:36 - [info] Checking replication health on 192.168.154.130..
Wed 00:33:36 - [info]  ok.
Wed 00:33:36 - [info] Checking replication health on 192.168.154.154..
Wed 00:33:36 - [info]  ok.
Wed 00:33:36 - [warning] master_ip_failover_script is not defined.
Wed 00:33:36 - [warning] shutdown_script is not defined.
Wed 00:33:36 - [info] Got exit code 0 (Not master dead).


MySQL Replication Health is OK.


8. Start MHA Manager monitoring:

[root@node1 ~]# nohup masterha_manager --conf=/etc/masterha/appl.cnf > /etc/masterha/appl/manager.log 2>&1 &

[1] 8300

Check MHA Manager status:

[root@node1 ~]# masterha_check_status --conf=/etc/masterha/appl.cnf

appl (pid:8300) is running (0:PING_OK), master:192.168.154.156

Stop MHA Manager monitoring:

[root@node1 ~]# masterha_stop --conf=/etc/masterha/appl.cnf

Stopped appl successfully.

[1]+ Exit 1 nohup masterha_manager --conf=/etc/masterha/appl.cnf > /etc/masterha/appl/manager.log 2>&1


9. Simulate a master outage:

[root@node2 ~]# service mysqld stop

Stopping mysqld: [OK]

Check a slave (node4): the master has changed to 192.168.154.130.

...

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.154.130
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: slave-bin.000003
          Read_Master_Log_Pos: 106
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 251
        Relay_Master_Log_File: slave-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
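One caveat worth noting: after a completed failover, masterha_manager exits, and the crashed master is not rejoined automatically. Once node2 is repaired, it can be reattached as a slave of the new master. The coordinates below are only illustrative; in a real recovery, use the CHANGE MASTER values printed in the manager log.

```sql
-- On the repaired node2 (illustrative coordinates):
CHANGE MASTER TO
    MASTER_HOST='192.168.154.130',
    MASTER_USER='slave',
    MASTER_PASSWORD='slave',
    MASTER_LOG_FILE='slave-bin.000003',
    MASTER_LOG_POS=106;
START SLAVE;
```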



This article is from the blog "10,000 years is too long; seize the day." Please keep this source: http://zengwj1949.blog.51cto.com/10747365/1923915
