MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by Youshimaton (now at Facebook) of the Japanese company DeNA, and is an excellent piece of high-availability software for failover and master promotion in a MySQL environment.
During a MySQL failover, MHA can complete the failover of the database automatically within 0-30 seconds, and during the failover it ensures data consistency to the greatest extent possible, achieving high availability in the true sense. There are two roles in MHA: MHA Node (data node) and MHA Manager (management node). MHA Manager can be deployed on a separate machine to manage multiple master-slave clusters, or it can be deployed on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave holding the most recent data to be the new master, then repoints all the other slaves to the new master. The entire failover process is completely transparent to the application.
During automatic failover, MHA tries to save the binary logs from the crashed master, ensuring as far as possible that no data is lost, but this is not always feasible.
For example, if the primary server's hardware has failed or it cannot be reached over SSH, MHA cannot save the binary logs, and failing over loses the most recent data. Using MySQL 5.5's semi-synchronous replication can greatly reduce this risk, and MHA can be combined with it: if even one slave has received the latest binary log, MHA can apply it to all the other slaves, thereby guaranteeing data consistency across all nodes. Note: starting with MySQL 5.5, MySQL supports semi-synchronous replication in the form of plugins.
How should semi-synchronous replication be understood? First, look at the concepts of asynchronous and fully synchronous replication.

Asynchronous replication: MySQL's default replication mode is asynchronous. The master returns the result to the client immediately after executing a transaction the client submitted, without caring whether any slave has received and processed it. This creates a problem: if the master crashes, transactions committed on the master may not yet have reached any slave, and if a slave is then forcibly promoted to master, the new master's data may be incomplete.

Fully synchronous replication: the master returns to the client only after all slaves have executed the transaction. Because it must wait for every slave, the performance of fully synchronous replication is inevitably severely affected.

Semi-synchronous replication sits between asynchronous and fully synchronous replication: the master does not return to the client immediately after executing a transaction; instead it waits until at least one slave has received the transaction and written it to its relay log. Compared with asynchronous replication, semi-synchronous replication improves data safety, but it also introduces some latency, at minimum one TCP/IP round trip, so semi-synchronous replication is best used in low-latency networks. Here is a schematic diagram of semi-synchronous replication:
Summary: similarities and differences between asynchronous and semi-synchronous replication
By default, MySQL replication is asynchronous: after update operations on the master are written to the binlog, there is no guarantee that they have all been copied to a slave. Asynchronous operation is highly efficient, but if the master or a slave fails there is a high risk that the data is out of sync and may be lost. MySQL 5.5 introduced semi-synchronous replication to ensure that at least one slave has complete data when the master runs into trouble. On timeout it can also fall back temporarily to asynchronous replication, so the business keeps running normally, and it switches back to semi-synchronous mode once a slave has caught up.
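The timeout-based fallback just described is controlled by a variable on the master; a quick look, using the standard variable name from the MySQL 5.5 semi-sync plugin (the 1000 ms value is only an example):

mysql> show variables like 'rpl_semi_sync_master_timeout';
-- milliseconds to wait for a slave ACK before falling back to async (default 10000)
mysql> set global rpl_semi_sync_master_timeout = 1000;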
Working principle
Compared with other HA software, MHA's purpose is to maintain the high availability of the master in MySQL replication. Its biggest feature is the ability to repair the log differences between multiple slaves, eventually making all slaves consistent, then choosing a new master and pointing the other slaves at it. The failover steps are:
- Save the binary log events (binlog events) from the crashed master.
- Identify the slave that contains the latest updates.
- Apply the differential relay log to the other slaves.
- Apply the binary log events saved from the master.
- Promote one slave to be the new master.
- Point the other slaves at the new master and resume replication.
At present, MHA mainly supports a one-master, multi-slave architecture. To build MHA, a replication cluster must contain at least three database servers, one master and two slaves: one server acts as the master, one as a standby (candidate) master, and one as a slave. So at least three servers are required.
Next we deploy MHA. The environment is as follows:
The master provides write services; the candidate master (actually a slave, hostname CENTOS3) provides read services; the slave also provides read services. Once the master goes down, the candidate master is promoted to the new master, the slave is repointed at the new master, and the manager serves as the management server.
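For reference, the topology can be summarized as below. The master and candidate-master addresses appear later in this article; the slave and manager addresses marked * are placeholders for illustration:

Role              Hostname  IP               Service
master            CENTOS2   192.168.1.102    write
candidate master  CENTOS3   192.168.1.103    read
slave             CENTOS4   192.168.1.104*   read
manager           manager   192.168.1.101*   monitoring / failover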
First, the basic environment preparation
1. After configuring the IP addresses, check the SELinux and iptables settings; disable SELinux and the iptables service so that master-slave synchronization does not run into errors later.
Note: synchronize the time on all nodes:
# vim /etc/chrony.conf
Configure the hosts file:
# vim /etc/hosts
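A hosts file consistent with the hostnames used in this article might look like the following sketch (the slave and manager addresses are illustrative; substitute your own):

192.168.1.102  master1  CENTOS2
192.168.1.103  master2  CENTOS3
192.168.1.104  slave    CENTOS4
192.168.1.101  manager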
2. Configure the EPEL source on all four machines:
# yum install epel-release
Install the MHA dependency packages on all hosts (requires the system's own yum source and network access):
# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
3. Set up a passwordless (non-interactive) SSH login environment
Manager Host:
Or: # for i in master1 master2 slave manager; do ssh-copy-id -i /root/.ssh/id_rsa.pub $i; done
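The per-host commands are elided above; a minimal sketch of what each host runs (generate a key pair, then push the public key to every node, itself included):

# ssh-keygen -t rsa
# for i in master1 master2 slave manager; do ssh-copy-id -i /root/.ssh/id_rsa.pub $i; done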
Master Host:
Or: # for i in master1 master2 slave manager; do ssh-copy-id -i /root/.ssh/id_rsa.pub $i; done
Master2 Host:
Or: # for i in master1 master2 slave manager; do ssh-copy-id -i /root/.ssh/id_rsa.pub $i; done
Slave Host:
Or: # for i in master1 master2 slave manager; do ssh-copy-id -i /root/.ssh/id_rsa.pub $i; done
Test SSH non-interactive login (test on every host):
Example: ssh master2, ssh slave, ssh manager
Second, configure MySQL semi-synchronous replication
To minimize data loss caused by failure of the master's hardware, it is recommended to configure MHA together with MySQL semi-synchronous replication.
Note: the MySQL semi-synchronous plugin is provided by Google. It lives in /usr/local/mysql/lib/plugin/; semisync_master.so is used by the master and semisync_slave.so by the slave. The detailed configuration follows. If you do not know the plugin directory, look it up as follows:
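For example:

mysql> show variables like '%plugin_dir%';

The value returned is the plugin directory, /usr/local/mysql/lib/plugin/ in this installation.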
1. Install the plugin on the master and slave nodes (master, candidate master, slave). Installing a plugin into MySQL requires the database to support dynamic loading. Check whether it is supported with the following query:
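A minimal check:

mysql> show variables like 'have_dynamic_loading';

The value must be YES; otherwise plugins cannot be loaded dynamically.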
Install the semi-synchronous plugins (semisync_master.so, semisync_slave.so) on all the MySQL database servers (all except the manager).
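The standard statements to load the plugins are shown below; run the master plugin on hosts that can act as a master and the slave plugin on hosts that can act as a slave (in this topology every database node may take either role after a failover, so both are commonly installed everywhere):

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';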
Install on the other MySQL hosts in the same way.
Check whether the plugin is installed correctly:
mysql> show plugins;
Or
mysql> select * from information_schema.plugins;
View semi-sync related information
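That is, on each node:

mysql> show variables like '%rpl_semi_sync%';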
You can see that the semi-synchronous plugin is installed but not yet enabled, which is why its status shows OFF.
2. Modify the my.cnf file and configure master-slave synchronization:
Note: if the master MySQL server already exists and only the slave MySQL servers are being built later, then before configuring data synchronization you should copy the master's existing databases to the slaves (for example, take a backup on the master and then restore it on each slave).
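The my.cnf entries themselves are not reproduced above; the following is a minimal sketch consistent with the binlog and relay-log names used later in this article. The server-id values are illustrative, and the rpl_semi_sync_* options require the plugins installed in step 1:

# master (CENTOS2) my.cnf additions
[mysqld]
server-id = 1
log-bin = mysql-bin
binlog_format = mixed
rpl_semi_sync_master_enabled = 1
# milliseconds to wait for a slave ACK before falling back to async
rpl_semi_sync_master_timeout = 1000

On the candidate master and the slave, use server-id = 2 and 3 respectively, add relay-log = relay-bin and rpl_semi_sync_slave_enabled = 1, keep log-bin on the candidate master so it can take over as master, and set relay_log_purge = 0 as MHA requires (relay logs are purged by cron instead; see the end of this article).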
Master2 Host:
View semi-sync related information
mysql> show variables like '%rpl_semi_sync%';
To view the semi-sync status:
mysql> show status like '%rpl_semi_sync%';
Master Host:
The first GRANT command creates the account used for master-slave replication; it needs to be created on both the master and master2 hosts.
The second GRANT command creates the MHA management account, which is required on all the MySQL servers. MHA needs to log in to the databases remotely as specified in its configuration file, so the necessary privileges must be granted.
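The two GRANT statements are elided above; a sketch consistent with the replication account that appears later in the failover log (mharep, password xxx), with an illustrative name and password for the management account:

mysql> grant replication slave on *.* to 'mharep'@'192.168.1.%' identified by 'xxx';
mysql> grant all privileges on *.* to 'manager'@'192.168.1.%' identified by 'manager_pwd';
mysql> flush privileges;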
Master2 Host:
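The statement that points master2 at the master is elided here; a minimal sketch using the master's address and the replication account from above (take the actual log file name and position from SHOW MASTER STATUS on the master; the values below are illustrative):

mysql> change master to master_host='192.168.1.102', master_port=3306, master_user='mharep', master_password='xxx', master_log_file='mysql-bin.000001', master_log_pos=107;
mysql> start slave;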
Check the slave status; the following two values must be Yes, indicating that the slave is replicating from the master normally:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Slave Host:
Check the slave status; the following two values must be Yes, indicating that the slave is replicating from the master normally:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
To view the semi-sync status of the master server:
mysql> show status like '%rpl_semi_sync%';
Third, configure MySQL MHA
MHA consists of a manager node and data nodes. The data nodes are the hosts in the existing MySQL replication topology; there must be at least three, i.e. one master and two slaves, so that when the master fails over a master-slave structure is still preserved. The data nodes only need the node package installed. The manager server runs the monitoring scripts and is responsible for monitoring and automatic failover; it needs both the node and the manager packages installed. (Since these were already installed above, this step is not strictly necessary.)
1. Install the MHA dependency packages on all hosts (requires the system's own yum source and network access):
# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles ncftp perl-Params-Validate perl-CPAN perl-Test-Mock-LWP.noarch perl-LWP-Authen-Negotiate.noarch perl-devel perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
2. On the management node both of the packages below must be installed; the three database nodes only need the MHA node package:
Install mha4mysql-node-0.56.tar.gz on all the database nodes (master1, master2, slave), as sketched below.
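mha4mysql-node is a standard Perl distribution, so on each database node the installation looks like this:

# tar -zxvf mha4mysql-node-0.56.tar.gz
# cd mha4mysql-node-0.56
# perl Makefile.PL
# make && make install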
The other two data nodes install mha4mysql-node-0.56.tar.gz in the same way (process omitted).
Two installations are required on the management node: mha4mysql-node-0.56.tar.gz and mha4mysql-manager-0.56.tar.gz. First install mha4mysql-node-0.56.tar.gz (manager).
Press Enter at the prompts.
3. Configure MHA
Like most Linux applications, correct use of MHA depends on a reasonable configuration file. MHA's configuration file is similar in style to MySQL's my.cnf, configured as param=value. The configuration file lives on the management node and usually includes each MySQL server's hostname, the MySQL username and password, the working directory, and so on. Edit /etc/masterha/app1.cnf; the contents are as follows:
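The file contents are not reproduced above; the following is a minimal sketch matching the working directory, log path, and accounts used in this article (the management user name, its password, and the slave's address are illustrative):

[server default]
manager_workdir = /masterha/app1
manager_log = /masterha/app1/manager.log
user = manager
password = manager_pwd
ssh_user = root
repl_user = mharep
repl_password = xxx
ping_interval = 1

[server1]
hostname = 192.168.1.102

[server2]
hostname = 192.168.1.103
candidate_master = 1

[server3]
hostname = 192.168.1.104

Here server1 is the master (CENTOS2), server2 is the candidate master (CENTOS3, hence candidate_master=1), and server3 is the slave (CENTOS4).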
Save and exit (this is equivalent to emptying the file).
SSH validation:
[root@manager ~]# masterha_check_ssh --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Validation of cluster replication (MySQL must be running on all nodes):
[root@manager ~]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.cnf --conf=/etc/masterha/app1.cnf
Note: if validation succeeds, all the servers and the master-slave relationships are recognized automatically.
If you encounter the error: Can't exec "mysqlbinlog" ..., the workaround is to execute the following on all servers:
# ln -s /usr/local/mysql/bin/* /usr/local/bin/
To start the manager:
Note: on Unix/Linux, when we want a program to keep running in the background we often append & to the command line. For example, to run MySQL in the background: /usr/local/mysql/bin/mysqld_safe --user=mysql &. But many programs do not daemonize themselves the way mysqld does, so we also need the nohup command.
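Putting that together, the manager is started with nohup (the same command appears again in step 4 below):

# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &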
Status check:
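The status is checked with the command shown in step 5 below:

# masterha_check_status --conf=/etc/masterha/app1.cnf

When the manager is running normally, this prints the application name with a PING_OK status and the current master's address.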
Failover verification (automatic failover):
After the master dies, MHA (which is running) automatically fails over to the candidate master. To verify: first stop the master (CENTOS2); since the earlier configuration file designates CENTOS3 as the candidate master, then check on the slave (CENTOS4) whether the master's IP has changed to CENTOS3's IP.
1) Stop the master: stop MySQL on the master (192.168.1.102).
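For example (the exact command depends on how MySQL was installed; both forms below are common):

# service mysqld stop
# /usr/local/mysql/bin/mysqladmin -uroot -p shutdown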
2) Check the MHA log. The log location specified in the configuration file above is /masterha/app1/manager.log:
[root@manager ~]# cat /masterha/app1/manager.log
View the slave status:
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.103
                  Master_User: mharep
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000009
          Read_Master_Log_Pos: 107
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 253
        Relay_Master_Log_File: mysql-bin.000009
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
1) Check whether the following files exist; if they do, delete them.
2) After a master-slave switch, the MHA manager service stops automatically and generates the file app1.failover.complete in the manager_workdir directory (/masterha/app1). To start MHA again, you must first make sure this file is absent. If you see the following prompt, delete the file:
/masterha/app1/app1.failover.complete
[error][/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm, ln298] Last failover was done at 2015/01/09 10:00:47. Current time is too early to do failover again. If you want to do failover, manually remove /masterha/app1/app1.failover.complete and run this script again.
1) # ll /masterha/app1/app1.failover.complete
2) # ll /masterha/app1/app1.failover.error
3) Run the MHA replication check again (the master needs to be set up as a slave of master2 first) on master1.
(This is equivalent to copying the CHANGE MASTER command from step 7 below.)
4) Start MHA:
# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /tmp/mha_manager.log &
When a slave node is down, MHA will not start by default. Adding --ignore_fail_on_start lets MHA start even when some node is down, as follows:
# nohup masterha_manager --conf=/etc/masterha/app1.cnf --ignore_fail_on_start &> /tmp/mha_manager.log &
5) Check the status:
# masterha_check_status --conf=/etc/masterha/app1.cnf
6) Check the log:
# tail -f /masterha/app1/manager.log
7) Follow-up work after the master-slave switch
Rebuilding:
Rebuilding means: your master has crashed, the switch to master2 has taken place, and master2 has become the new master. The rebuilding scheme is to restore the original master as a new slave of the new master after the switch, and then re-run the five steps above. When the original master's data files are intact, you can find the last executed CHANGE MASTER command as follows:
[root@manager ~]# grep "CHANGE MASTER TO MASTER" /masterha/app1/manager.log | tail -1
Wed Sep 22:36:41 - [info] All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.1.103', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000009', MASTER_LOG_POS=107, MASTER_USER='mharep', MASTER_PASSWORD='xxx';
Purge the relay logs periodically: when configuring master-slave replication for MHA, the parameter relay_log_purge=0 is set on the slaves, so each slave node must delete its relay logs periodically. It is recommended to stagger the deletion times across the slave nodes.
# crontab -e
0 5 * * * /usr/local/bin/purge_relay_logs --user=root --password=pwd123 --port=3306 --disable_relay_log_purge >> /var/log/purge_relay.log 2>&1