Analysis of MySQL High Availability


For most applications, MySQL is the most critical data storage center, so we have to face the problem of how to make MySQL provide an HA service, and, when the master goes down, how to ensure that as little data as possible is lost. Here I will talk about the MySQL proxy and toolset work we have done during this period, as well as the MySQL HA solution we will use in our project at this stage and in the future.

(Header image from comprendrechoisir.com)

 

Replication

To ensure that MySQL data is not lost, replication is a good solution, and MySQL provides a powerful replication mechanism. We just need to know that, for performance reasons, replication is asynchronous by default: the master returns as soon as a write completes, without waiting for the data to be synchronized to the slave. If the master goes down at that moment, we may still lose data.

To solve this problem, we can use semi-synchronous replication. The principle of semi-synchronous replication is very simple: when the master completes a transaction, it does not return until at least one slave that supports semi-sync confirms that it has received the event and written it to its relay log. In this way, even if the master goes down, at least one slave has the complete data.
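As a concrete illustration, turning on semi-synchronous replication is mostly a matter of loading the plugin and setting two variables on the master (plus the matching slave plugin on each slave). Below is a minimal, hedged sketch in Go that drives those statements through database/sql; the DSN and the 10-second timeout are placeholder values, not recommendations:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/go-sql-driver/mysql" // MySQL driver
    )

    func main() {
        // Placeholder DSN; point it at the master.
        db, err := sql.Open("mysql", "root:password@tcp(127.0.0.1:3306)/")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Load and enable the master-side semi-sync plugin. On each slave,
        // load rpl_semi_sync_slave and enable rpl_semi_sync_slave_enabled instead.
        stmts := []string{
            "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'",
            "SET GLOBAL rpl_semi_sync_master_enabled = 1",
            // How long the master waits for a slave ACK before falling back to async.
            "SET GLOBAL rpl_semi_sync_master_timeout = 10000",
        }
        for _, s := range stmts {
            if _, err := db.Exec(s); err != nil {
                log.Printf("%s: %v", s, err)
            }
        }
    }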

However, semi-synchronous replication does not guarantee that data will never be lost: if the master crashes after completing a transaction but before sending it to a slave, data loss is still possible. Even so, compared with traditional asynchronous replication, semi-synchronous replication greatly improves data safety. More importantly, it is not slow. The author of MHA has said that Facebook uses semi-synchronous replication in its production environment (here), so we really do not need to worry about its performance unless our business volume has completely exceeded that of Facebook or Google. Moreover, since MySQL 5.7 there is loss-less semi-synchronous replication, so the probability of data loss is already very small.

If you really want to ensure that data is never lost, a better option at this stage is Galera, a MySQL cluster solution that avoids data loss by writing three copies at the same time. I have no experience with Galera, but I know that some companies in the industry have used it in production, and performance should not be a problem. However, Galera is highly invasive to the MySQL code, which may not suit those of you who like to keep the codebase clean :-)

We can also use DRBD to replicate MySQL data; the official MySQL documentation describes this in detail, but I have not used this solution. The author of MHA wrote about some problems with using DRBD (here); it is for reference only.

In subsequent projects, I will give priority to the semi-synchronous replication solution. If the data is really important, we will consider using Galera.

 

Monitor

We mentioned above that replication is used to ensure that the master's data is lost as little as possible, but we cannot wait several minutes after the master goes down before we learn about the problem. Therefore, a good set of monitoring tools is essential.

When the master node is down, the monitor can quickly detect and perform subsequent processing, such as notifying the Administrator by email or notifying the daemon to quickly perform failover.

Generally, we use keepalived or heartbeat to monitor a service, so that when the master goes down we can conveniently switch over to the standby machine. However, they still cannot detect service unavailability immediately. My company currently uses keepalived, but going forward I prefer to use ZooKeeper to handle monitoring and failover for the whole MySQL cluster.

For each MySQL instance, we run a corresponding agent program. The agent sits on the same machine as the MySQL instance and periodically pings it to check its availability; at the same time, the agent registers itself in ZooKeeper as an ephemeral node. With this setup, we can tell whether MySQL is down, mainly in the following situations:

  1. The whole machine goes down: MySQL and the agent die with it, and the connection between the agent and ZooKeeper is naturally broken.
  2. MySQL dies: the agent notices that the ping no longer succeeds and actively disconnects from ZooKeeper.
  3. The agent dies, but MySQL does not.

In all three cases, we can consider the MySQL machine to have a problem, and ZooKeeper perceives it immediately: once the agent's connection drops, ZooKeeper fires the corresponding children-changed event, and the control service watching that event can react accordingly. For example, in the first two cases the control service can automatically perform a failover; in the third case it may do nothing and simply wait for something like crontab or supervisord on that machine to restart the agent.
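To make the agent idea concrete, here is a rough sketch of what such an agent could look like. It assumes the github.com/samuel/go-zookeeper/zk client, an existing /mysql/nodes path, and made-up node names, DSN, and intervals; none of these details are from the original article.

    package main

    import (
        "database/sql"
        "log"
        "time"

        _ "github.com/go-sql-driver/mysql"
        "github.com/samuel/go-zookeeper/zk"
    )

    func main() {
        conn, _, err := zk.Connect([]string{"zk1:2181", "zk2:2181", "zk3:2181"}, 10*time.Second)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Register this MySQL instance as an ephemeral node; it disappears
        // automatically if the agent (or the whole machine) dies.
        node := "/mysql/nodes/db-1" // assumes /mysql/nodes already exists
        if _, err := conn.Create(node, []byte("127.0.0.1:3306"), zk.FlagEphemeral, zk.WorldACL(zk.PermAll)); err != nil {
            log.Fatal(err)
        }

        db, err := sql.Open("mysql", "monitor:password@tcp(127.0.0.1:3306)/")
        if err != nil {
            log.Fatal(err)
        }

        // Ping MySQL periodically; if it stops answering, drop the ZooKeeper
        // session so the control service sees a children-changed event.
        for range time.Tick(3 * time.Second) {
            if err := db.Ping(); err != nil {
                log.Printf("mysql unreachable: %v, leaving zookeeper", err)
                conn.Close()
                return
            }
        }
    }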

The advantage of ZooKeeper is that it can conveniently monitor the whole cluster, instantly pick up cluster changes, fire the corresponding events to notify interested services, and coordinate multiple services for related processing at the same time. These are things that keepalived or heartbeat either cannot do, or can only do with a lot of trouble.

The problem with ZooKeeper is that deployment is complicated, and after a failover it is also troublesome for applications to obtain the latest database address.

For the deployment problem, we need to ensure that each MySQL instance is paired with an agent; fortunately, with Docker these days, this is really simple. As for the second problem, the changed database address, we can use ZooKeeper to notify applications to dynamically update their configuration, use a VIP, or use a proxy to solve it.

Although ZooKeeper has many advantages, it may not be the best choice if your setup is not complex, for example just one master and one slave; keepalived may be enough.

 

Failover

With the monitor in place, we can easily keep an eye on MySQL and notify the corresponding service to perform failover once the master goes down. Suppose we have a MySQL cluster where A is the master and B and C are its slaves. When A crashes, we need to perform a failover. So which of B and C should be chosen as the new master?

The principle is very simple: whichever slave has the most recent data from the original master is chosen as the new master. We can use the SHOW SLAVE STATUS command to find out which slave has the latest data. We only need to compare two key fields, Master_Log_File and Read_Master_Log_Pos; these two values indicate how far into the master's binlog the slave has read. The larger the binlog file index, and the larger the position, the more suitable that slave is to be promoted to master. We will not discuss here the case where multiple slaves could be promoted.
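As a hedged sketch (not the author's toolset), reading those two fields and comparing two candidates looks roughly like this; SHOW SLAVE STATUS returns dozens of columns, so the row is scanned generically and only the two interesting fields are kept. Host names and credentials are placeholders.

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "strconv"

        _ "github.com/go-sql-driver/mysql"
    )

    // slaveStatus returns (Master_Log_File, Read_Master_Log_Pos) for one slave.
    func slaveStatus(dsn string) (string, uint64, error) {
        db, err := sql.Open("mysql", dsn)
        if err != nil {
            return "", 0, err
        }
        defer db.Close()

        rows, err := db.Query("SHOW SLAVE STATUS")
        if err != nil {
            return "", 0, err
        }
        defer rows.Close()

        cols, _ := rows.Columns()
        vals := make([]sql.RawBytes, len(cols))
        ptrs := make([]interface{}, len(cols))
        for i := range vals {
            ptrs[i] = &vals[i]
        }

        var file string
        var pos uint64
        if rows.Next() {
            if err := rows.Scan(ptrs...); err != nil {
                return "", 0, err
            }
            for i, c := range cols {
                switch c {
                case "Master_Log_File":
                    file = string(vals[i])
                case "Read_Master_Log_Pos":
                    pos, _ = strconv.ParseUint(string(vals[i]), 10, 64)
                }
            }
        }
        return file, pos, nil
    }

    func main() {
        bFile, bPos, err := slaveStatus("repl:password@tcp(b:3306)/")
        if err != nil {
            log.Fatal(err)
        }
        cFile, cPos, err := slaveStatus("repl:password@tcp(c:3306)/")
        if err != nil {
            log.Fatal(err)
        }
        // Filenames like mysql-bin.000123 compare correctly as strings.
        if bFile > cFile || (bFile == cFile && bPos >= cPos) {
            fmt.Printf("promote B (%s:%d)\n", bFile, bPos)
        } else {
            fmt.Printf("promote C (%s:%d)\n", cFile, cPos)
        }
    }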

In the previous example, if B is promoted to master, we need to re-point C at the new master B and start replicating from it. We use CHANGE MASTER TO to re-point C, but how do we know from which file and which position in B's binlog to start copying?

GTID

To solve this problem, MySQL 5.6 introduced the concept of GTID, written as uuid:gid: uuid is the MySQL server's UUID, which is globally unique, and gid is an incrementing transaction ID. Together, these two values uniquely identify a transaction recorded in the binlog, and with GTID, failover becomes much easier to handle.

In the preceding example, assume that the last GTID B read from A is 3E11FA47-71CA-11E1-9E33-C80AA9429562:23 and C's is 3E11FA47-71CA-11E1-9E33-C80AA9429562:15. When C is re-pointed at the new master B, we know from the GTID that we only need to find the event with GTID 3E11FA47-71CA-11E1-9E33-C80AA9429562:15 in B's binlog; C can then start copying from the position of the next event. Although this binlog search is still sequential, a little inefficient and brute-force, it is far better than blindly guessing a filename and position.
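In practice, when GTID mode is enabled, MySQL can do this search for us: with MASTER_AUTO_POSITION, the slave reports which GTID sets it already has and the new master sends everything after that. A minimal sketch of re-pointing C at B this way, driven from Go here; host names and credentials are placeholders.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/go-sql-driver/mysql"
    )

    func main() {
        // Connect to C, the slave we want to re-point at the new master B.
        db, err := sql.Open("mysql", "root:password@tcp(c:3306)/")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // With MASTER_AUTO_POSITION = 1, C tells B which GTID sets it already
        // has, and B sends everything after that; no filename/position guessing.
        stmts := []string{
            "STOP SLAVE",
            `CHANGE MASTER TO
                MASTER_HOST = 'b',
                MASTER_PORT = 3306,
                MASTER_USER = 'repl',
                MASTER_PASSWORD = 'password',
                MASTER_AUTO_POSITION = 1`,
            "START SLAVE",
        }
        for _, s := range stmts {
            if _, err := db.Exec(s); err != nil {
                log.Fatal(err)
            }
        }
    }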

Google also implemented a Global Transaction ID patch long ago, but it only uses an incrementing integer. LedisDB borrows its idea to implement failover; however, Google now seems to be gradually migrating to MariaDB.

MariaDB's GTID implementation is different from MySQL 5.6's, which is actually rather troublesome: for my MySQL toolset go-mysql, it means writing two different sets of code to handle GTID. Whether MariaDB will be supported later remains to be seen.

Pseudo GTID

Although GTID is a good thing, it is limited to MySQL 5.6+. Currently most businesses still run versions earlier than 5.6 (my company is on 5.5), and these databases will not be upgraded to 5.6 for a long time. Therefore, we still need a good mechanism to select the filename and position in the new master's binlog.

At first, I intended to study MHA's implementation. It first copies relay logs to fill in the missing events, but I do not quite trust relay logs; in addition, MHA is written in Perl, a language I cannot read gracefully, so I gave up on that research.

Fortunately, I came across the orchestrator project, which is really a marvelous project. It adopts a Pseudo GTID approach, and the core code is as follows.

    create database if not exists meta;
    drop event if exists meta.create_pseudo_gtid_view_event;
    delimiter ;;
    create event if not exists
      meta.create_pseudo_gtid_view_event
      on schedule every 10 second starts current_timestamp
      on completion preserve
      enable
      do
        begin
          set @pseudo_gtid := uuid();
          set @_create_statement := concat('create or replace view meta.pseudo_gtid_view as select \'', @pseudo_gtid, '\' as pseudo_gtid_unique_val from dual');
          PREPARE st FROM @_create_statement;
          EXECUTE st;
          DEALLOCATE PREPARE st;
        end
    ;;
    delimiter ;
    set global event_scheduler := 1;

It creates an event on MySQL that writes a UUID into a view every 10 seconds, and this statement is recorded in the binlog. Although we still cannot locate a single event directly the way GTID does, we can narrow things down to a 10-second range, so we only need to compare the binlogs of two MySQL instances within a very small window.

In the example above, suppose the last occurrence of the UUID in C is at position s1. In B we find the same UUID at position s2, then compare the subsequent events one by one; if any are inconsistent, something may have gone wrong and we stop. Once we have walked past C's last binlog event, we know the filename and position of B's next event, and we can point C at that position to start replicating.
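For illustration only (this is not orchestrator's actual code), the search for the marker on B can be expressed with SHOW BINLOG EVENTS. This sketch only finds the marker event and skips the subsequent event-by-event comparison described above; the binlog name, UUID value, and DSN are placeholders, and a real tool would page through the binlog with LIMIT rather than read it in one go.

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "strings"

        _ "github.com/go-sql-driver/mysql"
    )

    // findPseudoGTID returns the end position of the event containing the
    // marker, i.e. where the next event starts.
    func findPseudoGTID(db *sql.DB, binlog, marker string) (uint64, error) {
        rows, err := db.Query(fmt.Sprintf("SHOW BINLOG EVENTS IN '%s'", binlog))
        if err != nil {
            return 0, err
        }
        defer rows.Close()

        var logName, eventType, info string
        var pos, serverID, endPos uint64
        for rows.Next() {
            // Columns: Log_name, Pos, Event_type, Server_id, End_log_pos, Info
            if err := rows.Scan(&logName, &pos, &eventType, &serverID, &endPos, &info); err != nil {
                return 0, err
            }
            if strings.Contains(info, marker) {
                return endPos, nil
            }
        }
        return 0, fmt.Errorf("pseudo-GTID %s not found in %s", marker, binlog)
    }

    func main() {
        db, err := sql.Open("mysql", "repl:password@tcp(b:3306)/") // B, the new master
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // The marker would be the last pseudo-GTID value seen in C's binlog.
        pos, err := findPseudoGTID(db, "mysql-bin.000003", "8c4b2e1a-0000-0000-0000-placeholder")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("start replication from position", pos)
    }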

Using Pseudo GTID requires the slave to enable the log-slave-updates option. Since GTID also requires this option, I personally find it completely acceptable.

In the future, the failover tool I implement will adopt this Pseudo GTID approach.

In the book MySQL High Availability, the author uses another GTID-like approach: on every commit, a GTID is recorded in a table, and that GTID is then used to find the corresponding position. However, this approach requires support from the business's MySQL client, which I do not like very much, so I will not adopt it.

 

Postscript

MySQL HA has always been a fairly deep field. I have only listed some of the things I have been researching recently, and I will try to implement the related tools in go-mysql as much as possible.

 

Update

After a period of thinking and research, I have gained a lot of new experience and insight into MySQL HA, and my design now differs from what I described earlier. I later found that the HA solution I designed is almost the same as the one in a Facebook article, and when I recently chatted with people from Facebook, I heard that they are also vigorously implementing it, so I feel I am heading in the right direction.

In the new HA design I will fully embrace GTID, which appeared precisely to solve a pile of problems in the original replication, so I will not consider older, non-GTID versions of MySQL. Fortunately, our project has upgraded all MySQL instances to 5.6, which fully supports GTID.

Unlike the Facebook article, which modified mysqlbinlog to support the semi-sync replication protocol, I use go-mysql's replication library to support the semi-sync replication protocol, so that MySQL's binlog can be synchronized to another machine in real time. This may be the only difference between the Facebook solution and mine.
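Roughly, that looks like the following sketch using go-mysql's replication package (the exact config fields have changed over the project's life, and the host, credentials, and start position here are placeholders). With SemiSyncEnabled set, the syncer registers and ACKs like a semi-sync slave while simply streaming the raw binlog, which a real tool would persist locally.

    package main

    import (
        "context"
        "log"
        "os"

        "github.com/siddontang/go-mysql/mysql"
        "github.com/siddontang/go-mysql/replication"
    )

    func main() {
        cfg := replication.BinlogSyncerConfig{
            ServerID:        100, // must be unique in the replication topology
            Flavor:          "mysql",
            Host:            "master.example.com",
            Port:            3306,
            User:            "repl",
            Password:        "password",
            SemiSyncEnabled: true, // ACK events like a semi-sync slave
        }
        syncer := replication.NewBinlogSyncer(cfg)

        // Start streaming from a known position (placeholder values).
        streamer, err := syncer.StartSync(mysql.Position{Name: "mysql-bin.000001", Pos: 4})
        if err != nil {
            log.Fatal(err)
        }

        for {
            ev, err := streamer.GetEvent(context.Background())
            if err != nil {
                log.Fatal(err)
            }
            // A real tool would append the raw event to a local binlog copy;
            // here we just dump it for illustration.
            ev.Dump(os.Stdout)
        }
    }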

Synchronizing only the binlog is definitely faster than a native slave; after all, it skips the step of executing the events in the binlog. For the real slaves, we still use the original asynchronous replication rather than semi-sync replication. We then use MHA to monitor the whole cluster and handle failover.

I used to think MHA was hard to understand, but it is actually a very powerful tool, and when I actually read the Perl, I found I could still follow it. MHA has been used by many companies in production and has stood the test of time, so using it directly is definitely more cost-effective than writing something myself. So going forward I will no longer consider ZooKeeper, nor writing an agent myself.
