This series analyzes OpenStack's high availability (HA) concepts and solutions:
(1) Overview of OpenStack high-availability scenarios
(2) Neutron L3 Agent HA - VRRP (Virtual Router Redundancy Protocol)
(3) Neutron L3 Agent HA - DVR (Distributed Virtual Router)
(4) RabbitMQ HA
(5) MySQL HA
1. Overview of MySQL HA Solutions
There are many kinds of MySQL HA solutions, including:
- Heartbeat + DRBD: http://lin128.blog.51cto.com/407924/279411, http://www.centos.bz/2012/03/achieve-drbd-high-availability-with-heartbeat/
- MySQL Cluster (NDB engine): http://database.51cto.com/art/201008/218326.htm
- Dual master + Keepalived: http://database.51cto.com/art/201012/237204.htm, http://kb.cnblogs.com/page/83944/
- Dual master: http://yunnick.iteye.com/blog/1845301
- Oracle MySQL Fabric: http://www.csdn.net/article/2014-08-20/2821300
Most of these HA solutions are built on one of the following foundations:
- Based on master-slave replication;
- Based on the Galera protocol;
- Based on the NDB engine;
- Based on middleware/proxy;
- Based on shared storage;
- Host-based high availability;
Here's a comparison of a variety of scenarios:
Among these options, the most common approach is master-slave replication, followed by Galera-based solutions. This article walks through the various MySQL high-availability technologies to analyze the different MySQL disaster-recovery solutions comprehensively and concretely. It also shows that MySQL disaster recovery is a deeper topic than it may first appear.
2. Active/Passive (A/P) Solution Using Pacemaker + DRBD + Corosync
Similar to the RabbitMQ HA scenario, OpenStack's officially recommended MySQL active/passive HA solution is also Pacemaker + DRBD + Corosync. The specific steps are:
- Configure DRBD for MySQL
- Place MySQL's /var/lib/mysql directory on the DRBD device
- Select and configure a VIP, and configure MySQL to listen on that IP
- Use Pacemaker to manage all of MySQL's resources, including its daemon
- Configure the OpenStack services to connect to MySQL via the VIP
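The steps above can be sketched as a Pacemaker resource configuration in crm shell syntax. This is only an illustration of the pattern; the resource names, DRBD device, VIP address, and netmask below are assumptions, not values from the official guide:

```
# Sketch: DRBD master/slave resource, filesystem, VIP, and MySQL daemon,
# grouped so they fail over together to whichever node holds the DRBD master.
primitive p_drbd_mysql ocf:linbit:drbd \
    params drbd_resource="mysql" \
    op monitor interval="15s"
ms ms_drbd_mysql p_drbd_mysql \
    meta master-max="1" master-node-max="1" clone-max="2" notify="true"
primitive p_fs_mysql ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/var/lib/mysql" fstype="ext4"
primitive p_vip_mysql ocf:heartbeat:IPaddr2 \
    params ip="192.168.42.101" cidr_netmask="24"
primitive p_mysql ocf:heartbeat:mysql \
    params datadir="/var/lib/mysql"
group g_mysql p_fs_mysql p_vip_mysql p_mysql
colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start
```

The colocation and order constraints are what make this A/P: the filesystem, VIP, and daemon only run on the node where DRBD has been promoted to master.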
The resulting architecture of OpenStack's officially recommended MySQL A/P HA configuration:
The OpenStack HA guide describes the configuration steps in detail. The problems with this scheme are that DRBD is prone to split-brain, and that only one of the two MySQL nodes can serve requests at a time, which wastes resources.
3. Active/Active Multi-master Solutions
3.1 Three-node Solution Using the Galera Protocol
OpenStack officially recommends using Galera for three-node HA. In this mode, Galera provides synchronous replication between multiple MySQL nodes, allowing several MySQL nodes to serve requests at the same time; load-balancing software such as HAProxy is typically used to provide a single VIP to the applications. The official documentation is here.
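As an illustration of the HAProxy front end mentioned above, a minimal listener for a three-node Galera cluster might look like the following. The addresses, server names, and check user are placeholders, and sending writes to a single node at a time (via `backup`) is one common deployment choice, not the only one:

```
# Illustrative HAProxy listener for a three-node Galera cluster.
# Clients connect to the VIP; only galera1 receives traffic while healthy,
# which avoids write conflicts between nodes.
listen galera_cluster
    bind 192.168.42.100:3306
    balance source
    option tcpka
    option mysql-check user haproxy
    server galera1 192.168.42.1:3306 check
    server galera2 192.168.42.2:3306 check backup
    server galera3 192.168.42.3:3306 check backup
```

The `mysql-check` option requires a corresponding check user to exist in the database.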
Galera's main features:
- Synchronous replication
- True multi-master: all nodes can read and write the database at the same time
- Automatic membership control: failed nodes are automatically removed from the cluster
- Automatic state transfer when a new node joins
- True row-level parallel replication
- Clients connect directly to the cluster and use it exactly as they would a single MySQL instance
- No replication lag, because it is multi-master
- No lost transactions
- Scales both reads and writes
- Lower client latency
- Data between nodes is synchronous, whereas master/slave replication is asynchronous and the binlogs on different slaves may differ
- The replication capability of a Galera cluster is based on the Galera library; the wsrep API was developed specifically to let MySQL communicate with it
For the detailed configuration process, refer to the OpenStack HA guide and this article.
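As a rough sketch, the wsrep-related settings in one node's my.cnf look like this. The cluster name, node names, addresses, and library path are placeholders; consult the guides above for a complete configuration:

```
# Illustrative Galera settings for one node of a three-node cluster.
[mysqld]
binlog_format=ROW                      # Galera requires row-based replication
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2             # interleaved auto-increment locking
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="openstack_db"
wsrep_cluster_address="gcomm://node1,node2,node3"
wsrep_node_name="node1"
wsrep_node_address="192.168.42.1"
wsrep_sst_method=rsync                 # state snapshot transfer for joining nodes
```

The `wsrep_cluster_address` list is how each node discovers the rest of the cluster; new or recovering nodes receive a full state snapshot via the configured SST method.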
3.2 Two-node + Galera Arbitrator A/A HA Solution
The A/A solution in 3.1 requires three nodes, so its cost is higher. This solution achieves A/A with only two data nodes. For details, refer to this article.
The scheme uses Galera Arbitrator (garbd) as the third node; it is actually a daemon. It serves two purposes:
- With an even number of data nodes, it acts as the odd vote to prevent split-brain.
- It can request a consistent snapshot of the cluster state for backup purposes.
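garbd can be started roughly as follows; the group name and node addresses are placeholders and must match the data nodes' wsrep settings:

```
# Illustrative garbd invocation: joins the cluster as a voting member
# but stores no data and serves no queries.
garbd --address "gcomm://node1:4567,node2:4567" \
      --group "openstack_db" \
      --daemon
```

Because garbd holds no data, it can run on very modest hardware, which is what makes the two-node variant cheaper than a full three-node cluster.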
Of course, MariaDB Galera Cluster is not suitable for every replication use case; you have to decide based on your own needs. For example:
- If data consistency matters most and you have frequent writes and updates but the total write volume is not very large, MariaDB Galera Cluster is a good fit. However, the write throughput of the whole cluster is limited by the weakest node: if one node becomes slow, the entire cluster slows down. For stable, high performance, all nodes should use identical hardware, and a minimum of three cluster nodes is recommended.
- If your workload is mostly queries and read/write splitting is easy to achieve, classic replication is a better fit: it is easy to use, a single master guarantees data consistency, and multiple slaves can share the read load. As long as data consistency and uniqueness can be guaranteed, replication suits you better. After all, a MariaDB Galera cluster follows the "weakest link" principle: when the write volume is very large, synchronization speed is determined by the node with the lowest I/O in the cluster, so overall write speed will be much slower than with replication.
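For contrast with Galera's synchronous model, classic asynchronous master-slave replication is configured on the slave roughly like this. The host, user, password, and binlog coordinates are placeholders:

```sql
-- Illustrative slave setup for classic MySQL asynchronous replication.
-- The master must have binary logging enabled and a replication user created.
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
```

Because the slave applies the master's binlog after the fact, reads from slaves may lag behind the master, which is exactly the trade-off against Galera described above.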