The purpose of this article is to introduce MySQL Cluster, an in-memory, real-time, scalable, and highly available version of MySQL. Before tackling the 200 million queries per second mentioned in the title, we will first review the background and architecture of MySQL Cluster, which will help you understand how that goal was reached.
Introduction to MySQL Cluster
MySQL Cluster is a scalable, real-time, in-memory, ACID-compliant transactional database that combines 99.999% availability with the low cost of open source. In its design, MySQL Cluster adopts a distributed, multi-master architecture to completely eliminate single points of failure. MySQL Cluster scales horizontally on commodity hardware, can serve read- and write-intensive workloads through automatic partitioning, and can be accessed through both SQL and NoSQL interfaces.
Originally designed as an embedded telecommunications database offering carrier-grade availability and real-time performance for in-network applications, MySQL Cluster has developed rapidly through many new feature sets that extend its use cases to web, mobile, and enterprise applications deployed on premises or in the cloud. These include large-scale OLTP with real-time analysis: e-commerce, inventory management, shopping carts, payment processing, order tracking, online games, financial trading and fraud detection, mobile payments and micropayments, session management and caching, data stream feeds, analytics and recommendations, content management and delivery, communications and presence services, subscription/user-profile management and entitlements, and more.
MySQL Cluster Architecture Overview
Behind every application-facing transaction, three node types cooperate to deliver the service. A simple example MySQL Cluster architecture might consist of twelve data nodes divided into six node groups.
Data nodes are the main nodes in a MySQL Cluster. They provide the following functions: in-memory and disk-based data storage and management; automatic and user-defined partitioning (sharding) of tables; synchronous replication of data between data nodes; transactions and data retrieval; automatic failover; and automatic resynchronization for self-healing after failures.
Tables are automatically partitioned across the data nodes, and every data node can accept write operations. This makes it easy to spread write-intensive workloads across multiple commodity nodes while keeping the distribution fully transparent to the application.
Data is stored and distributed in a shared-nothing architecture, that is, without a shared disk, and is replicated synchronously to at least one other replica. This ensures that if a single data node fails, at least one other data node holds the same information, so requests and transactions continue to be served without interruption. Any transaction that cannot complete because of the short failover window (sub-second) following a data node failure is rolled back and re-executed.
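The replica behavior described above can be sketched in a few lines. This is a hedged illustration only, with hypothetical names, not the actual NDB implementation: two data nodes in a node group hold the same fragment, so reads keep succeeding when one of them fails.

```python
# Illustrative sketch: a node group of two data nodes holding the same
# fragment. Names and structure are hypothetical, for explanation only.

class DataNode:
    def __init__(self, data: dict) -> None:
        self.data = dict(data)   # synchronous replica of the fragment
        self.alive = True

fragment = {"row1": "v1"}
node_group = [DataNode(fragment), DataNode(fragment)]

def read(group, key):
    """Serve the read from any surviving node in the group."""
    for node in group:
        if node.alive:
            return node.data[key]
    raise RuntimeError("entire node group down: fragment unavailable")

assert read(node_group, "row1") == "v1"
node_group[0].alive = False               # one data node fails
assert read(node_group, "row1") == "v1"   # its partner still serves the data
```

Only the loss of an entire node group would make the fragment unavailable, which is why replicas are always placed on different nodes.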
We can choose how data is stored: keep all of it in memory, or keep part of it only on disk (non-indexed data only). In-memory storage is a good fit for data that changes frequently (the active working set). Data stored in memory is regularly checkpointed to the local disk, coordinated across all data nodes, so that the MySQL Cluster can be fully restored after a whole-system failure such as a power outage. Disk-based storage can be used for data with less demanding performance requirements, where the dataset is often larger than the available memory. As in most other database servers, MySQL Cluster uses a page cache to keep frequently accessed disk-based data in each data node's memory, increasing its effective performance.
Application nodes provide connectivity from the application logic to the data nodes. Applications can access the database via SQL, specifically through one or more MySQL servers that provide a SQL interface to the data stored in the cluster. With a MySQL server we can use any of the standard MySQL connectors, which means a wealth of access technologies to choose from. In addition, a set of high-performance (C++-based) interfaces called the NDB API offers extra control, better real-time behavior, and greater throughput. The NDB API layer also lets additional NoSQL interfaces bypass the SQL layer and access the cluster directly, which both reduces latency and gives developers more flexibility. Existing interfaces include Java, JPA, Memcached, JavaScript with Node.js, and HTTP/REST (implemented as an Apache module). All application nodes can access data on any data node, so even a failure causes no loss of service, because each application can simply continue using the remaining healthy nodes.
Management nodes publish the cluster configuration to all nodes in the cluster and perform node management. The management nodes come into play when the cluster starts, when a node wants to join the cluster, and when the system is reconfigured. A management node can be stopped and restarted without affecting the ongoing operations on the data and application nodes. By default, the management node also provides the arbitration service: for example, when a network failure leads to a "split-brain" situation, or when the cluster begins to partition across the network.
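The arbitration service mentioned above can be sketched as a simple rule. This is a hedged simplification with invented names, not the real NDB arbitration protocol: a partition holding a clear majority of data nodes survives on its own, and on a tie exactly one half is allowed to continue, chosen by the arbitrator, so both halves can never keep running independently.

```python
# Illustrative arbitration sketch (hypothetical function, not the NDB code):
# decide which half of a split cluster may continue operating.

def surviving_partition(nodes_a: int, nodes_b: int, arbitrator_vote: str) -> str:
    """Return 'a' or 'b': the partition allowed to keep serving traffic."""
    if nodes_a > nodes_b:
        return "a"                 # clear majority wins outright
    if nodes_b > nodes_a:
        return "b"
    return arbitrator_vote         # tie: arbitrator casts the deciding vote

assert surviving_partition(3, 1, "b") == "a"   # majority side survives
assert surviving_partition(2, 2, "a") == "a"   # only one half survives a tie
```

Because exactly one partition is ever chosen, the "split-brain" case of two halves accepting conflicting writes is avoided.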
Achieving Scalability Through Transparent Partitioning
The rows of any given table are transparently split into multiple partitions/fragments. For each fragment, a single data node holds the data and handles all reads and writes directed to it. Each data node also has a partner, and together they form a node group; the partner node maintains a backup replica of the fragment while also having a primary fragment of its own. MySQL Cluster synchronizes data with a two-phase commit protocol, so that when a transaction commits, the changes are stored on both data nodes at the same time.
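The two-phase commit just described can be sketched as follows. This is a minimal illustration with hypothetical names, not the NDB protocol implementation: the coordinator first asks every replica to prepare (stage) the change, and only if all of them vote yes is the change committed everywhere; otherwise it is aborted everywhere.

```python
# Minimal two-phase-commit sketch (illustrative names, simplified logic).

class Replica:
    def __init__(self) -> None:
        self.data = {}
        self.pending = None

    def prepare(self, change: dict) -> bool:
        self.pending = change            # phase 1: stage the change
        return True                      # vote yes

    def commit(self) -> None:
        self.data.update(self.pending)   # phase 2: apply the staged change
        self.pending = None

    def abort(self) -> None:
        self.pending = None              # discard the staged change

def two_phase_commit(replicas, change) -> bool:
    if all(r.prepare(change) for r in replicas):   # phase 1: collect votes
        for r in replicas:                         # phase 2: commit on all
            r.commit()
        return True
    for r in replicas:                             # any "no" vote: abort all
        r.abort()
    return False

primary, secondary = Replica(), Replica()
assert two_phase_commit([primary, secondary], {"k": 1})
assert primary.data == secondary.data == {"k": 1}
```

The point of the two phases is that after a successful commit, both copies of the fragment already contain the change, which is what makes the sub-second failover described earlier safe.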
By default, the table's primary key is used as the partition key, and MySQL Cluster applies an MD5 hash to the partition key to select the fragment/partition in which a row is stored. If a transaction or query needs data from multiple data nodes, one of them acts as the transaction coordinator and delegates work to the other data nodes involved; the results are then merged and returned to the application. Note that transactions and queries can also span multiple partitions and tables, which is a significant advantage of MySQL Cluster over typical NoSQL stores that rely on sharding to distribute data.
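The hash-based fragment selection described above can be illustrated in a few lines. This is only a sketch: the fragment count and the way the digest is reduced to a fragment number are assumptions, and the real NDB mapping differs in detail, but the principle of "MD5-hash the partition key, then map the hash to a fragment" is the same.

```python
import hashlib

NUM_FRAGMENTS = 12  # hypothetical cluster: twelve fragments/data nodes

def fragment_for(partition_key: str, num_fragments: int = NUM_FRAGMENTS) -> int:
    """Hash the partition key and map the digest to a fragment number."""
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    # Reduce the first 8 bytes of the MD5 digest modulo the fragment count.
    # Illustrative mapping only; NDB's internal hashing is more involved.
    return int.from_bytes(digest[:8], "big") % num_fragments

# The same key always maps to the same fragment, so every node can locate
# a row without consulting a central directory:
assert fragment_for("user-42") == fragment_for("user-42")
assert 0 <= fragment_for("user-42") < NUM_FRAGMENTS
```

Because the mapping is deterministic, any application node can compute where a row lives and route the request straight to the right data node.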
To achieve optimal (linear) scaling, we need high-intensity queries/transactions to run on a single set of data nodes, since this greatly reduces the network latency caused by inter-node communication. To this end, we can make the application distribution-aware: concretely, the schema defined by the administrator can designate any of the primary-key columns as the partition key. For example, consider a table whose composite primary key consists of a user ID and a service name. By choosing the user ID as the partition key, all rows for a given user in that table are always kept in the same fragment. Even more powerfully, if we use the same user ID column in other tables and make it their partition key too, all of a given user's data across all tables is stored in the same fragment. In other words, queries/transactions targeting that user are handled on a single data node.
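The co-location effect of choosing the user ID as the partition key can be demonstrated with a small sketch. The hash function, fragment count, and row layout below are assumptions for illustration, not the actual NDB mapping: rows are keyed by (user ID, service), but only the user ID feeds the hash, so all of a user's rows land on one fragment.

```python
import hashlib

def fragment_for(partition_key: str, num_fragments: int = 6) -> int:
    # Illustrative MD5-based mapping; NDB's real hashing differs in detail.
    digest = hashlib.md5(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_fragments

# Composite primary key (user_id, service); partition key is user_id only.
rows = [("u1", "mail"), ("u1", "chat"), ("u1", "payments"), ("u2", "mail")]

u1_fragments = {fragment_for(user_id) for user_id, _svc in rows if user_id == "u1"}
assert len(u1_fragments) == 1   # all of u1's rows are co-located
```

Had the full composite key been hashed instead, ("u1", "mail") and ("u1", "chat") could land on different fragments, and a per-user transaction would have to coordinate across data nodes.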