Performance optimization is a perennial topic for enterprise applications, especially query optimization for relational databases. In core front-end business scenarios, KeyValue-style queries (hereinafter "KV queries"), such as SELECT id, name FROM users WHERE id=1002, are very common and account for a large share of total SQL traffic. For example, KV queries make up nearly 90% of the SQL traffic in one core Tmall business, nearly 80% in a product system, and about 50% in a trading order system. The pattern is equally common in other core businesses such as Cainiao.
SQL this simple is very hard to optimize further at the SQL level, so TDDL/DRDS and AliSQL propose a new solution for this scenario.
Before getting into the topic, a brief introduction to the TDDL/DRDS products. TDDL is a middleware product developed by Alibaba Group in 2008 to solve the single-machine bottleneck of the Taobao e-commerce database. Built on MySQL storage, with sharding (splitting databases and tables) as its core concept, it solves the problems of data storage and access capacity. The product has carried the database traffic of the core transaction links of past Tmall Double Eleven (Singles' Day) events and has gradually become the standard way to access relational databases within Alibaba Group.
In 2014, the TDDL team worked with the Alibaba Cloud RDS team to bring this product to the cloud under the name DRDS (Distributed Relational Database Service), focusing on the scalability problems of single-machine relational databases. The product currently has more than 1,000 enterprise users on the public cloud, and its private cloud deployments support the core businesses of many large enterprises and government departments. As business expands and industry technology advances, DRDS will gradually deliver more efficient and pragmatic distributed database features and solutions.
What is TDDL/DRDS's KV query optimization? It starts with the search for new optimization ideas on top of MySQL. In 2015, we noticed that community MySQL 5.6 supports the InnoDB memcached plugin. The plugin lets an application's KV queries reach the InnoDB engine's buffer pool directly via the Memcached protocol (the Memcached protocol and the MySQL SQL protocol access the same InnoDB data). KV queries thus bypass the parser, optimizer, and executor of the MySQL Server layer, which greatly reduces the MySQL CPU overhead of such queries, expands database capacity in extreme scenarios like Double Eleven, and effectively reduces database response time.
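To make the access path concrete, here is a minimal sketch (not TDDL/DRDS source code) of how a simple KV query maps onto the InnoDB memcached plugin. The plugin exposes InnoDB tables through "containers" configured in the innodb_memcache.containers table, after which a lookup is a plain memcached GET; the `@@<container>.` prefix selects the container inline. The container name `users_kv` is a hypothetical example.

```python
# Illustrative sketch: turning a KV-style SQL query into the equivalent
# memcached command understood by the InnoDB memcached plugin.

def sql_to_memcached_get(container: str, key: object) -> str:
    """Build the memcached command equivalent to
    SELECT <value columns> FROM <mapped table> WHERE <key column> = key.

    "@@<container>." selects the table mapping inline, as supported by
    the InnoDB memcached plugin.
    """
    return f"get @@{container}.{key}"

# Hypothetical container "users_kv" mapping the users table:
cmd = sql_to_memcached_get("users_kv", 1002)
print(cmd)  # get @@users_kv.1002
```

The point of the shortcut is visible here: the command never passes through the SQL parser or optimizer, so the server answers it straight from the InnoDB layer.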
The KV query capacity of the MySQL memcached plugin improves so dramatically because these queries completely bypass the SQL overhead of the MySQL Server layer, shortening the query path to a minimum. The same optimization idea applies equally well to TDDL/DRDS.
TDDL/DRDS is currently the standard for accessing relational databases within Alibaba Group. It shields applications from the technical details of horizontal sharding and primary/standby databases, but the convenient distributed SQL entry point it provides also has a cost: in TDDL/DRDS, every SQL statement, from entry to returned result, goes through SQL parsing, query optimization, distributed execution plan generation, distributed execution, connection handling, type handling, and so on, consuming a large amount of application-side CPU (in TDDL client mode). If a KV query could bypass this entire pipeline during execution and fetch MySQL data directly over the Memcached protocol, the whole path would be further streamlined, improving both application throughput and database query capacity.
Following this idea, TDDL/DRDS now provides a KV capability within Alibaba Group to deliver the best possible performance for such query scenarios.
Verifying the effect with load tests
To verify the actual effect of this optimization in a concrete business scenario, we worked with a core Tmall business team to switch traffic between SQL and KV during this year's Double Eleven full-link load test.
| KV scenario | TDDL-KV QPS | TDDL-SQL QPS | Improvement | Remarks |
| --- | --- | --- | --- | --- |
| PK query | 17,000 | 7,500 | PK throughput +124% | PK is an integer |
| UK query | 16,000 | 7,000 | UK throughput +131% | UK is a string |
| Secondary index query | 16,000 | 7,000 | Secondary index throughput +132% | each secondary index lookup returns ~2 rows on average |
During this load test, the application layer held the cluster steady at about 300,000 QPS via a switch. At time t1 we cut traffic from the KV protocol back to the SQL protocol; the application cluster's CPU began to climb at t1, rising quickly from 46% to 63%. Around t2 the business cut traffic back from SQL to KV, and the application's CPU began to fall. The whole process lasted about 5 minutes. Comparing the same QPS before and after the switch, KV saves about 17% CPU relative to SQL. For core applications with large node counts, the cost savings are significant.
In addition, TDDL/DRDS ran a purer KV benchmark. In a bare KV query scenario, with the CPU overhead of business logic excluded, the throughput improvement of KV over the SQL protocol is even more pronounced.
In terms of technical principles, TDDL/DRDS's KV query optimization relies on the MySQL InnoDB memcached plugin. Alibaba Group's AliSQL 5.6 currently supports this feature based on the open-source memcached plugin code.
In TDDL/DRDS, a KV query takes an interface fundamentally different from the SQL interface: the two use different ports to communicate with MySQL. TDDL therefore maintains two separate connection pools internally and handles two distinct query paths.
Dynamic distributed KV connection pool
To ensure SQL executes stably and reliably, TDDL/DRDS has accumulated a variety of mature safeguard mechanisms, including fail-fast, primary/standby switchover, standby database offloading, and dynamic connection pool management. These mechanisms play an irreplaceable role in TDDL/DRDS stability.
To make the KV optimization equally stable and reliable in Double Eleven core business scenarios, TDDL/DRDS introduces a distributed KV connection pool with the same dynamic management mechanism.
The core idea of the mechanism is that the KV connection pool manager periodically pulls the relevant configuration and checks it for changes. If changes are found, the manager automatically adjusts the KV connections in the pool accordingly, for example to complete a KV primary/standby switchover, offload a standby database, or replace a database machine's IP.
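The poll-and-reconcile loop described above can be sketched as follows. This is an illustration of the idea, not TDDL/DRDS source code; the shard names, endpoint strings, and the `fetch_config` hook are all hypothetical.

```python
# Illustrative sketch of a dynamically managed KV connection pool: each
# management tick pulls the current configuration, diffs it against the
# live pool, and opens/closes connections to match.

class KVConnectionPool:
    def __init__(self, fetch_config):
        self.fetch_config = fetch_config  # returns {shard: "host:port"}
        self.connections = {}             # shard -> endpoint currently in use

    def refresh(self):
        """One management tick: reconcile pool state with configuration."""
        desired = self.fetch_config()
        # Drop connections for removed shards or changed endpoints
        # (covers primary/standby switchover and IP replacement).
        for shard, endpoint in list(self.connections.items()):
            if desired.get(shard) != endpoint:
                del self.connections[shard]  # real code would close the socket
        # Open connections for new or changed endpoints.
        for shard, endpoint in desired.items():
            if shard not in self.connections:
                self.connections[shard] = endpoint  # real code would dial out

config = {"shard_0": "10.0.0.1:11211", "shard_1": "10.0.0.2:11211"}
pool = KVConnectionPool(lambda: config)
pool.refresh()
# Simulate a primary/standby switchover of shard_1:
config["shard_1"] = "10.0.0.3:11211"
pool.refresh()
print(pool.connections["shard_1"])  # 10.0.0.3:11211
```

The reconciliation is idempotent: running `refresh()` again with an unchanged configuration leaves the pool untouched, which is what allows it to run safely on a fixed polling interval.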
TDDL/DRDS takes this approach so that, on the one hand, the KV connection pool and the SQL connection pool remain independent of each other, and on the other hand, changes to the KV pool stay coordinated with changes to the SQL pool. This way, if the KV pool is ever at risk of instability, the application can switch traffic back to the SQL pool and recover quickly, keeping the risk well under control.
TDDL/DRDS also does other useful stability work for the KV feature. For example, it supports enabling KV per sub-database in grayscale: query traffic for a given sub-database can be switched dynamically between the SQL and KV protocols, transparently to the application. This is ideal for grayscale traffic verification in scenarios where TDDL/DRDS manages many data shards.
Optimized KV communication protocol
The native Memcached protocol separates the columns of a row with the "|" character by default. Simple as this is, the drawback is obvious: if a user record itself contains "|", or stray bytes appear due to mangled Chinese characters, the transmitted result becomes ambiguous and the query returns incorrect data.
To solve this, TDDL/DRDS builds on the native Memcached protocol and designs a new KV protocol following a more conventional scheme: instead of a delimiter, a fixed-length byte header describes the length of each column value in a row, effectively eliminating the native protocol's problem.
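The difference between the two framings can be shown in a few lines. This is a sketch of the length-prefix technique, not the actual TDDL/DRDS wire format; the 4-byte big-endian length header is an assumption chosen for the example.

```python
# Illustrative sketch: encoding a row with a fixed-length length header
# per column instead of a "|" delimiter, so values may contain any bytes.
import struct

def encode_row(columns: list) -> bytes:
    out = b""
    for col in columns:
        out += struct.pack(">I", len(col)) + col  # 4-byte length, then value
    return out

def decode_row(data: bytes) -> list:
    cols, i = [], 0
    while i < len(data):
        (n,) = struct.unpack_from(">I", data, i)
        cols.append(data[i + 4 : i + 4 + n])
        i += 4 + n
    return cols

# A value containing the old delimiter character round-trips safely:
row = [b"1002", b"na|me"]
assert decode_row(encode_row(row)) == row
```

With a delimiter-based framing, the second column above would have been split into two values; with length prefixes, the receiver never has to inspect the payload bytes at all.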
The KV protocol itself is very simple: the returned packet contains only the data itself, so protocol overhead is very low. The SQL protocol, by contrast, returns meta information about the query result alongside the result set (the data type, column name, alias, table name, and database name of each column, and so on). This meta information adds CPU and network overhead to the SQL protocol, and worse, the cost is amplified in KV query scenarios: a KV query usually returns only 1–2 records, so the meta information's share of the returned packet grows significantly. The KV protocol is therefore better suited to KV query scenarios, which is one reason TDDL/DRDS KV queries achieve their throughput gains.
Automatic type conversion of KV results
The data TDDL/DRDS obtains over the KV protocol is all string-typed, and returning raw strings to the business does not meet requirements. TDDL/DRDS must therefore automatically convert the string values of each result column to the proper types, and the conversion must strictly follow MySQL's rules so as to correctly conform to the JDBC ResultSet interface specification.
However, the data returned by the KV protocol carries no column meta information. Before parsing a KV result, TDDL/DRDS therefore fetches the meta information of the table in advance and caches it, so that result values can be type-converted against that meta information during parsing.
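The meta-driven conversion step can be sketched as below. This is an illustration of the idea, not TDDL/DRDS source code (which performs the conversion in Java against JDBC types); the column names and the mini type registry are assumptions for the example.

```python
# Illustrative sketch: converting string-typed KV result values using
# cached table meta information, since the KV packet itself carries none.
from datetime import date
from decimal import Decimal

# Hypothetical cached meta for the users table: column -> MySQL type.
CACHED_META = {"id": "BIGINT", "balance": "DECIMAL", "signup": "DATE", "name": "VARCHAR"}

CONVERTERS = {
    "BIGINT": int,
    "DECIMAL": Decimal,
    "DATE": date.fromisoformat,
    "VARCHAR": str,  # already a string; identity conversion
}

def convert_row(columns: list, raw: list) -> dict:
    """Apply the per-column converter dictated by the cached meta."""
    return {c: CONVERTERS[CACHED_META[c]](v) for c, v in zip(columns, raw)}

row = convert_row(["id", "balance", "name"], ["1002", "9.50", "alice"])
print(row["id"] + 1)  # 1003 -- the id is now an integer, not a string
```

Because the meta is cached ahead of time, this conversion adds no extra round trip on the hot query path.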
TDDL/DRDS has not yet released this feature in Alibaba Cloud public or private cloud products; we will open up this capability gradually as the product evolves. At the product level, we will use a plan cache scheme to further optimize performance, so that the SQL-to-KV path performs as close to losslessly as using KV directly.