A Detailed Description of the Query Planner in SQLite


1.0 Introduction

The task of the query planner is to figure out the best algorithm, or "query plan", for carrying out an SQL statement. With release 3.8.0, the query planner component of SQLite was rewritten so that it runs faster and generates better plans. The rewritten planner is called the "Next Generation Query Planner", or "NGQP".

This article recaps the importance of query planning, describes some of the inherent difficulties of query planning, and outlines how the NGQP deals with those difficulties.

The NGQP is almost always better than the legacy query planner. However, some applications may have unknowingly come to depend on undefined or suboptimal behaviors of the legacy planner, and for those applications the switch to the NGQP can cause performance regressions. This risk has to be taken into account, and a checklist is provided for reducing the risk and for fixing any problems that do come up.

This document focuses on the NGQP. For a more general overview of the SQLite query planner that covers its entire history, see "The SQLite Query Optimizer Overview".

2.0 Background

For queries against a single table with a few simple indexes, there is usually an obvious best choice of algorithm. But for larger and more complex queries, such as multi-way joins over many indexes and subqueries, there can be hundreds, thousands, or millions of reasonable algorithms for computing the result. The job of the query planner is to pick the single "best" query plan from this multitude of possibilities.

The query planner is what makes the SQL database engine so amazingly useful and powerful. (This is true of all SQL database engines, not just SQLite.) The query planner frees the programmer from the chore of selecting a particular query plan by hand, which lets the programmer spend mental energy on higher-level application problems and on providing more value to the end user. For simple queries, where the choice of plan is obvious, this is convenient but not terribly important. But as applications, schemas, and queries grow in complexity, a clever query planner can greatly speed up and simplify application development. There is amazing power in being able to tell the database engine what content is needed and then letting the engine figure out the best way to retrieve it.

Writing a good query planner is more art than science. The planner must work with incomplete information: it cannot know how long any particular plan will actually take without running it. So when comparing two or more plans to see which is "best", the planner makes assumptions and educated guesses, and those guesses will sometimes be wrong. A good query planner is one that finds the right plan often enough that programmers rarely need to think about it.

2.1 Query Planning In SQLite

SQLite computes joins using nested loops, one loop for each table in the join. (Additional loops may be inserted for IN and OR operators in the WHERE clause. SQLite considers those too, but for simplicity we will ignore them in this article.) One or more indexes may be used on each loop to speed the search, or a loop may be a "full table scan" that reads every row of the table. Query planning thus decomposes into two subtasks:

  • picking the nested order of the various loops, and
  • choosing a good index for each loop.

Picking the nesting order is generally the more difficult problem. Once the nesting order of the join is established, the choice of index for each loop is normally obvious.
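
Both decisions can be observed through SQLite's EXPLAIN QUERY PLAN command. The following is a minimal sketch using hypothetical table and index names; the output is paraphrased, and its exact wording varies between SQLite versions:

CREATE TABLE artist(artistid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE track(trackid INTEGER PRIMARY KEY, title TEXT, artistid INTEGER);
CREATE INDEX track_artist ON track(artistid);

EXPLAIN QUERY PLAN
SELECT artist.name, track.title
  FROM artist JOIN track ON track.artistid=artist.artistid;

-- Paraphrased output: the first row is the outer loop, the second row is
-- the inner loop along with the index chosen to drive it:
--   SCAN artist
--   SEARCH track USING INDEX track_artist (artistid=?)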

2.2 The SQLite Query Planner Stability Guarantee

SQLite will always pick the same query plan for any given SQL statement, provided that the following conditions hold (a sketch after this list shows how to check the last two):

  • the database schema has not changed in significant ways, such as adding or dropping indexes (indices),
  • the ANALYZE command has not been rerun,
  • SQLite was not compiled with SQLITE_ENABLE_STAT3 or SQLITE_ENABLE_STAT4, and
  • the same version of SQLite is used.
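
Both of the last two conditions can be checked from SQL itself. A minimal sketch using only built-in facilities:

PRAGMA compile_options;   -- lists options such as ENABLE_STAT4, if present
SELECT sqlite_version();  -- reports the library version, e.g. '3.8.0'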

The SQLite stability guarantee means that if all of your queries run efficiently during testing, and your application does not change the schema, then SQLite will not suddenly decide to start using a different query plan, and possibly cause a performance problem, after your application is released to users. If your application works in the lab, it will keep working the same way after deployment.

Enterprise-class client/server SQL databases do not normally make such a guarantee. In a client/server SQL database engine, the server keeps track of statistics on the sizes of tables and on the quality of indexes (indices), and the query planner uses those statistics to select the best plan. As content is added, deleted, or changed, the statistics evolve, and they may cause the query planner to start using a different plan for some particular query. Usually the new plan is better for the evolved structure of the data, but sometimes the new plan causes a performance reduction. With a client/server database engine there is typically a Database Administrator (DBA) on hand to deal with these rare problems. But DBAs are not available to fix problems in an embedded database like SQLite, so SQLite must be careful to ensure that query plans do not change unexpectedly after deployment.

The SQLite stability guarantee applies to both the legacy query planner and the NGQP.

Note that changing versions of SQLite may change query plans. The same version of SQLite will always pick the same query plan, but if you relink your application against a different version of SQLite, the query plans may change. In rare cases, a SQLite version change can lead to a performance regression. This is one reason to consider statically linking your application against SQLite, rather than using a system-wide SQLite shared library that may change without your knowledge or control.

3.0 A Tricky Situation

"TPC-H Q8" is a test query from Transaction Processing Performance councer. The query planner did not select a good plan for TPC-H Q8 in 3.7.17 and earlier versions of SQLite. And how to adjust the traditional query planner cannot fix this problem. To find a good solution for TPC-H Q8 queries and continuously improve the quality of the SQLite query planner, it is necessary to redesign the query planner. This section will explain why redesign is necessary, what is the difference between NGQP and trying to solve the TPC-H Q8 problem.

3.1 Query Details

TPC-H Q8 is an eight-way join. As noted above, the main task of the query planner is to determine the best nesting order of the eight loops so as to minimize the work needed to complete the join. The following is a simplified model of the TPC-H Q8 case:

In this diagram, the eight tables in the FROM clause of the query are shown as large circles labeled N2, S, L, P, O, C, N1, and R. The arcs in the graph represent the estimated cost of running the loop for the table at the head of the arc, assuming the table at the tail is in an outer loop. For example, the cost of running the S loop inside the L loop is 2.30, and the cost of running the S loop outside the L loop is 9.17.

The "resource consumption" here is calculated by logarithm. Because loops are nested, the total resource consumption is multiplied, rather than the sum. We usually think that a graph carries a weight that needs to be accumulated. However, the graph here shows the value after the logarithm of various resource consumption. It is shown that S consumes about 6.87 less internally. After conversion, the query in which S loop is located in L loop is about 963 times faster than the query in which S loop is located outside L loop.

The arrows originating at the small circle labeled "*" indicate the cost of running each loop with no dependencies. The outer loop must use this "*" cost. Inner loops may use either the "*" cost or a cost that assumes one of the other terms is in an outer loop, whichever gives the lowest overall result. One can think of the "*" costs as shorthand for multiple arcs, one from every other node in the graph to the node in question. The graph is therefore "complete": every pair of nodes has arcs in both directions, some explicit and some implied.

Finding the best query plan is therefore equivalent to finding the minimum-cost path through the graph that visits each node exactly once.

(Note: the cost estimates in the TPC-H Q8 graph were computed by the query planner of SQLite 3.7.16 and converted using a natural logarithm.)

3.2 Complexity

The query planner problem shown above is a simplification. The costs are estimates: we cannot know the true cost of running a loop until we actually run the loop. SQLite estimates loop costs based on the constraints present in the WHERE clause and on the indexes that can be used. Such estimates are usually quite good, but sometimes they diverge from reality. Running the ANALYZE command to gather additional statistics about the database sometimes enables SQLite to make more accurate cost estimates.

The cost is made up of several numbers, not just one. SQLite computes several different estimated costs for each loop that apply at different stages. For example, there is a "setup" cost that is incurred just once when the query starts; the setup cost covers, for instance, building an automatic index for a table that has no index. Then there is the cost of running each step of the loop, and an estimate of the number of rows the loop generates. The row count is needed to estimate the cost of the inner loops. And if the query has an ORDER BY clause, the cost of sorting must be considered as well.

The dependencies in a typical query are not necessarily on a single loop, so the dependency model may not be expressible as a graph. For example, one of the WHERE clause constraints might be S.a = L.b + P.c, implying that the S loop must be inside both the L and P loops. Such a dependency cannot be drawn as a graph, because there is no way for an arc to originate from two or more nodes at once.

If the query contains an ORDER BY or GROUP BY clause, or if it uses the DISTINCT keyword, then it is advantageous to pick a path through the graph that causes rows to emerge naturally in sorted order, so that no separate sorting pass is needed. Automatic elimination of ORDER BY sorting can make a large performance difference, so it is another factor a complete planner implementation must take into account.
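
As a small illustration, consider a sketch with hypothetical names (the EXPLAIN QUERY PLAN output is paraphrased and varies by version). An index on the ORDER BY column lets rows emerge already sorted, eliminating the separate sorting pass:

CREATE TABLE log(id INTEGER PRIMARY KEY, ts INTEGER, msg TEXT);

EXPLAIN QUERY PLAN SELECT msg FROM log ORDER BY ts;
--   SCAN log
--   USE TEMP B-TREE FOR ORDER BY    (a separate sorting pass is required)

CREATE INDEX log_ts ON log(ts);

EXPLAIN QUERY PLAN SELECT msg FROM log ORDER BY ts;
--   SCAN log USING INDEX log_ts     (rows emerge in order; no sorting pass)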

In the TPC-H Q8 query, the setup costs are all negligible, every dependency is between a pair of individual nodes, and there are no ORDER BY, GROUP BY, or DISTINCT clauses. So for TPC-H Q8 the graph above is a fair representation of what needs to be computed. A general query may involve many additional complications; for clarity, the remainder of this article ignores the factors that complicate the picture.

3.3 Finding The Best Query Plan

Prior to version 3.8.0, SQLite always used the "Nearest Neighbor" or "NN" heuristic to search for the best query plan. The NN heuristic makes a single traversal of the graph, always choosing the lowest-cost arc as its next step. The NN heuristic works surprisingly well in most cases, and it is fast, so SQLite can quickly find good plans even for joins of up to 64 tables. By contrast, other SQL database engines that do more extensive searching bog down when the number of tables in a join grows beyond 10 or 15.

Unfortunately, the plan that the NN heuristic computes for TPC-H Q8 is not optimal. The plan found by NN is R-N1-N2-S-C-O-L-P with a cost of 36.92, meaning that the R table is run in the outermost loop, N1 in the next inner loop, N2 in the third loop, and so on down to P in the innermost loop. The shortest path through the graph (found via exhaustive search) is P-L-O-C-N1-R-S-N2 with a cost of 27.38. The difference might not look like much, but remember that the costs are logarithmic: the shortest path is almost 750 times faster than the path found with the NN heuristic.

One way to fix this would be to change SQLite to do an exhaustive search for the best path. But the time needed for an exhaustive search is proportional to K! (where K is the number of tables in the join; for example, 10! is 3,628,800), so with more than about 10 tables in a join, sqlite3_prepare() would take far too long to run.

3.4 The "N Nearest Neighbors" Heuristic (or "N3")

The Next Generation Query Planner uses a new heuristic for finding the best path through the graph: "N Nearest Neighbors" (hereafter "N3"). With N3, instead of committing to a single nearest neighbor at each step, the algorithm keeps track of the N best paths at each step, where N is a small integer.

Suppose N = 4. Then for the TPC-H Q8 graph, the first step finds the four shortest paths that visit any single node:
R (cost: 3.56)
N1 (cost: 5.52)
N2 (cost: 5.52)
P (cost: 7.71)

The second step finds the four shortest paths that visit two nodes and begin with one of the four paths found in the previous step. When two or more paths are equivalent (they visit the same set of nodes, possibly in a different order), only the lowest-cost path is retained. The paths found are:
 
R-N1 (cost: 7.03)
R-N2 (cost: 9.08)
N2-N1 (cost: 11.04)
R-P (cost: 11.27)

The third step starts with the four shortest two-node paths and finds the four shortest three-node paths:
 
R-N1-N2 (cost: 12.55)
R-N1-C (cost: 13.43)
R-N1-P (cost: 14.74)
R-N2-S (cost: 15.08)

And so on. The TPC-H Q8 query has eight nodes, so this process repeats eight times in total. For a join of K tables, the storage requirement is O(N) and the computation time is O(K*N), which is much faster than an exact exhaustive search.

But what value should be chosen for N? One might try N = K, which makes the algorithm O(K^2); that is in fact still fast, since the maximum value of K is 64 and K rarely exceeds 10. But even N = K is not enough for TPC-H Q8. With N = 8 on TPC-H Q8, the N3 algorithm finds the plan R-N1-C-O-L-S-N2-P with a cost of 29.78. That is a big improvement over NN, but it is still not optimal. N3 finds the optimal plan for the TPC-H Q8 query when N is 10 or greater.

The initial implementation of the Next Generation Query Planner chooses N = 1 for simple queries, N = 5 for two-way joins, and N = 10 for joins of three or more tables. The formula for selecting N may change in subsequent releases.

4.0 Risks Of Upgrading To The Next Generation Query Planner

For most applications, upgrading from the legacy query planner to the Next Generation Query Planner requires little thought or effort. Simply replace the older SQLite version with the newer one and recompile, and the application will run that much faster. There are no API changes and no complicated migration procedures.

However, as with any change to a query planner, upgrading to the Next Generation Query Planner carries a small risk of performance regressions. The problem is not that the NGQP is incorrect, or buggy, or inferior to the legacy query planner. Given reliable information about the selectivity of indexes, the NGQP should always pick a plan that is as good as or better than before. The problem is that some applications use low-quality, low-selectivity indexes and have never run ANALYZE. The legacy query planner considered far fewer possible implementations for each query, so by dumb luck it may have stumbled onto a good plan. The NGQP considers many more plan variations, and in theory it may choose a plan that looks better on paper but performs worse in practice on the actual distribution of the data.

Technical points:

  • As long as the Next Generation Query Planner has access to accurate ANALYZE data in the SQLITE_STAT1 table, it will always find a query plan that is as good as, or better than, the plan chosen before.
  • As long as the schema contains no index whose leftmost column has more than 10 or 20 rows with the same value, the Next Generation Query Planner will always find a good query plan.

Not every application meets these conditions. Fortunately, even when they are not met, the Next Generation Query Planner will usually still find a good query plan. Performance regressions are possible, but rare.

4.1 Case Study: Upgrading Fossil To The Next Generation Query Planner

Fossil is the DVCS (distributed version control system) used to track all of the SQLite source code. A Fossil repository is an SQLite database file. (Readers are invited to ponder this recursion as an independent exercise.) Fossil is both the version control system for SQLite and a test platform for SQLite. Whenever SQLite is enhanced, Fossil is one of the first applications to test and evaluate those enhancements, so Fossil was an early adopter of the Next Generation Query Planner.

Unfortunately, the Next Generation Query Planner caused a performance regression in Fossil.

Among the many reports Fossil produces is a timeline of changes on a single branch, showing all merges into and out of that branch. See http://www.sqlite.org/src/timeline?nd&n=200&r=trunk for a typical example of such a timeline. Generating such a report normally takes just a few milliseconds. But after the upgrade to the Next Generation Query Planner, we found that generating this report for the trunk branch of the repository took closer to 10 seconds.


The core query used to generate the branch timeline is shown below. (Readers are not expected to understand the details of this query; commentary follows.)
 

SELECT
  blob.rid AS blobRid,
  uuid AS uuid,
  datetime(event.mtime,'localtime') AS timestamp,
  coalesce(ecomment, comment) AS comment,
  coalesce(euser, user) AS user,
  blob.rid IN leaf AS leaf,
  bgcolor AS bgColor,
  event.type AS eventType,
  (SELECT group_concat(substr(tagname,5), ', ')
     FROM tag, tagxref
    WHERE tagname GLOB 'sym-*'
      AND tag.tagid=tagxref.tagid
      AND tagxref.rid=blob.rid
      AND tagxref.tagtype>0) AS tags,
  tagid AS tagid,
  brief AS brief,
  event.mtime AS mtime
  FROM event CROSS JOIN blob
 WHERE blob.rid=event.objid
   AND (EXISTS(SELECT 1 FROM tagxref
                WHERE tagid=11 AND tagtype>0 AND rid=blob.rid)
        OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid
                   WHERE tagid=11 AND tagtype>0 AND pid=blob.rid)
        OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=pid
                   WHERE tagid=11 AND tagtype>0 AND cid=blob.rid))
 ORDER BY event.mtime DESC
 LIMIT 200;

This query is not especially complicated, but even so it replaces hundreds, maybe thousands, of lines of procedural code. The gist of the query is this: scan down the EVENT table looking for the most recent 200 check-ins that satisfy any one of three conditions:

  1. The check-in has the "trunk" tag.
  2. The check-in has a child check-in that has the "trunk" tag.
  3. The check-in has a parent check-in that has the "trunk" tag.

The first condition displays all the trunk check-ins themselves, and the second and third conditions include check-ins that merge into or fork from the trunk. These three conditions are implemented by the three OR-connected EXISTS subqueries in the WHERE clause. The slowdown that occurred with the Next Generation Query Planner was caused by the second and third conditions. The problem is the same in each, so we will examine only the second. The subquery of the second condition can be rewritten as follows (with minor and immaterial simplifications):
 

SELECT 1
  FROM plink JOIN tagxref ON tagxref.rid=plink.cid
 WHERE tagxref.tagid=$trunk
   AND plink.pid=$ckid;

The PLINK table holds the parent-child relationship between check-ins. The TAGXREF table maps tags onto check-ins. For reference, the relevant portions of the schema for these two tables look like this:
 

CREATE TABLE plink(
  pid INTEGER REFERENCES blob,
  cid INTEGER REFERENCES blob
);
CREATE UNIQUE INDEX plink_i1 ON plink(pid,cid);

CREATE TABLE tagxref(
  tagid INTEGER REFERENCES tag,
  mtime TIMESTAMP,
  rid INTEGER REFERENCES blob,
  UNIQUE(rid, tagid)
);
CREATE INDEX tagxref_i1 ON tagxref(tagid, mtime);


There are only two reasonable ways to implement this query. (There are many other possible algorithms, but none of the others are contenders for being the "best" algorithm, so they are ignored here.)

  • Find all the children of check-in $ckid and test each one to see whether it has the $trunk tag.
  • Find all the check-ins that have the $trunk tag and test each one to see whether it is a child of $ckid.

Intuitively, we humans perceive algorithm 1 as the better choice. Each check-in is likely to have few children (one child being the most common case), and each child can be tested for the $trunk tag in logarithmic time. Indeed, algorithm 1 is the faster choice in practice. But the Next Generation Query Planner does not have intuition; it must use hard math, and algorithm 2 is slightly better by the math available to it. This is because, lacking other information, the NGQP must assume that the indexes PLINK_I1 and TAGXREF_I1 are of equal quality and equal selectivity. Algorithm 2 uses one column of the TAGXREF_I1 index and both columns of the PLINK_I1 index, whereas algorithm 1 uses only the first column of each index. Because algorithm 2 makes use of more index columns, the NGQP is correct, by its own standards, to judge it the better algorithm. The estimated costs are very close, and algorithm 2 barely edges out algorithm 1, but given the information at hand, choosing algorithm 2 really was the correct call.


Unfortunately, in practice, algorithm 2 is slower than algorithm 1.

The problem is that the indexes are not of equal quality. A check-in is likely to have only one child, so the first column of PLINK_I1 usually narrows the search to a single row. But thousands of check-ins carry the "trunk" tag, so the first column of TAGXREF_I1 does little to narrow the search.

The Next Generation Query Planner has no way of knowing that TAGXREF_I1 is almost useless for this query, unless ANALYZE has been run on the database. The ANALYZE command gathers quality statistics on each index and stores them in the SQLITE_STAT1 table. With access to those statistics, the NGQP easily chooses algorithm 1 as the best algorithm, by a wide margin.
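
To see what the planner sees, one can run ANALYZE and inspect SQLITE_STAT1 directly. The following is a sketch; the numbers shown are invented for illustration. Each "stat" value lists the table's row count followed by the average number of rows that remain as each index column in turn is constrained:

ANALYZE;
SELECT * FROM sqlite_stat1 WHERE tbl IN ('plink','tagxref');
-- Hypothetical output:
--   plink|plink_i1|10000 1 1         (pid narrows to ~1 row: high quality)
--   tagxref|tagxref_i1|10000 2500 1  (tagid still leaves ~2500 rows: low quality)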


Why didn't the legacy query planner choose algorithm 2? Simple: the NN heuristic never even considered algorithm 2. Diagrams of the planning problem look like this:

In the case of "not running ANALYZE" as shown in the left figure, NN algorithm chooses loop P9PLINK) as the External Loop, because 4.9 is smaller than 5.2, the result is to select the P-T path, that is, algorithm 1. The NN algorithm only finds an Optimal Path in each step. Therefore, it ignores the fact that 5.2 + 4.4 is slightly better than 4.9 + 4.8. However, the N3 algorithm traces five optimal paths to two connections, so it finally chooses the T-P path because the overall resource consumption of this path is less. Path T-P is algorithm 2.

Note: if ANALYZE has been run, the cost estimates are much closer to reality, and then both NN and N3 choose algorithm 1.

(Note: the cost estimates in the two most recent diagrams were computed by the Next Generation Query Planner using a base-2 logarithm and slightly different cost assumptions than the legacy query planner. Hence the cost estimates in these last two diagrams are not directly comparable to the estimates in the TPC-H Q8 graph.)

4.2 Fixing The Problem

Running ANALYZE on the repository database fixed the performance problem immediately. But we want Fossil to be robust and to always run fast, whether or not its repository has been analyzed. For this reason, the query was modified to use the CROSS JOIN operator instead of the plain JOIN operator. SQLite will not reorder the tables of a CROSS JOIN. This is a long-standing feature of SQLite, deliberately designed to let knowledgeable programmers force SQLite into a particular nested-loop order. Once the join was changed to CROSS JOIN (the addition of a single keyword), the Next Generation Query Planner was forced into the faster algorithm 1, regardless of whether statistics had been gathered with ANALYZE.
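
Applied to the simplified subquery shown in section 4.1, the rewrite looks roughly like this (a sketch; the actual change made in Fossil may differ in its details):

SELECT 1
  FROM plink CROSS JOIN tagxref
 WHERE tagxref.rid=plink.cid
   AND tagxref.tagid=$trunk
   AND plink.pid=$ckid;
-- CROSS JOIN pins PLINK as the outer loop, so the search starts from the
-- children of $ckid (algorithm 1) whether or not ANALYZE has been run.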

We said above that algorithm 1 is faster, but strictly speaking that is not always true. Algorithm 1 is faster on typical repositories. But one can construct a repository in which every check-in is on its own uniquely named branch and all check-ins are children of the root check-in. For such a repository, TAGXREF_I1 would be more selective than PLINK_I1, and algorithm 2 really would be faster. However, such repositories are exceedingly unlikely in practice, so hard-coding the nested loop order with the CROSS JOIN syntax is an appropriate fix in this case.

5.0 Checklist For Avoiding Or Fixing Query Planner Problems

Don't panic! Cases where the query planner picks an inferior plan are actually quite rare. You are unlikely to run into such a problem in your application. If you are not having performance problems, you do not need to worry about any of this.

Create good indexes. Most SQL performance problems arise not from query planner problems but from a lack of appropriate indexes. Make sure indexes are available to assist all of your large queries. Most performance problems can be fixed by one or two CREATE INDEX commands, with no changes to application code.
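
For example, here is a minimal sketch using hypothetical names. A query that was forced to scan the entire table becomes an indexed search after a single CREATE INDEX:

CREATE TABLE orders(orderid INTEGER PRIMARY KEY, custid INTEGER, total REAL);

-- Without an index on custid, this query must scan every row of orders:
SELECT total FROM orders WHERE custid=417;

-- One statement fixes it; the same query now does an indexed search:
CREATE INDEX orders_custid ON orders(custid);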

Avoid creating low-quality indexes. A low-quality index (for the purposes of this discussion) is one where the leftmost column of the index has more than 10 or 20 rows in the table with the same value. In particular, avoid using boolean or "enumeration" columns as the leftmost column of an index.

The Fossil performance problem described in the previous section arose because the leftmost column (the TAGID column) of the TAGXREF_I1 index on the TAGXREF table had more than ten thousand rows with the same value.
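
Here is a sketch of the kind of index to avoid, using hypothetical names. A boolean column in the leftmost position leaves huge numbers of rows sharing the same key value:

CREATE TABLE users(userid INTEGER PRIMARY KEY, is_active BOOLEAN, created INTEGER);

-- Low quality: roughly half of the table shares each is_active value:
CREATE INDEX users_bad ON users(is_active, created);

-- Better: lead with a selective column instead:
CREATE INDEX users_good ON users(created);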

If you must use a low-quality index, be sure to run ANALYZE. A low-quality index will not confuse the query planner as long as the planner knows that the index is of low quality. The planner learns this from the contents of the SQLITE_STAT1 table, which is computed by the ANALYZE command.

Of course, ANALYZE only works well if the database already contains a meaningful amount of content. If you are creating a new database that you expect to accumulate a lot of data, you can run the command "ANALYZE sqlite_master" to create the SQLITE_STAT1 table, then prepopulate it (using ordinary INSERT statements) with content that describes a well-populated database typical of your application, perhaps content captured by running the full ANALYZE command on a well-populated template database in the lab.
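
A sketch of that technique follows. The statistics strings here are placeholders; in practice you would copy them out of the SQLITE_STAT1 table of a fully analyzed template database:

-- Creates an empty SQLITE_STAT1 table without gathering real statistics:
ANALYZE sqlite_master;

-- Pre-load statistics captured from a well-populated template database
-- (the values below are illustrative placeholders):
INSERT INTO sqlite_stat1(tbl,idx,stat)
VALUES('tagxref','tagxref_i1','10000 2500 1'),
      ('plink','plink_i1','10000 1 1');

-- Rerunning "ANALYZE sqlite_master" prompts the query planner to reload
-- the hand-loaded statistics on the current connection:
ANALYZE sqlite_master;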

Instrument your code. Add logic that lets you know quickly and easily which queries are taking too much time, so that you can spend effort on only the queries that actually need it.

Use the CROSS JOIN syntax to force a particular nested loop order on queries that might use low-quality indexes in a database that has not been analyzed. SQLite treats the CROSS JOIN operator specially, forcing the table on the left to be an outer loop relative to the table on the right.

Avoid this step if there is any other way to solve the problem, since it works against one of the great advantages of the whole SQL language concept, namely that application developers do not need to get involved in query planning. If you do use CROSS JOIN, wait until late in the development cycle to do so, and comment the use of CROSS JOIN carefully so that it can be removed later if possible. Avoid using CROSS JOIN early in the development cycle, as doing so is a premature optimization, which is famously the root of all evil.

The "+" operator is used to cancel some restrictions of the WHERE clause. When a higher quality index can be used for a specific query, if the query planner still insists on selecting the index with poor quality, therefore, exercise caution when using the single-object operator "+" in the WHERE clause. In this way, the query planner can force the query planner to not use poor-quality indexes. If possible, add this operator as carefully as possible, and avoid using this operator especially early in the application development cycle. Note: adding a single-object operator "+" to an equal sign expression closely related to the type may change the result of this expression.

Use the INDEXED BY syntax to force problematic queries to use a specific index. As with the previous two bullets, avoid this technique if you can, and especially avoid it early in development, since it is clearly a premature optimization.
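
A sketch of the INDEXED BY syntax, reusing the hypothetical tables from the previous sketch:

SELECT id FROM events INDEXED BY events_ts
 WHERE status=2 AND ts>1700000000;
-- If events_ts cannot be used to satisfy the query, SQLite reports an error
-- rather than silently picking another plan, so the dependency on that
-- index is explicit.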


6.0 Conclusion

The query planner in SQLite normally does a terrific job of selecting fast algorithms for running SQL statements. This was true of the legacy query planner, and it is even more true of the new Next Generation Query Planner. It occasionally happens that, given incomplete information, the query planner selects an inferior plan. This happens less often with the NGQP than with the legacy planner, but it can still happen. Even in those rare cases, all the application developer needs to do is understand what is going on and help the query planner do the right thing. In the common case, the NGQP is simply a new enhancement to SQLite that makes applications run a little faster and that requires no new thought or action on the developer's part.
