The SQLite Query Planner in Detail



1.0 Introduction



The job of the query planner is to figure out the best algorithm, or "query plan", for carrying out an SQL statement. Beginning with SQLite version 3.8.0, the query planner component was rewritten so that it runs faster and produces better plans. This rewrite is called the "next-generation query planner", or "NGQP".



This article recaps the importance of query planning, describes some of the difficulties inherent in it, and outlines how the NGQP solves those problems.



The NGQP is almost always better than the legacy query planner. However, some applications may have unknowingly come to depend on undefined or suboptimal behaviors of the legacy planner, and for those applications upgrading to the NGQP could cause problems such as performance regressions. This risk is taken into account, and a set of checks is provided to reduce it and to resolve any problems that do arise.



This document focuses on the NGQP. For a broader overview of the SQLite query planner spanning the entire history of SQLite, see "The SQLite Query Optimizer Overview".
2.0 Background



For a query against a single table with a few simple indices, there is usually an obvious best choice of algorithm. For larger and more complex queries, however, such as multi-way joins with many indices and subqueries, there may be hundreds, thousands, or millions of reasonable algorithms for computing the result. The job of the query planner is to choose the single "best" plan from this multitude of possibilities.



The query planner is what makes SQL database engines so amazingly useful and powerful. (This is true of all SQL database engines, not just SQLite.) The query planner frees the programmer from the chore of selecting a particular query plan, and thereby allows the programmer to spend mental energy on higher-level application issues and on delivering more value to the end user. For simple queries, where the choice of query plan is obvious, this is convenient but not terribly important. But as applications, schemas, and queries grow more complex, a clever query planner can greatly speed and simplify the work of application development. There is amazing power in being able to tell the database engine what content you want, and then letting the engine work out the best way to retrieve it.



Writing a good query planner is more art than science. The planner must work with incomplete information: it cannot know how long any particular plan will actually take without running it, and it does not get to run more than one. So when comparing two or more plans to figure out which is "best", the planner makes assumptions and guesses, and those assumptions and guesses are sometimes wrong. A good planner is one that finds the correct plan often enough that programmers rarely need to think about it.
2.1 Query Planning in SQLite



SQLite computes joins using nested loops, one loop for each table in the join. (Additional loops may be inserted to handle IN and OR operators in the WHERE clause. SQLite considers those too, but for simplicity we ignore them in this article.) On each loop, one or more indices may be used to speed the search, or the loop may be a "full table scan" that reads every row of the table. Query planning thus decomposes into two subtasks:


    • picking the nesting order of the various loops, and
    • choosing good indices for each loop.


Picking the nesting order is generally the more challenging problem.



Once the nesting order of the join is established, the choice of indices for each loop is usually obvious.
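Both decisions are easy to observe from the outside. The EXPLAIN QUERY PLAN command reports the nesting order and the index choices that SQLite has settled on for a statement. Below is a minimal sketch; the tables and the index are hypothetical, invented only for this illustration:

CREATE TABLE artist(artistid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE track(artistid INTEGER, title TEXT);
CREATE INDEX track_i1 ON track(artistid);

-- Ask SQLite how it intends to execute a two-way join.
EXPLAIN QUERY PLAN
SELECT name, title
  FROM artist JOIN track ON track.artistid=artist.artistid;
-- The output lists one row per loop, outermost first: typically a
-- scan of artist as the outer loop, then a search of track using
-- the index track_i1 as the inner loop.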



2.2 The SQLite Query Planner Stability Guarantee



For any given SQL statement, SQLite will usually choose the same query plan, provided that:


    • there are no significant changes to the database schema, such as adding or dropping indices,
    • the ANALYZE command has not been rerun,
    • SQLite was not compiled with SQLITE_ENABLE_STAT3 or SQLITE_ENABLE_STAT4, and
    • the same version of SQLite is used.


This stability guarantee means that if your queries run efficiently during testing, and your application does not change the schema, then SQLite will not suddenly start using a different query plan, possibly introducing performance problems, after your application ships to users. If your application works in the lab, it will keep working the same way after deployment.



Enterprise-class client/server SQL databases generally do not make this guarantee. In client/server engines, the server keeps statistics on the sizes of tables and the quality of indices, and the query planner uses those statistics to choose the optimal plan. As content is added, deleted, or changed, the statistics evolve and may cause the planner to use a different plan for some queries. Usually the new plan is better for the changed shape of the data, but sometimes it causes a performance reduction. With a client/server engine there is usually a database administrator (DBA) on hand to deal with these rare problems. But a DBA is not available to fix problems in an embedded database like SQLite, and hence SQLite must be careful to ensure that query plans do not change unexpectedly after deployment.



The SQLite stability guarantee applies to both the legacy query planner and the NGQP.



It is important to note that changing versions of SQLite may cause changes in query plans. The same version of SQLite will usually choose the same query plan, but if you relink your application against a different version of SQLite, query plans may change. In rare cases a SQLite version change can lead to a performance regression. This is one reason to consider statically linking your application against SQLite rather than using a system-wide SQLite shared library, which might change without your knowledge or control.
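One practical way to lean on this guarantee is to record the plans of performance-critical queries during testing and compare them after relinking against a new SQLite. A minimal sketch, with a hypothetical query standing in for one of your own:

EXPLAIN QUERY PLAN
SELECT o.total
  FROM orders AS o JOIN customers AS c ON o.custid=c.custid
 WHERE c.region='EU';
-- Save this output with the test results; a diff after an upgrade
-- flags any query whose plan has silently changed.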
3.0 A Tricky Situation



"Tpc-h Q8" is a test query from the transaction Processing Performance Council. The Query Planner does not choose a good plan for tpc-h Q8 in 3.7.17 and previous versions of SQLite. and is determined to adjust the traditional query planner can not fix this problem. In order to find a good solution for tpc-h Q8 queries and to continually improve the quality of the SQLite query planner, it is necessary to redesign the query planner. This section will explain why redesign is necessary, ngqp what is different and try to solve tpc-h Q8 problems.



3.1 Query Details



TPC-H Q8 is an eight-way join. As observed above, the main task of the query planner is to determine the best nesting order of the eight loops so as to minimize the work needed to complete the join. A simplified model of this problem for TPC-H Q8 is shown in the following diagram:

[Figure: TPC-H Q8 join cost graph; one node per FROM-clause table (N2, S, L, P, O, C, N1, R), with arcs giving estimated costs]
In the diagram, each of the 8 tables in the FROM clause of the query is shown as a large circle labeled with its FROM-clause alias: N2, S, L, P, O, C, N1, and R. The arcs represent the estimated cost of computing each term assuming that the table at the origin of the arc is in the outer loop. For example, the cost of running the S loop inside of the L loop is 2.30, while the cost of running the S loop outside of the L loop is 9.17.



The "resource consumption" here is calculated by logarithmic operation. Because loops are nested, the total resource consumption is multiplied, not added. It is generally assumed that the graph is weighted by the cumulative weight, but the figure here shows the value of the various resource consumption logarithms. The figure above shows that s is less than 6.87 in the interior of the L loop, and the query that is the S loop inside the L cycle is about 963 times times faster than the queries in the S loop outside the L loop.



The arrows from the small circle labeled "*" indicate the cost of running each loop with no dependencies, that is, as the outermost loop. An outer loop must use its "*" cost. Inner loops may use either the "*" cost or a cost assuming one of the other terms is in an outer loop, whichever gives the lowest result. The "*" costs can be thought of as a shorthand for multiple arcs, one from every other node in the graph. The graph is therefore "complete", meaning there are two arcs (one in each direction) between every pair of nodes, some explicit and some implied.



Finding the best query plan is thus equivalent to finding the minimum-cost path through the graph that visits each node exactly once.



(Note: the cost estimates in the TPC-H Q8 graph were computed by the query planner of SQLite 3.7.16 and converted using a natural logarithm.)



3.2 Complications



The query-planner problem presented above is a simplification. The costs are estimates: we cannot know the true cost of running a loop until we actually run it. SQLite estimates loop costs based on the constraints in the WHERE clause and the indices that are available. These estimates are usually pretty good, but they can occasionally be far off. Running the ANALYZE command to gather additional statistics about the database can sometimes enable SQLite to estimate costs more accurately.



The cost is made up of multiple numbers, not the single number shown in the graph. SQLite computes several different estimated costs for each loop that apply at different stages. For example, there is a "setup" cost that is incurred only once when the query starts; the setup cost is the cost of building an automatic index for a table that has no index. Then there is the cost of running each step of the loop. Finally, there is an estimate of the number of rows the loop generates, which is the information needed to estimate the costs of inner loops. Sorting costs may also come into play if the query has an ORDER BY clause.
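The setup cost is easy to see in isolation. In the sketch below (hypothetical tables), the join column of the inner table has no index, so SQLite may build a transient one when the query starts:

CREATE TABLE a(x INTEGER, y TEXT);
CREATE TABLE b(x INTEGER, z TEXT);   -- note: no index on b.x

EXPLAIN QUERY PLAN
SELECT y, z FROM a JOIN b ON a.x=b.x;
-- Typical output shows the inner loop as
--   SEARCH b USING AUTOMATIC COVERING INDEX (x=?)
-- The one-time cost of building that transient index is the
-- "setup" cost described above.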



Moreover, dependencies in a query do not necessarily involve a single other loop, so the dependency model may not be expressible as a graph at all. For example, a WHERE clause constraint might be S.a=L.b+P.c, implying that the S loop must be inside both the L loop and the P loop. Such a dependency cannot be drawn in a graph, because there is no way for an arc to originate at two or more nodes at once.



If the query contains an ORDER BY or GROUP BY clause, or uses the DISTINCT keyword, then it is advantageous to choose a path through the graph that delivers rows in sorted order automatically, making a separate sorting step unnecessary. Automatic elimination of ORDER BY clauses can make a large performance difference, so it is another factor a complete planner implementation must consider.



In the TPC-H Q8 query, all setup costs are negligible, all dependencies are between individual nodes, and there is no ORDER BY, GROUP BY, or DISTINCT clause. So for TPC-H Q8, the graph above is a reasonable representation of what must be computed. A general query may involve many additional complications, and to keep the explanation clear, the remainder of this article ignores most of the factors that complicate the problem.



3.3 Finding the Best Query Plan



Prior to version 3.8.0, SQLite always used the "nearest neighbor" or "NN" heuristic to search for the best query plan. The NN heuristic makes a single traversal of the graph, always choosing the lowest-cost arc as its next step. The NN heuristic works remarkably well in most cases, and it is fast, so SQLite finds good plans quickly even for joins of up to 64 tables. By contrast, other database engines that perform more extensive searches grind to a halt when the number of tables in a join exceeds 10 or 15.



Unfortunately, the plan computed by NN for TPC-H Q8 is not optimal. The plan found by NN is R-N1-N2-S-C-O-L-P, with a cost of 36.92. That notation means the R table is run in the outermost loop, N1 in the next inner loop, N2 in the third loop, and so on down to P, which is in the innermost loop. The shortest path through the graph (found by exhaustive search) is P-L-O-C-N1-R-S-N2, with a cost of 27.38. The difference might not look like much, but remember that the costs are logarithmic, so the shortest path is nearly 750 times faster than the path found by the NN heuristic.



One way to fix this would be to make SQLite find the best path by exhaustive search. But the time needed for an exhaustive search grows as K! (where K is the number of tables in the join), so beyond roughly a 10-way join, the time to run sqlite3_prepare() becomes prohibitive.
3.4 The N Nearest Neighbors Heuristic, or "N3"



The next-generation query planner uses a new heuristic for finding the best path through the graph: "N nearest neighbors" (hereafter "N3"). With N3, instead of choosing a single nearest neighbor at each step, the algorithm keeps track of the N best paths at each step, where N is a small integer.



Suppose N=4. Then, for the TPC-H Q8 graph, the first step finds the four shortest paths that visit any single node:
R (cost: 3.56)
N1 (cost: 5.52)
N2 (cost: 5.52)
P (cost: 7.71)



The second step finds the four shortest two-node paths that begin with one of the four paths from the previous step. Where two or more paths are equivalent (they visit the same set of nodes, possibly in a different order), only the first-found, lowest-cost path is retained. The following paths are found:

R-N1 (cost: 7.03)
R-N2 (cost: 9.08)
N2-N1 (cost: 11.04)
R-P (cost: 11.27)



The third step starts with the four shortest two-node paths and computes the four shortest three-node paths:

R-N1-N2 (cost: 12.55)
R-N1-C (cost: 13.43)
R-N1-P (cost: 14.74)
R-N2-S (cost: 15.08)



And so on. The TPC-H Q8 query has 8 nodes, so this process repeats 8 times in total. In the general case of a K-way join, the storage requirement is O(N) and the computation time is O(K*N), significantly faster than the O(2^K) of an exact search.



But what value should N take? One might try N=K, which makes the algorithm O(K^2); that is still quite fast, since K has a maximum of 64 and rarely exceeds 10, but it is not enough to solve the TPC-H Q8 problem. With N=8 on TPC-H Q8, the N3 algorithm finds the plan R-N1-C-O-L-S-N2-P, with a cost of 29.78. That is a big improvement over NN, but still not optimal. N3 finds the optimal plan for TPC-H Q8 when N is 10 or greater.



The initial implementation of the next-generation query planner chooses N=1 for simple queries, N=5 for two-way joins, and N=10 for joins of three or more tables. Subsequent releases may change the rule for selecting N.



4.0 Risks of Upgrading to the Next-Generation Query Planner



For most applications, upgrading from the legacy query planner to the NGQP requires little thought and little effort. Simply replace the older SQLite with the newer version and recompile, and the application will run somewhat faster. There are no API changes and no modifications to complicated procedures.



But as with any query planner change, upgrading to the NGQP does carry a small risk of performance regressions. The problem is not that the NGQP is incorrect, or buggy, or inferior to the legacy planner. Given reliable information about the selectivity of indices, the NGQP should always pick a plan as good as or better than before. The problem is that some applications may use low-quality, low-selectivity indices and may not run ANALYZE. The legacy planner examined far fewer possible implementations for each query, so by luck it may have stumbled upon a good plan. The NGQP, examining more implementations, may choose a plan that looks better on paper, assuming good indices, but that performs poorly given the actual distribution of the data.



Technical points:


    • As long as the NGQP has access to accurate ANALYZE data in the SQLITE_STAT1 table, it will always find a query plan that performs as well as, or better than, the plan found by the legacy query planner.
    • As long as the schema contains no index whose leftmost column has the same value in more than 10 or 20 rows, the NGQP will always find a good query plan.


Not all applications meet these conditions. Fortunately, even when they are not met, the NGQP will usually still find good query plans. Performance regressions remain possible, though rare.



4.1 Case Study: Upgrading Fossil to the Next-Generation Query Planner



The Fossil DVCS is the version-control system used to track all of the SQLite source code. A Fossil repository is a SQLite database file. (Readers are invited to ponder that recursion as an independent exercise.) Fossil is both the version-control system for SQLite and a test platform for it. Whenever SQLite is enhanced, Fossil is one of the first applications to test and evaluate those enhancements, so Fossil was an early adopter of the NGQP.



Unfortunately, the NGQP caused a performance regression in Fossil.



Among the many reports Fossil produces is a timeline of changes on a single branch, showing all merges into and out of that branch. See http://www.sqlite.org/src/timeline?nd&n=200&r=trunk for a typical example. Generating such a report normally takes only a few milliseconds. But after upgrading to the NGQP, we found that it took closer to 10 seconds for the trunk branch of the repository.




The core query used to generate the branch timeline is shown below. (Readers are not expected to understand the details of this query; commentary follows.)


SELECT
   blob.rid AS blobRid,
   uuid AS uuid,
   datetime(event.mtime,'localtime') AS timestamp,
   coalesce(ecomment, comment) AS comment,
   coalesce(euser, user) AS user,
   blob.rid IN leaf AS leaf,
   bgcolor AS bgColor,
   event.type AS eventType,
   (SELECT group_concat(substr(tagname,5), ', ')
    FROM tag, tagxref
    WHERE tagname GLOB 'sym-*'
     AND tag.tagid=tagxref.tagid
     AND tagxref.rid=blob.rid
     AND tagxref.tagtype>0) AS tags,
   tagid AS tagid,
   brief AS brief,
   event.mtime AS mtime
 FROM event CROSS JOIN blob
 WHERE blob.rid=event.objid
  AND (EXISTS(SELECT 1 FROM tagxref
        WHERE tagid=11 AND tagtype>0 AND rid=blob.rid)
    OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid
          WHERE tagid=11 AND tagtype>0 AND pid=blob.rid)
    OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=pid
          WHERE tagid=11 AND tagtype>0 AND cid=blob.rid))
 ORDER BY event.mtime DESC
 LIMIT 200;


This query is not especially complicated, but even so it replaces hundreds, perhaps thousands, of lines of procedural code. The gist of the query is this: scan the event table backwards in time and find the most recent 200 check-ins that satisfy any of three conditions:


    1. The check-in has the "trunk" tag.
    2. The check-in has a child check-in that has the "trunk" tag.
    3. The check-in has a parent check-in that has the "trunk" tag.


The first condition causes all of the trunk check-ins to be displayed, and the second and third conditions include check-ins that merge into or fork from the trunk. The three conditions are implemented by the three OR-connected EXISTS statements in the WHERE clause of the query. The slowdown that occurred with the NGQP was caused by the second and third conditions. The problem is the same in both, so we will look only at the second. Its subquery can be rewritten as follows (with minor and immaterial simplifications):


SELECT 1
  FROM plink JOIN tagxref ON tagxref.rid=plink.cid
 WHERE tagxref.tagid=$trunk
   AND plink.pid=$ckid;


The plink table records the parent-child relationships between check-ins. The tagxref table maps tags to check-ins. The relevant portions of the schemas of these two tables are shown here for reference:


CREATE TABLE plink(
  pid INTEGER REFERENCES blob,
  cid INTEGER REFERENCES blob
);
CREATE UNIQUE INDEX plink_i1 ON plink(pid, cid);

CREATE TABLE tagxref(
  tagid INTEGER REFERENCES tag,
  mtime TIMESTAMP,
  rid INTEGER REFERENCES blob,
  UNIQUE(rid, tagid)
);
CREATE INDEX tagxref_i1 ON tagxref(tagid, mtime);



There are only two algorithms for this query that are worth considering. (There are, of course, many other possibilities, but none of them is a contender for "best".)


    • Find all children of check-in $ckid and test each one to see whether it has the $trunk tag.
    • Find all check-ins with the $trunk tag and test each one to see whether it is a child of $ckid.


Intuitively, we humans see algorithm 1 as the better choice. Each check-in is likely to have only a few children (one being the most common case), and testing each child for the $trunk tag takes only logarithmic time. Indeed, algorithm 1 is the faster choice in practice. But the NGQP does not share our intuition. The NGQP must pick one of the two, and it (barely) prefers algorithm 2, which is very slightly better on paper. This is because, in the absence of other information, the NGQP must assume that the indices PLINK_I1 and TAGXREF_I1 are of equal quality and equal selectivity. Algorithm 2 uses one column of the TAGXREF_I1 index and both columns of PLINK_I1, whereas algorithm 1 uses only the first column of each. Precisely because it uses more index columns, the NGQP judges algorithm 2, correctly by its own estimates, to be the better of the two plans. The estimated costs are very close, and algorithm 2 just barely squeaks ahead of algorithm 1. But given the information available, algorithm 2 really is the correct choice.




Unfortunately, in this application, algorithm 2 is slower than algorithm 1.



The problem is that the indices are not of equal quality. A check-in is likely to have just one child, so the first column of PLINK_I1 usually narrows the search down to a single row. But thousands of check-ins carry the "trunk" tag, so the first column of TAGXREF_I1 does little to narrow the search.



The NGQP has no way of knowing that TAGXREF_I1 is nearly useless in this query unless ANALYZE has been run on the database. The ANALYZE command gathers quality statistics on each index and stores them in the SQLITE_STAT1 table. Given access to those statistics, the NGQP chooses algorithm 1 as the best algorithm, and by a wide margin.
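Those statistics are easy to inspect. A sketch follows; the numbers in the stat column are invented for illustration:

ANALYZE;
SELECT * FROM sqlite_stat1 WHERE tbl='tagxref';
-- Hypothetical result:
--   tagxref | tagxref_i1 | 200000 10000 1
-- i.e. about 200000 rows in the index, roughly 10000 rows for each
-- distinct tagid, but only 1 row per (tagid, mtime) pair. The large
-- middle number is what tells the planner that the leftmost column
-- of TAGXREF_I1 is not selective.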




Why didn't the legacy query planner choose algorithm 2? Simple: the NN algorithm never even considered algorithm 2. The planning problem looks like this:

[Figure: query graphs for the timeline subquery, without ANALYZE (left) and with ANALYZE (right)]
In the case of "no running analyze" as shown on the left, the NN algorithm chooses the cyclic p9plink as the outer loop, because 4.9:5.2 is small, the result is to select the P-t path, that is, algorithm 1. The NN algorithm only finds one of the best selection paths at each step, so it completely ignores the fact that 5.2+4.4 is a slightly better plan than 4.9+4.8 performance. However, the N3 algorithm tracked the 5 best paths to the two connections, so it eventually chose the t-p path because the total resource consumption of the path was less. Path T-p is the algorithm 2.



Note: had ANALYZE been run, the cost estimates would have been closer to reality, and both NN and N3 would have chosen algorithm 1.



(Side note: the cost estimates in the two most recent diagrams were computed by the NGQP using a base-2 logarithm, under slightly different cost assumptions than the legacy query planner. Hence these estimates are not directly comparable to those in the TPC-H Q8 graph.)



4.2 Fixing the Problem



Running ANALYZE on the repository database fixed the performance problem immediately. But we want Fossil to be robust and to run fast at all times, whether or not its repository has been analyzed. For this reason, the query was modified to use the CROSS JOIN operator instead of the plain JOIN operator. SQLite will not reorder the tables of a CROSS JOIN. This is a long-standing SQLite feature, designed specifically to let knowledgeable programmers force a particular nested-loop order. Once the join was changed to CROSS JOIN (the addition of a single keyword), the NGQP was compelled to use the faster algorithm 1, regardless of whether ANALYZE statistics were available.
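Applied to the simplified subquery from section 4.1, the fix amounts to a one-keyword change, sketched here:

SELECT 1
  FROM plink CROSS JOIN tagxref
 WHERE tagxref.rid=plink.cid
   AND tagxref.tagid=$trunk
   AND plink.pid=$ckid;
-- CROSS JOIN pins plink, the left-hand table, as the outer loop,
-- so the query runs as algorithm 1 with or without ANALYZE data.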



We say algorithm 1 is "faster", but strictly speaking this is not accurate. Algorithm 1 is faster for common repositories, but one can construct a repository in which every check-in is on a different uniquely-named branch and all check-ins are children of the root check-in. In that case TAGXREF_I1 becomes more selective than PLINK_I1, and algorithm 2 really is faster. But such repositories are highly unlikely to appear in practice, so hard-coding the nested-loop order with the CROSS JOIN syntax was the appropriate fix here.



5.0 A Checklist for Avoiding or Fixing Query Planner Problems



Don't panic! Cases in which the query planner picks a poor plan are actually quite rare. You are unlikely to ever hit one in your application. If you are not having performance problems, there is nothing to worry about.



Create good indices. Most SQL performance problems arise not from query-planner problems but from a lack of appropriate indices. Make sure indices are available to assist all of your large queries. Most performance issues can be resolved by one or two CREATE INDEX commands, with no changes to application code.
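For instance, a query that filters on one column and orders by another is usually best served by a single two-column index. A sketch with a hypothetical table:

CREATE TABLE invoice(custid INTEGER, due_date TEXT, total REAL);
CREATE INDEX invoice_i1 ON invoice(custid, due_date);

EXPLAIN QUERY PLAN
SELECT total FROM invoice
 WHERE custid=42
 ORDER BY due_date;
-- With invoice_i1 available, SQLite can search on custid and read
-- the matching rows out already sorted by due_date, so no separate
-- sorting pass is needed.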



Avoid creating low-quality indices. A low-quality index (for the purposes of this checklist) is one whose leftmost column has the same value in more than 10 or 20 rows of the table. In particular, avoid using boolean or "enum" columns as the leftmost column of any index.
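A sketch of the distinction, using a hypothetical table:

CREATE TABLE orders(custid INTEGER, is_shipped BOOLEAN, total REAL);

-- Low quality: a boolean leftmost column matches far more than
-- 10 or 20 rows per value.
CREATE INDEX orders_bad ON orders(is_shipped);

-- Better: lead with a selective column; the boolean can still
-- appear as a later column of the same index.
CREATE INDEX orders_good ON orders(custid, is_shipped);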



The Fossil performance problem described in the previous section of this article came about because the leftmost column (the tagid column) of the TAGXREF_I1 index on the tagxref table had the same value in more than ten thousand rows.



If you must use a low-quality index, be sure to run ANALYZE. Low-quality indices will not confuse the query planner as long as the planner knows the indices are of low quality. The planner learns this from the contents of the SQLITE_STAT1 table, which is computed by the ANALYZE command.



Of course, ANALYZE is only effective if the database contains a significant amount of data to begin with. When creating a new database that you expect will accumulate a lot of data, you can run the command "ANALYZE sqlite_master" to create the SQLITE_STAT1 table, then prepopulate it (using ordinary INSERT statements) with content that describes a typical database for your application, perhaps content obtained by running ANALYZE on a well-populated template database in the lab.
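A sketch of that technique; the statistics string is hypothetical and would normally be copied from a well-populated template database:

ANALYZE sqlite_master;   -- creates an empty SQLITE_STAT1 table

INSERT INTO sqlite_stat1(tbl, idx, stat)
VALUES('tagxref', 'tagxref_i1', '200000 10000 1');

-- Rerun "ANALYZE sqlite_master" (or close and reopen the database)
-- so that the planner reloads the hand-loaded statistics.
ANALYZE sqlite_master;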



Instrument your code. Add logic that lets you know quickly and easily which queries are taking too much time, so that you spend effort on only the queries that actually need it.



If your queries might run against a database that has not been analyzed and that uses low-quality indices, use the CROSS JOIN syntax to enforce a particular nested-loop order. SQLite treats the CROSS JOIN operator specially: it forces the table on the left to be an outer loop relative to the table on the right.



Avoid this if at all possible, since it defeats one of the great advantages of the whole SQL language concept, namely that the application programmer does not need to get involved in query planning. If you do use CROSS JOIN, wait until late in your development cycle to do so, and comment its use carefully so that it can be removed later if possible. Avoid using CROSS JOIN early in the development cycle, as doing so is a premature optimization, which is well known to be the root of all evil.



Use the unary "+" operator to disqualify WHERE clause terms. If the query planner insists on selecting a poor-quality index for a particular query when a much better index is available, then careful use of unary "+" operators in the WHERE clause can steer the planner away from the poor index. Avoid adding these operators if you can, and especially avoid them early in the application development cycle, as they are a premature optimization. Beware in particular that adding a unary "+" to an equality expression may change the result of that expression when type affinity is involved.
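A sketch of the technique with a hypothetical table; suppose the index on b is the low-quality one:

CREATE TABLE t(a INTEGER, b INTEGER);
CREATE INDEX t_b ON t(b);   -- the low-quality index in this sketch

-- The unary "+" makes the b=2 term unusable for index lookups,
-- steering the planner elsewhere, without changing which rows
-- match (but beware: "+" strips type affinity from the column).
SELECT * FROM t WHERE a=1 AND +b=2;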



Use the INDEXED BY syntax to force problematic queries to use a specific index. As with the previous two bullets, avoid this if you can, and especially avoid it early in development, since it is clearly a premature optimization.
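A sketch, again with hypothetical tables and indices:

CREATE TABLE t2(a INTEGER, b INTEGER);
CREATE INDEX t2_a ON t2(a);
CREATE INDEX t2_b ON t2(b);

-- Force the query to use t2_a; SQLite raises an error if t2_a
-- cannot be used, rather than silently choosing another plan.
SELECT * FROM t2 INDEXED BY t2_a WHERE a=1 AND b=2;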




6.0 Conclusion



The query planner in SQLite normally does a good job of choosing fast algorithms for running SQL statements. This was true of the legacy query planner, and it is even more true of the new NGQP. Occasionally, because the available information is incomplete, the planner picks an inferior plan. This happens less often with the NGQP than with the legacy planner, but it can still happen. Even in those rare cases, all the application developer needs to do is understand what the planner is doing and help it do the right thing. In the common case, the NGQP is simply a new enhancement to SQLite that makes applications run a bit faster and requires no new developer thought or action at all.

