How the SQL Server query optimizer works


First, common misunderstandings about index usage in practice

Theory exists to be applied. Although we have just listed when clustered or nonclustered indexes should be used, in practice these rules are easily overlooked or not analyzed in light of the actual situation. Below, based on real problems encountered in practice, we will examine common misunderstandings in index usage so that you can master the method of building indexes.

1. Making the primary key the clustered index

This idea, I think, is an extreme mistake and a waste of the clustered index, even though SQL Server by default builds the clustered index on the primary key.

In general, we create an ID column in each table to distinguish each row of data; this ID column is auto-incremented, typically with a step of 1. This is the case for the column GID in our office-automation example. If we make this column the primary key, SQL Server will by default make it the clustered index. The benefit is that your data is physically sorted in the database by ID, but I don't think that is worth much.

Obviously, the advantages of a clustered index are significant, and each table can have only one clustered index, which makes it all the more valuable.

From the definition of the clustered index discussed earlier, we can see that its biggest benefit is the ability to quickly narrow the query range based on the query conditions and avoid a full table scan. In practice, because ID numbers are generated automatically, we do not know the ID of each record, so we rarely query by ID; this makes a clustered index on the ID primary key a waste of the resource. Second, a column in which every ID value is different does not conform to the rule that "a clustered index should not be built on a column with a large number of distinct values". Of course, this situation matters mainly when users modify record contents, especially the indexed entries, where it has a negative effect; it does not affect query speed.

In an office automation system, whether the home page is displaying documents awaiting the user's signature or meetings, or the user is running a file query, the data queries in every case invariably involve the "date" field and the user's own "user name".

Typically, the home page of an office automation system displays the files or meetings each user has not yet signed off on. Although the WHERE clause can restrict results to what the current user has not yet signed, if your system has been running for a long time and holds a large amount of data, then performing a full table scan every time a user opens the home page is pointless: the vast majority of users have long since browsed the files from a month ago, so scanning them only increases the load on the database. In fact, when a user opens the home page, we can have the database query only the files the user has not read in the last three months, using the "date" field to limit the table scan and improve query speed. If your office automation system has been running for two years, your home page will theoretically display about eight times faster than before.

I say "theoretically" because if your clustered index is still blindly built on the ID primary key, your query will not be that fast, even if you build an index (a nonclustered one) on the "date" field. Let's look at the query speeds in various scenarios with 10 million rows of data (250,000 rows fall within the last 3 months):

(1) A clustered index on the primary key only, and no restriction on the time period:

select Gid, fariqi, neibuyonghu, title from Tgongwen

Elapsed time: 128,470 milliseconds (about 128 seconds)

(2) A clustered index on the primary key and a nonclustered index on Fariqi:

select Gid, fariqi, neibuyonghu, title from Tgongwen
where fariqi > dateadd(day, -90, getdate())

Elapsed time: 53,763 milliseconds (about 54 seconds)

(3) A clustered index on the date column (Fariqi):

select Gid, fariqi, neibuyonghu, title from Tgongwen
where fariqi > dateadd(day, -90, getdate())

Elapsed time: 2,423 milliseconds (about 2 seconds)

Although each statement extracts 250,000 rows, the differences between the scenarios are enormous, especially when the clustered index is on the date column. In fact, if your database really holds 10 million rows and the primary key is built on the ID column, then, as in cases (1) and (2) above, the web page will time out and display nothing at all. This is one of the most important reasons I abandoned the ID column as the clustered index. The timings above were measured as follows: before each SELECT statement, add:

declare @d datetime
set @d = getdate()

and add after the SELECT statement:

select [Statement execution time (ms)] = datediff(ms, @d, getdate())
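Putting the pieces together, a minimal timing harness around one of the sample queries might look like this (a sketch; Tgongwen and its columns are taken from the examples above):

```sql
-- Sketch: time a single statement with the pattern described above.
declare @d datetime
set @d = getdate()

select Gid, fariqi, neibuyonghu, title
from Tgongwen
where fariqi > dateadd(day, -90, getdate())

-- datediff(ms, ...) reports the elapsed milliseconds.
select [Statement execution time (ms)] = datediff(ms, @d, getdate())
```

Note that the datetime type has roughly 3-millisecond precision; on SQL Server 2008 and later, sysdatetime() with datetime2 gives finer timing, and SET STATISTICS TIME ON is another option.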

2. Believing that building any index will significantly improve query speed

In fact, we can see from the example above that statements (2) and (3) are identical and the indexed field is the same; the only difference is that the former has a nonclustered index on the Fariqi field while the latter has a clustered index on it, yet the query speeds are vastly different. So simply building an index on a field does not necessarily improve query speed.

From the table-creation statements we can see that the Fariqi field has 5,003 distinct values among the table's 10 million rows, which makes it quite suitable for a clustered index. In reality, we issue a few documents every day, and those documents are issued on the same date, which fully conforms to the rule for building a clustered index: the values should be "neither mostly identical nor mostly unique". From this perspective, building the "right" clustered index is very important for improving query speed.
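As a sketch of the DDL this implies (the constraint and index names are my own assumptions; the table presumably has its clustered index on the primary key, which must first be rebuilt as nonclustered, since a table can have only one clustered index):

```sql
-- Hypothetical names: PK_Tgongwen, IX_Tgongwen_Fariqi.
-- Step 1: rebuild the primary key as nonclustered to free the clustered slot.
alter table Tgongwen drop constraint PK_Tgongwen
alter table Tgongwen add constraint PK_Tgongwen primary key nonclustered (Gid)

-- Step 2: place the clustered index on the date column instead.
create clustered index IX_Tgongwen_Fariqi on Tgongwen (fariqi)
```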

3. Adding all fields that need faster queries into the clustered index

As mentioned above, data queries invariably involve the "date" field and the user's own "user name" field. Since both fields are so important, we can merge them together to create a single composite (compound) index.

Many people think that adding any field into the clustered index improves query speed, while others wonder: if the fields of the composite clustered index are queried separately, will queries slow down? With this question in mind, let's look at the following query speeds (the result set is 250,000 rows; the date column Fariqi is the leading column of the composite clustered index, and the user name Neibuyonghu is the trailing column):

(1) select Gid, fariqi, neibuyonghu, title from Tgongwen where fariqi > '2004-5-5'

Query speed: 2,513 ms

(2) select Gid, fariqi, neibuyonghu, title from Tgongwen where fariqi > '2004-5-5' and neibuyonghu = 'office'

Query speed: 2,516 ms

(3) select Gid, fariqi, neibuyonghu, title from Tgongwen where neibuyonghu = 'office'

Query speed: 60,280 ms

From the above experiments we can see that querying on only the leading column of the clustered index is about as fast as querying on all columns of the composite clustered index at once, and even slightly faster than using all of the composite index columns (when the result sets are the same size); whereas using only the non-leading column of the composite clustered index as the query condition renders the index useless. Of course, statements (1) and (2) show the same speed because the number of rows returned is the same; if all columns of the composite index are used and the query result is small, an "index covering" is formed and performance is optimal. Also remember: whether or not the other columns of the clustered index are used frequently, the leading column must be the most frequently used one.
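The composite clustered index used in these experiments could be declared as follows (a sketch; the index name is my own, and listing fariqi first is what makes it the leading column):

```sql
-- Hypothetical index name; column order matters: fariqi is the leading column.
create clustered index IX_Tgongwen_Fariqi_Neibuyonghu
    on Tgongwen (fariqi, neibuyonghu)
```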

Second, index-usage experience not found in other books

1. Querying by the clustered index is faster than by a primary key that is not the clustered index

Here are the example statements (each extracts 250,000 rows):

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi = '2004-9-16'

Elapsed time: 3,326 ms

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where gid <= 250000

Elapsed time: 4,470 ms

Here, querying by the clustered index is nearly 1/4 faster than by the primary key that is not the clustered index.

2. ORDER BY on the clustered index column is faster than on an ordinary primary key, especially with small amounts of data

select Gid, fariqi, neibuyonghu, reader, title from Tgongwen order by fariqi

Elapsed time: 12,936 ms

select Gid, fariqi, neibuyonghu, reader, title from Tgongwen order by Gid

Elapsed time: 18,843 ms

Here, ORDER BY on the clustered index column is about 3/10 faster than on the ordinary primary key. In fact, when the data volume is small, sorting by the clustered index column is much faster than by a nonclustered index; when the data volume is large, say over 100,000 rows, the speed difference between the two is not obvious.

3. When using a time range on the clustered index column, the search time shrinks in proportion to the percentage of the table's data returned, regardless of which comparison operator is used:

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi > '2004-1-1'

Elapsed time: 6,343 milliseconds (extracting 1,000,000 rows)

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi > '2004-6-6'

Elapsed time: 3,170 milliseconds (extracting 500,000 rows)

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi = '2004-9-16'

Elapsed time: 3,326 milliseconds (identical to the previous result; when the number of rows fetched is the same, '>' and '=' perform the same)

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi > '2004-1-1' and fariqi < '2004-6-6'

Elapsed time: 3,280 milliseconds

4. A date column is not slowed down by values that include minutes or seconds

In the example below there are 1,000,000 rows: the 500,000 rows after January 1, 2004 contain only two distinct dates, accurate to the day; the 500,000 rows before that date contain 5,000 distinct dates, accurate to the second.

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi > '2004-1-1' order by fariqi

Elapsed time: 6,390 milliseconds

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi < '2004-1-1' order by fariqi

Elapsed time: 6,453 milliseconds

Third, improving SQL statements

Many people don't know how SQL statements are executed in SQL Server, and they worry that the statements they write will be misunderstood by SQL Server. For example:

select * from table1 where name = 'zhangsan' and tid > 10000 versus select * from table1 where tid > 10000 and name = 'zhangsan'

Some people do not know whether these two statements execute with the same efficiency, because read naively they do look different: if tid is the clustered-index column, the second statement would only scan the rows after record 10,000, while the first would first look through the whole table for rows where name = 'zhangsan' and then filter by the condition tid > 10000.

In fact, such worries are unnecessary. SQL Server has a query optimizer that analyzes the search conditions in the WHERE clause and decides which indexes can narrow the search space of a table scan; in other words, it optimizes automatically.

Although the query optimizer can automatically optimize queries based on the WHERE clause, it is still necessary to understand how it works; otherwise, the optimizer will sometimes not produce the fast query you intended.

During the query-analysis phase, the query optimizer examines each stage of the query and decides whether it is useful for limiting the amount of data that needs to be scanned. If a stage can be used as a search argument (SARG), it is said to be optimizable, and an index can be used to obtain the required data quickly.

Definition of a SARG: an operation used to limit a search, because it usually refers to a specific match, a match on a range of values, or an AND connection of two or more conditions. It takes the form:

column-name operator <constant or variable>  or  <constant or variable> operator column-name

The column name appears on one side of the operator, with the constant or variable on the other side. For example:

name = 'Zhang San'

price > 5000

5000 < price

name = 'Zhang San' and price > 5000

If an expression does not take the SARG form, it cannot limit the scope of the search; that is, SQL Server must evaluate every row to determine whether it satisfies all the conditions in the WHERE clause. So an index is useless for an expression that does not satisfy the SARG form.

Having introduced SARGs, let us summarize our experience in using them, along with conclusions from certain sources that we have tested in practice:

1. Whether a LIKE statement is a SARG depends on the type of wildcard used

For example: name like 'Zhang%' is a SARG,

while name like '%Zhang' is not.

The reason is that a leading wildcard % makes the start of the string unknown, so the index cannot be used.

2. OR causes a full table scan

name = 'Zhang San' and price > 5000 is a SARG, while name = 'Zhang San' or price > 5000 is not. Using OR causes a full table scan.

3. Statements with non-operators or functions do not satisfy the SARG form

The most typical case is a statement that includes a non-operator, such as NOT, !=, <>, !<, !>, NOT EXISTS, NOT IN, NOT LIKE, as well as statements with functions. Here are a few examples that do not satisfy the SARG form:

abs(price) < 5000

name like '%San'

There are also expressions such as:

where price * 2 > 5000

which SQL Server will also treat as SARGs: it converts this one to:

where price > 2500

However, we do not recommend relying on this, because SQL Server does not always guarantee that the conversion is completely equivalent to the original expression.
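Rather than relying on that automatic conversion, it is safer to keep the column by itself on one side of the operator. A sketch, reusing the date filter from the earlier examples (the two forms are equivalent up to day-boundary rounding):

```sql
-- Not a SARG: the function wraps the indexed column, forcing row-by-row evaluation.
select gid, fariqi, title from Tgongwen
where datediff(day, fariqi, getdate()) <= 90

-- A SARG: the column stands alone, so an index on fariqi can be used.
select gid, fariqi, title from Tgongwen
where fariqi > dateadd(day, -90, getdate())
```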

4. IN is equivalent to OR

The statements:

select * from table1 where tid in (2, 3)

and

select * from table1 where tid = 2 or tid = 3

are the same: both cause a full table scan, and any index on tid goes unused.

5. Use NOT as little as possible

6. EXISTS and IN have the same execution efficiency

Much of the literature says that EXISTS is more efficient than IN, and that NOT EXISTS should be used instead of NOT IN as much as possible. In fact, I experimented and found that both have the same execution efficiency, with or without NOT. Because a subquery is involved, we experimented with the pubs sample database that ships with SQL Server. Before running, we can turn on SQL Server's statistics I/O:
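The switch referred to here is a per-session setting; a minimal sketch, using the pubs tables from the example below:

```sql
-- Per-session switch: SQL Server then reports scan count, logical reads,
-- physical reads, and read-ahead reads for each statement.
set statistics io on
go
select title, price from titles
where title_id in (select title_id from sales where qty > 30)
go
set statistics io off
```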

(1) select title, price from titles where title_id in (select title_id from sales where qty > 30)

The result of the first statement:

Table 'sales'. Scan count 18, logical reads 56, physical reads 0, read-ahead reads 0.

Table 'titles'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.

(2) select title, price from titles where exists (select * from sales where sales.title_id = titles.title_id and qty > 30)

The result of the second statement:

Table 'sales'. Scan count 18, logical reads 56, physical reads 0, read-ahead reads 0.

Table 'titles'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.

From this we can see that EXISTS and IN execute with the same efficiency.

7. LIKE with a leading wildcard % executes with the same efficiency as the function charindex()

Earlier we said that a leading wildcard in a LIKE pattern causes a full table scan, so it executes inefficiently. However, some sources claim that replacing LIKE with the function charindex() yields a large speed increase. My experiments show that this explanation is also wrong:

select Gid, title, fariqi, reader from Tgongwen where charindex('forensic detachment', reader) > 0 and fariqi > '2004-5-5'

Elapsed time: 7 seconds; scan count 4, logical reads 7,155, physical reads 0, read-ahead reads 0.

select Gid, title, fariqi, reader from Tgongwen where reader like '%' + 'forensic detachment' + '%' and fariqi > '2004-5-5'

Elapsed time: 7 seconds; scan count 4, logical reads 7,155, physical reads 0, read-ahead reads 0.

8. UNION is not absolutely more efficient than OR

We have already said that using OR in the WHERE clause causes a full table scan, and most of the material I have seen recommends replacing OR with UNION. It turns out that this advice works in most cases.

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi = '2004-9-16' or gid > 9990000

Elapsed time: 68 seconds. Scan count 1, logical reads 404,008, physical reads 283, read-ahead reads 392,163.

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi = '2004-9-16'
union
select gid, fariqi, neibuyonghu, reader, title from Tgongwen where gid > 9990000

Elapsed time: 9 seconds. Scan count 8, logical reads 67,489, physical reads 216, read-ahead reads 7,499.

It seems that, in general, UNION is indeed more efficient than OR.

But experiments show that if the column queried on both sides of the OR is the same, UNION executes much more slowly than OR, even though the UNION here scans the index while the OR scans the whole table.

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi = '2004-9-16' or fariqi = '2004-2-5'

Elapsed time: 6,423 milliseconds. Scan count 2, logical reads 14,726, physical reads 1, read-ahead reads 7,176.

select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi = '2004-9-16'
union
select gid, fariqi, neibuyonghu, reader, title from Tgongwen where fariqi = '2004-2-5'

Elapsed time: 11,640 milliseconds. Scan count 8, logical reads 14,806, physical reads 108, read-ahead reads 1,144.

9. When retrieving fields, follow the principle of "take only as much as you need" and avoid "select *"

Let's do an experiment:

select top 10000 gid, fariqi, reader, title from Tgongwen order by gid desc

Elapsed time: 4,673 milliseconds

select top 10000 gid, fariqi, title from Tgongwen order by gid desc

Elapsed time: 1,376 milliseconds

select top 10000 gid, fariqi from Tgongwen order by gid desc

Elapsed time: 80 milliseconds

As the results show, each field we drop speeds up data extraction correspondingly. The amount of improvement depends on the size of the field you discard.

10. count(*) is no slower than count(field)

Some sources say that using * counts all columns, which is obviously less efficient than counting a single column. This claim is in fact unfounded. Let's see:

select count(*) from Tgongwen

Elapsed time: 1,500 milliseconds

select count(gid) from Tgongwen

Elapsed time: 1,483 milliseconds

select count(fariqi) from Tgongwen

Elapsed time: 3,140 milliseconds

select count(title) from Tgongwen

Elapsed time: 52,050 milliseconds

As can be seen above, count(*) and count(primary key) are equally fast, while count(*) is faster than count() of any field other than the primary key; and the longer the field, the slower the count. I suspect that with count(*), SQL Server may automatically pick the smallest field to count. Of course, writing count(primary key) directly is even more straightforward.

11. ORDER BY on the clustered index column is the most efficient sort

Let's see (gid is the primary key; fariqi is the clustered index column):

select top 10000 gid, fariqi, reader, title from Tgongwen

Elapsed time: 196 milliseconds. Scan count 1, logical reads 289, physical reads 1, read-ahead reads 1,527.

select top 10000 gid, fariqi, reader, title from Tgongwen order by gid asc

Elapsed time: 4,720 milliseconds. Scan count 1, logical reads 41,956, physical reads 0, read-ahead reads 1,287.

select top 10000 gid, fariqi, reader, title from Tgongwen order by gid desc

Elapsed time: 4,736 milliseconds. Scan count 1, logical reads 55,350, physical reads 10, read-ahead reads 775.

select top 10000 gid, fariqi, reader, title from Tgongwen order by fariqi asc

Elapsed time: 173 milliseconds. Scan count 1, logical reads 290, physical reads 0, read-ahead reads 0.

select top 10000 gid, fariqi, reader, title from Tgongwen order by fariqi desc

Elapsed time: 156 milliseconds. Scan count 1, logical reads 289, physical reads 0, read-ahead reads 0.

As we can see, the speed and number of logical reads of the unordered query are comparable to "ORDER BY the clustered index column", but both are much faster than "ORDER BY a nonclustered index column".

At the same time, when sorting by a given field, ascending and descending order are basically the same speed.

12. Efficient TOP

In fact, when querying and extracting from very large data sets, the biggest factor affecting database response time is not the data lookup but the physical I/O operations. For example:

select top 10 * from (
    select top 10000 gid, fariqi, title from Tgongwen
    where neibuyonghu = 'office'
    order by gid desc
) as a
order by gid asc

In theory, the whole statement's execution time should be longer than that of its subquery, but the opposite is true, because the subquery returns 10,000 rows while the whole statement returns only 10: the dominant factor in response time is physical I/O. One of the most effective ways to limit physical I/O here is the TOP keyword. TOP is a system-optimized keyword in SQL Server for extracting the first N rows or first N percent of rows. In my practice I have found TOP very useful and very efficient. Unfortunately this keyword does not exist in Oracle, another large database, although the same problem can be solved there in other ways (for example, with rownum). In a later discussion on "paged display stored procedures for tens of millions of rows of data", we will use the TOP keyword.
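As a preview, the double-TOP trick above generalizes into a simple paging pattern. The following is only a sketch (the page size and page number are my own illustration; TOP with an expression requires SQL Server 2005 or later, and positive gid values are assumed):

```sql
-- Hypothetical paging sketch: fetch page 3 with 10 rows per page,
-- assuming gid is the unique ordering column and all gid values are positive.
declare @pagesize int, @pagenum int
set @pagesize = 10
set @pagenum  = 3

select top (@pagesize) gid, fariqi, title
from Tgongwen
-- Skip the rows of the previous pages; isnull covers page 1,
-- where the inner subquery returns no rows.
where gid > isnull((select max(gid)
                    from (select top ((@pagenum - 1) * @pagesize) gid
                          from Tgongwen
                          order by gid asc) as prev), 0)
order by gid asc
```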

Fourth, how the query optimizer operates

When we hand a T-SQL statement to SQL Server for execution, the first process it goes through is compilation; if the statement has been executed before, SQL Server checks whether a compiled execution plan already exists in the cache for reuse.

However, compilation involves a series of optimization processes, which are broadly divided into two phases:

1. First, SQL Server performs some simplification of the T-SQL statement we wrote, usually by examining the query itself to find interchangeable operations and rescheduling their order.

In this process SQL Server focuses on adjusting how the statement is written, without much consideration of cost or analysis of index availability; the most important goal is to produce a valid query.

SQL Server then loads the metadata, including the indexes' statistics, and enters the second phase.

2. This phase is SQL Server's complex optimization process. Here SQL Server evaluates and experiments with the execution-plan operators formed in the previous phase, and may even reorganize the execution plan, so this optimization is relatively time-consuming.

A flowchart of this process looks somewhat complicated; analyzing it in detail, the optimization phase can be divided into three sub-phases:

<1> This phase considers only serial plans, that is, single-processor execution. If a good enough serial plan is found here, the optimizer does not enter the next sub-phase. So for small data volumes or simple statements, a serial plan is basically always used.

Of course, if the cost found in this sub-phase is still relatively high, optimization moves on to the second sub-phase.

<2> This sub-phase first optimizes the serial plan from sub-phase 1; then, if the environment supports parallelism, it generates a parallel plan and compares the two. If the cheaper plan's cost is low enough, it is output as the execution plan; if the cost is still relatively high, optimization moves on to the third sub-phase.

<3> Reaching this sub-phase means we are in the final stage of optimization. Here the serial and parallel plans compared in sub-phase 2 receive one last round of optimization: if serial execution is better, it is optimized further, and likewise if parallel execution is better, parallel optimization continues.

In fact, the third sub-phase is the query optimizer's last resort: reaching it means only final remediation is possible, and if the plan still cannot be optimized well, the best plan found so far has to be used.

So what principles guide the optimization at each of the stages above?

The most important principle of the optimizer is: reduce the scan range as far as possible, whether on a table or an index. Of course, an index is better than a table, and the less of the index that must be scanned the better; the ideal case is fetching only one or a few rows.

SQL Server respects this principle and optimizes around it throughout.

(1) Analyzing the filter conditions

The so-called filter conditions are the conditions after WHERE in the T-SQL statements we write. We use them to minimize the scope of the data scan, and SQL Server uses them to optimize.

The general format is as follows:

Column operator <constant or variable>

Or

<constant or variable> operator column

Operators in this format include: =, >, <, >=, <=, between, like

For example: name = 'liudehua', price > 4000, 4000 < price, name like 'liu%', name = 'liudehua' and price > 1000

The forms above are the most common in the statements we write, and SQL Server uses them to reduce scanning; if the columns involved are indexed, it will try to use the indexes to fetch the values. But SQL Server is not omnipotent: there are some forms it cannot recognize, and these are what we should avoid when writing statements:

A. where name like '%liu' cannot be recognized by the SQL Server optimizer as a search argument, so it can only be executed as a full table scan or an index scan.

B. name = 'liudehua' or price > 1000 is also ineffective, because the OR prevents using the two filter conditions together to progressively narrow the scan.

C. price + 4 > 100 likewise cannot be recognized.

D. name not in ('liudehua', 'zhourunfa'); and of course similar forms: NOT, NOT LIKE.

For example, compare:

select CustomerID from Orders where CustomerID = 'vinet'

select CustomerID from Orders where upper(CustomerID) = 'VINET'

The second statement wraps the column in a function, so the optimizer cannot use an index on CustomerID.

So the forms above should be avoided as far as possible, or rewritten in a more flexible way.
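For case C above, for example, the arithmetic can be moved off the column by hand; a minimal sketch:

```sql
-- Not recognized: the expression sits on the column side of the operator.
select * from table1 where price + 4 > 100

-- Recognized: the column stands alone, so an index on price can be used.
select * from table1 where price > 96
```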

(2) Index optimization

Once the filter range has been determined as above, SQL Server immediately begins index selection. The first thing to determine is whether the filter fields have index entries, that is, whether the query is covered by an index.

Of course, it is best if the queried columns are covered by the index; if they are not, then in order to still take advantage of the index, a bookmark lookup step is introduced.

In view of this, when we create an index, the columns to consider first are those used in the filter conditions.

Regarding the choice of index for optimization:

create index employeesname on Employees (FirstName, LastName) include (HireDate) with (online = on)
go
select FirstName, LastName, HireDate, EmployeeID from Employees where FirstName = 'Anne'

Is an index seek then guaranteed? Of course not. An index seek can be performed only when the query columns are covered by the index, and how far the index must be scanned depends on how much content is fetched.

To give an example:

create index NameIndex on Person.Contact (FirstName, LastName)
go
select * from Person.Contact where FirstName like 'K%'
select * from Person.Contact where FirstName like 'Y%'
go

Run these two nearly identical query statements and look at the execution plans:

Although the query statements are nearly identical, the resulting query plans are completely different: one is an index scan, the other an efficient index seek.

Here I will only tell you that FirstName like 'K%' matches 1,255 rows, whereas FirstName like 'Y%' matches only 37. In fact, the statistics are behind this difference.
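To inspect the statistics behind this choice, SQL Server offers DBCC SHOW_STATISTICS, and stale statistics can be refreshed with UPDATE STATISTICS; a sketch, using the table and index names from the example above:

```sql
-- Show the histogram and density information the optimizer used for NameIndex.
dbcc show_statistics ('Person.Contact', 'NameIndex')

-- Refresh the statistics with a full scan if they have gone stale.
update statistics Person.Contact NameIndex with fullscan
```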

Therefore, a given T-SQL statement does not necessarily produce one specific query plan, and a given query plan is not necessarily optimal. Many factors influence it: the indexes, the hardware, the table contents, the statistics, and more.
