SQL statement optimization improves database performance

Source: Internet
Author: User

When system performance is not ideal, part of the cause may be that the application load exceeds the server's actual processing capacity, but more often it is because the system contains a large number of SQL statements that need to be optimized. To achieve stable execution performance, the simpler the SQL statement the better; complex SQL statements should be simplified wherever possible.

The common simplification rules are as follows:

1) Do not join more than 5 tables.
2) Consider using temporary tables or table variables to store intermediate results.
3) Use subqueries sparingly.
4) Do not nest views too deeply; more than 2 levels of nesting is generally inadvisable.

First, posing the problem

In the early stage of application development, the development database contains little data, so the performance differences among the various ways of writing a query or compiling a complex view are not apparent. But once the application goes into production, as the data in the database grows, the response speed of the system becomes one of the most important problems to solve. A very important aspect of system optimization is SQL statement optimization. For massive data, the speed difference between a poor SQL statement and a high-quality one can be hundreds of times; clearly, it is not enough for a system simply to implement the required function. You must also write high-quality SQL statements to improve system usability.

In most cases Oracle uses indexes to traverse tables more quickly, and the optimizer improves performance mainly on the basis of the defined indexes. However, if the code written in the WHERE clause of a SQL statement is unreasonable, it causes the optimizer to abandon the index and use a full table scan; this is what is generally called a poor SQL statement. When writing SQL statements, we should understand the principles by which the optimizer abandons an index, which helps us write high-performance SQL statements.

Second, issues to note when writing SQL statements

The following describes in detail the issues to note when writing the WHERE clause of a SQL statement. In these WHERE clauses, even though some columns are indexed, poorly written SQL prevents the system from using the index when running the statement, resulting in a significant decrease in response speed.

1. Operator optimization

(a) in operator

The advantage of writing SQL with IN is that it is easy to write and easy to understand, which suits the style of modern software development. However, SQL using IN always performs relatively poorly. Judging from the steps Oracle executes, SQL with IN differs from SQL without IN as follows:

Oracle attempts to convert the IN into a join of multiple tables. If the conversion succeeds, it queries the multi-table join directly; if the conversion fails, it first executes the subquery inside the IN and then queries the outer table's records. This shows that SQL using IN incurs at least one extra conversion step. Ordinary SQL can be converted successfully, but SQL containing grouping or aggregation cannot.

Recommendation: in business-intensive SQL, try not to use the IN operator; use the EXISTS scheme instead.
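As a minimal sketch of the rewrite (the table and column names here are hypothetical, not from the original text), the IN form and its EXISTS equivalent might look like this:

```sql
-- IN form: Oracle must first attempt the conversion described above
SELECT *
  FROM orders o
 WHERE o.cust_id IN (SELECT c.cust_id
                       FROM customers c
                      WHERE c.region = 'EAST');

-- EXISTS form: the correlated subquery can stop at the first match
SELECT *
  FROM orders o
 WHERE EXISTS (SELECT 'X'
                 FROM customers c
                WHERE c.cust_id = o.cust_id
                  AND c.region = 'EAST');
```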

(b) Not in operator

This operation is strongly discouraged, because it cannot use the index on the table.

Recommended scheme: replace it with NOT EXISTS.
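A sketch of the rewrite (table and column names are hypothetical):

```sql
-- NOT IN form: cannot use the index (and behaves surprisingly with NULLs)
SELECT *
  FROM orders o
 WHERE o.cust_id NOT IN (SELECT c.cust_id FROM customers c);

-- NOT EXISTS form: the recommended replacement
SELECT *
  FROM orders o
 WHERE NOT EXISTS (SELECT 'X'
                     FROM customers c
                    WHERE c.cust_id = o.cust_id);
```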

(c) The IS NULL and IS NOT NULL operations (determining whether a field is empty)

Determining whether a field is empty generally does not use the index, because indexes do not store NULL values: any column containing a NULL value is not included in the index. Even if there are multiple columns in the index, a row is excluded from the index as long as one of those columns contains NULL. This means that if a column has NULL values, even indexing that column does not improve performance. Any statement whose WHERE clause uses IS NULL or IS NOT NULL therefore prevents the optimizer from using the index.

Recommended scheme: replace it with another operation that achieves the same function, for example change `a IS NOT NULL` to `a > 0` or `a > ''`, and so on. Do not allow the field to be empty; instead substitute a default value for NULL. For example, if the Status field in an application table must not be null, default it to 'applied'.
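A sketch of the default-value design (the table `requests` and the value 'APPLIED' are hypothetical illustrations of the Status example above):

```sql
-- Make the column NOT NULL with a default, so NULL tests are never needed:
ALTER TABLE requests MODIFY (status VARCHAR2(20) DEFAULT 'APPLIED' NOT NULL);

-- Queries can now test the value directly and use an index on status:
SELECT * FROM requests WHERE status = 'APPLIED';
```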

(d) The > and < operators (greater than and less than)

Greater-than and less-than operators generally need no adjustment, because when an index exists they use index lookups. In some cases, however, they can be optimized. For example, suppose a table has 1 million records and a numeric field A, with 300,000 records where A=0, 300,000 where A=1, 390,000 where A=2, and 10,000 where A=3. Executing A>2 and A>=3 then behaves very differently, because for A>2 Oracle first locates the index entries where A equals 2 and then compares, while for A>=3 Oracle can go directly to the index entries where A equals 3.
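A minimal sketch of the rewrite (the table `t` and the data distribution are the hypothetical ones described above):

```sql
-- Slower: the index range scan must start at the first entry with a = 2
SELECT * FROM t WHERE a > 2;

-- Faster: the range scan starts directly at the first entry with a = 3
SELECT * FROM t WHERE a >= 3;
```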

(e) Like operator

The LIKE operator supports wildcard queries, and the wildcard combinations inside it can be almost arbitrary, but used poorly it creates performance problems. For example, a condition such as `LIKE '%5400%'` cannot use the index, whereas `LIKE 'X5400%'` can use a range scan on the index.

A practical example: querying the business number in table yw_yhjbqk with the condition `yy_bh LIKE '%5400%'` produces a full table scan. If it is changed to `yy_bh LIKE 'X5400%' OR yy_bh LIKE 'B5400%'`, the query uses two range scans on the yy_bh index, and performance is greatly improved.

A LIKE statement with a leading wildcard (%):

As seen in the example above, suppose the current requirement is to query the employee list for people whose last name contains 'cliton'. You can use the following SQL statement:

SELECT * FROM employee WHERE last_name LIKE '%cliton%';

Because the wildcard (%) appears at the start of the search string, the Oracle system cannot use the index on last_name. In many cases this may be unavoidable, but keep in mind that a leading wildcard like this slows the query down. When the wildcard appears elsewhere in the string, however, the optimizer can take advantage of the index, as in the following query:

SELECT * FROM employee WHERE last_name LIKE 'c%';

(f) UNION operator

UNION filters out duplicate records after the tables are combined, so the resulting result set is sorted, duplicate records are deleted, and the result is returned. Most applications do not actually produce duplicate records; the most common case is a UNION of a current table and a history table, such as:

SELECT * FROM gc_dfys
UNION
SELECT * FROM ls_jg_dfys

When this SQL runs, it first obtains the results of the two tables, then sorts to delete duplicate records, and finally returns the result set. If the tables are large, the sort may spill to disk.

Recommended scheme: use the UNION ALL operator instead of UNION, because UNION ALL simply merges the two results and returns them.

SELECT * FROM gc_dfys
UNION ALL
SELECT * FROM ls_jg_dfys

(g) Concatenated columns

For concatenated columns, the optimizer does not use the index even when the concatenated value is compared against a static value. Consider an example: suppose there is an employee table (employee) in which a worker's surname and given name are stored in two columns (first_name and last_name), and we now want to query for a worker named Bill Cliton.

The following is a SQL statement that queries on the concatenated value:

SELECT * FROM employee WHERE first_name || ' ' || last_name = 'Bill Cliton';

The above statement can indeed find out whether there is an employee named Bill Cliton, but note that the system optimizer does not use the index created on last_name. When the SQL statement is written as follows, the Oracle system can use the index on last_name:

WHERE first_name = 'Bill' AND last_name = 'Cliton';

(h) Order BY statement

The ORDER BY clause determines how Oracle sorts the returned query results. The ORDER BY clause places no particular restrictions on the columns to be sorted; you can even apply functions to a column (such as concatenation or addition). But any non-indexed item, or a calculated expression, in an ORDER BY clause slows the query down.

Carefully examine ORDER BY clauses to find non-indexed items or expressions, as they degrade performance. The solution is to rewrite the ORDER BY clause so that it uses an index, or to create another index on the columns being used; expressions in the ORDER BY clause should be avoided entirely.

(i) not

We often use logical expressions in the WHERE clause, such as greater than, less than, equal to, and not equal to; we can also use AND, OR, and NOT. NOT can be used to negate any logical operator. Here is an example of a NOT clause:

where not (status = ' VALID ')

If you want to use NOT, enclose the phrase being negated in parentheses and place the NOT operator in front of it. The NOT operator is also implicitly contained in other operators, such as the not-equal (<>) operator. In other words, even when NOT is not written explicitly in the WHERE clause, it may still be present inside an operator, as in the following example:

Where status <> ' INVALID ';

For a query such as `SELECT * FROM employee WHERE salary <> 3000;` you can rewrite it so that NOT (including the implicit NOT in <>) is not used:

SELECT * FROM employee WHERE salary < 3000 OR salary > 3000;

Although both queries return the same results, the second query scheme is faster than the first. The second query allows Oracle to use the index on the salary column, while the first query cannot use the index.

2. The effect of how the SQL is written

(a) The effect of writing SQL with the same function and the same performance in different ways.

For example, one SQL statement is written by programmer A as Select * from zl_yhjbqk

Programmer B writes it as Select * from dlyx.zl_yhjbqk (prefixed with the table owner)

Programmer C writes it as Select * from DLYX.ZLYHJBQK (uppercase table name)

Programmer D writes it as Select * from DLYX.ZLYHJBQK (extra spaces in the middle)

All four SQL statements above produce the same result and the same execution time after Oracle analysis, but under the principle of Oracle's shared memory (SGA), Oracle analyzes each distinct SQL text once and consumes shared memory for it. If the SQL strings are written exactly the same, including format, Oracle analyzes the statement only once and keeps only one copy of the analysis results in shared memory. This not only reduces the time spent analyzing SQL, but also reduces duplicate information in shared memory, and lets Oracle accurately count the execution frequency of each SQL statement.

(b) The effect of the order of conditions after WHERE

The order of the conditions after the WHERE clause has a direct effect on queries against large tables. For example:

SELECT * FROM zl_yhjbqk WHERE dy_dj = '1KV below' AND xh_bz = 1
SELECT * FROM zl_yhjbqk WHERE xh_bz = 1 AND dy_dj = '1KV below'

In the two statements above, the fields dy_dj (voltage level) and xh_bz (cancelled-user flag) are not indexed, so both execute as full table scans. The condition dy_dj = '1KV below' matches 99% of the record set, while xh_bz = 1 matches only 0.5%. With the first statement, 99% of the records undergo the dy_dj and xh_bz comparisons, whereas with the second only 0.5% of the records do, so the CPU usage of the second statement is clearly lower than that of the first.

(c) The effect of the order of the queried tables

The order of the tables listed after FROM affects SQL performance. When there are no indexes and Oracle has no statistics on the tables, Oracle joins the tables in the order in which they appear, so a poor table order can produce a very resource-consuming cross join of the data. (Note: if the tables have been analyzed, Oracle automatically joins the small tables first and the large tables afterwards.)
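A sketch of the rule-based behavior described above (table names are hypothetical; `small_tab` is assumed to hold far fewer rows than `big_tab`). Under the rule-based optimizer, the last table in the FROM list is the driving table, so the smallest table goes last:

```sql
-- small_tab, placed last, becomes the driving table
SELECT b.col1, s.col2
  FROM big_tab b, small_tab s
 WHERE b.key = s.key;
```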

3. Use of indexes by SQL statements

(a) Some optimizations for conditional fields

Fields processed by a function cannot use the index, for example:

substr(hbs_bh,1,4) = '5400', optimized as: hbs_bh LIKE '5400%'
trunc(sk_rq) = trunc(sysdate), optimized as: sk_rq >= trunc(sysdate) AND sk_rq < trunc(sysdate + 1)

Fields with an explicit or implicit operation on them cannot use the index, for example: ss_df + 20 > 50, optimized as: ss_df > 30

'X' || hbs_bh > 'X5400021452', optimized as: hbs_bh > '5400021452'
sk_rq + 5 = sysdate, optimized as: sk_rq = sysdate - 5
hbs_bh = 5401002554, optimized as: hbs_bh = '5401002554'. Note: this condition implicitly applies a to_number conversion to hbs_bh, because the hbs_bh field is of character type.

Conditions involving operations on multiple fields of the same table cannot use the index, for example:

ys_df > cx_df, cannot be optimized
qc_bh || kh_bh = '5400250000', optimized as: qc_bh = '5400' AND kh_bh = '250000'

4. More shared tips on SQL optimization

(1) Choose the most efficient table name order (valid only in the rule-based optimizer):

The Oracle parser processes the table names in the FROM clause from right to left, so the last table written in the FROM clause (the driving table) is processed first. When the FROM clause contains multiple tables, you must choose the table with the fewest records as the base table. If more than 3 tables are joined, you need to choose the intersection table (the table referenced by the other tables) as the base table.

(2) The order of joins in the WHERE clause:

Oracle parses the WHERE clause from the bottom up. According to this principle, joins between tables must be written before the other WHERE conditions, and the conditions that can filter out the largest number of records must be written at the end of the WHERE clause.

(3) Avoid the use of ' * ' in the SELECT clause:

During parsing, Oracle converts '*' into the full list of column names, a task it performs by querying the data dictionary, which means extra time is spent.
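A minimal sketch (the column list is a hypothetical illustration): name only the columns you need instead of '*':

```sql
-- Avoid: SELECT * FROM emp;
SELECT empno, ename, sal
  FROM emp;
```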

(4) Reduce the number of accesses to the database:

Oracle does a lot of work internally: Parsing SQL statements, estimating index utilization, binding variables, reading chunks, and more.

(5) Reset the ARRAYSIZE parameter in SQL*Plus, SQL*Forms, and Pro*C to increase the amount of data retrieved per database access; the recommended value is 200.

(6) Use the Decode function to reduce processing time:

Use the DECODE function to avoid repeatedly scanning the same records or repeatedly joining the same table.
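A sketch of the classic pattern (the table emp and the department values are hypothetical): two counts that would otherwise require two scans can be combined into one scan with DECODE:

```sql
-- Instead of two separate full scans:
--   SELECT COUNT(*) FROM emp WHERE dept_no = 20;
--   SELECT COUNT(*) FROM emp WHERE dept_no = 30;
-- one scan computes both counts:
SELECT COUNT(DECODE(dept_no, 20, 'X', NULL)) dept20_count,
       COUNT(DECODE(dept_no, 30, 'X', NULL)) dept30_count
  FROM emp;
```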

(7) Integrate simple, unrelated database accesses:

If you have several simple database query statements, you can integrate them into a single query (even when there is no relationship between them).

(8) Delete duplicate records:

The most efficient method of deleting duplicate records (using ROWID), for example:

DELETE FROM emp e
 WHERE e.rowid > (SELECT MIN(x.rowid)
                    FROM emp x
                   WHERE x.emp_no = e.emp_no);

(9) Replace Delete with truncate:

When you delete records in a table, in general the rollback segments are used to hold information for recovery: if you have not committed the transaction, Oracle restores the data to its state before the delete (more precisely, before the delete was executed). With TRUNCATE, however, the rollback segment holds no recoverable information; once the command runs, the data cannot be recovered. TRUNCATE therefore uses very few resources and executes quickly. (Translator's note: TRUNCATE applies only to deleting a whole table, and TRUNCATE is DDL, not DML.)

(10) Use commit as much as possible:

Whenever possible, use COMMIT in your programs as much as possible. This improves program performance and reduces demand on the database, because the resources released by COMMIT are:

A. The information on the rollback segments used to recover data.
B. Locks acquired by program statements.
C. Space in the redo log buffer.
D. Oracle's internal overhead of managing the three resources above.
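A hedged sketch of periodic commits in a bulk delete loop (the table name, retention rule, and batch size are hypothetical; committing inside a loop releases the resources listed above per batch, at the cost of making the operation non-atomic):

```sql
BEGIN
  LOOP
    DELETE FROM big_log
     WHERE log_date < SYSDATE - 365
       AND ROWNUM <= 10000;          -- delete in batches of 10,000 rows
    EXIT WHEN SQL%ROWCOUNT = 0;      -- stop when nothing more was deleted
    COMMIT;                          -- release rollback, locks, redo space
  END LOOP;
  COMMIT;
END;
/
```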

(11) Replace the HAVING clause with the WHERE clause:

Avoid the HAVING clause: HAVING filters the result set only after all records have been retrieved, and this processing requires sorting, totals, and so on. If you can limit the number of records with the WHERE clause instead, you reduce that overhead.

Of the three clauses that can carry conditions, ON executes first, WHERE second, and HAVING last. ON filters out non-matching records before any statistics are computed, so it reduces the data that the intermediate operations must handle and should be the fastest. WHERE should be faster than HAVING, because it filters data before aggregation; ON, however, applies only to joins between tables, so on a single table only WHERE and HAVING compete. For a single-table aggregate query, if the filter condition does not involve a computed field, the results are the same either way, though WHERE can use index-style optimizations (the original text mentions the Rushmore technique) while HAVING cannot, so HAVING is slower. If the condition does involve a computed field, the field's value is undetermined before the computation: WHERE is applied before the computation and HAVING after it, so in that case the results differ. In multi-table join queries, ON takes effect earlier than WHERE: the system first joins the tables into an intermediate result according to the join conditions, then filters with WHERE, then computes, and finally filters the computed results with HAVING. For a filter condition to play its intended role, you must first understand when it should take effect, and then decide where to put it.

(12) Reduce the query to the table:

In SQL statements that contain subqueries, pay special attention to reducing the number of queries against the table. Example:

SELECT tab_name
  FROM tables
 WHERE (tab_name, db_ver) = (SELECT tab_name, db_ver
                               FROM tab_columns
                              WHERE version = 604);

(13) Improve SQL efficiency through internal functions:

Complex SQL often sacrifices execution efficiency. Being able to grasp the above methods of solving problems with functions is very meaningful in practical work.

(14) Using table aliases (alias):

When joining multiple tables in a SQL statement, use table aliases and prefix each column with its alias. This reduces parsing time and reduces syntax errors caused by column ambiguity.
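A minimal sketch (tables emp and dept as in the other examples in this article):

```sql
SELECT e.ename, d.dept_name
  FROM emp e, dept d            -- aliases e and d declared here
 WHERE e.deptno = d.deptno;     -- every column prefixed with its alias
```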

(15) Substitute EXISTS for IN, and NOT EXISTS for NOT IN:

In many queries based on a base table, it is often necessary to join another table in order to satisfy some condition. In such cases, using EXISTS (or NOT EXISTS) usually increases the efficiency of the query. In a subquery, the NOT IN clause performs an internal sort and merge. In either case, NOT IN is the least efficient option (because it performs a full table traversal of the table in the subquery). To avoid NOT IN, rewrite it as an outer join or as NOT EXISTS.


(Efficient)
SELECT *
  FROM emp                       -- base table
 WHERE empno > 0
   AND EXISTS (SELECT 'X'
                 FROM dept
                WHERE dept.deptno = emp.deptno
                  AND loc = 'MELB');

(Inefficient)
SELECT *
  FROM emp                       -- base table
 WHERE empno > 0
   AND deptno IN (SELECT deptno FROM dept WHERE loc = 'MELB');

(16) Identify SQL statements with 'inefficient execution':

Although there are many graphical tools for SQL optimization at the moment, writing your own SQL script to find problem statements is always the best approach:

SELECT executions, disk_reads, buffer_gets,
       ROUND((buffer_gets - disk_reads) / buffer_gets, 2) hit_ratio,
       ROUND(disk_reads / executions, 2) reads_per_run,
       sql_text
  FROM v$sqlarea
 WHERE executions > 0
   AND buffer_gets > 0
   AND (buffer_gets - disk_reads) / buffer_gets < 0.8
 ORDER BY 4 DESC;

(17) Use Index to improve efficiency:

An index is a conceptual part of a table used to improve the efficiency of retrieving data; Oracle uses a complex self-balancing B-tree structure. In general, querying data through an index is faster than a full table scan. The Oracle optimizer uses indexes when finding the best path for queries and UPDATE statements. Indexes also increase efficiency when joining multiple tables. Another advantage of indexes is that they provide uniqueness validation for the primary key. Except for the LONG and LONG RAW data types, you can index almost any column. Using indexes on large tables is particularly effective; of course, you will find that using an index can also improve efficiency when scanning small tables. Although indexes improve query efficiency, they have a cost: indexes need storage space and regular maintenance, and the index itself is modified whenever a record is inserted into or deleted from the table, or an indexed column is modified. This means that each INSERT, DELETE, and UPDATE pays roughly 4 or 5 additional disk I/Os. Because indexes need extra storage and processing, unnecessary indexes actually slow query response times down. Periodic rebuilding of indexes is necessary:
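The paragraph above ends by calling for periodic index rebuilds; a minimal sketch of the command (the index and tablespace names are hypothetical):

```sql
ALTER INDEX emp_pk REBUILD TABLESPACE users;
```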


(18) Replace distinct with exists:

When submitting a query containing one-to-many table information (such as a department table and an employee table), avoid using DISTINCT in the SELECT clause. You can generally consider replacing it with EXISTS, which makes the query faster, because the RDBMS core returns the result as soon as the subquery's condition is satisfied. Example:

(Inefficient)
SELECT DISTINCT dept_no, dept_name
  FROM dept d, emp e
 WHERE d.dept_no = e.dept_no;

(Efficient)
SELECT dept_no, dept_name
  FROM dept d
 WHERE EXISTS (SELECT 'X' FROM emp e WHERE e.dept_no = d.dept_no);

(19) Write SQL statements in uppercase, because Oracle always parses the SQL statement first, converting lowercase letters to uppercase before executing.

(20) In Java code, use the string concatenation operator '+' as little as possible when assembling SQL.

(21) Avoid using NOT on indexed columns. In general, avoid NOT on indexed columns; NOT produces the same effect as using a function on an indexed column. When Oracle 'encounters' NOT, it stops using the index and performs a full table scan instead.
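A minimal sketch (assuming an index on status, and assuming for illustration that status takes only the two values 'VALID' and 'INVALID'):

```sql
-- Inefficient: NOT disables the index on status
SELECT * FROM requests WHERE NOT (status = 'VALID');

-- Efficient: a positive comparison can use the index
-- (equivalent only under the two-value assumption above)
SELECT * FROM requests WHERE status = 'INVALID';
```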

(22) Avoid using calculations on indexed columns

In the WHERE clause, if an indexed column is part of a function, the optimizer uses a full table scan instead of the index. Example:

Inefficient:
SELECT ... FROM dept WHERE sal * 12 > 25000;

Efficient:
SELECT ... FROM dept WHERE sal > 25000 / 12;
(23) Use >= instead of > where it is more efficient:

SELECT * FROM emp WHERE deptno >= 4
SELECT * FROM emp WHERE deptno > 3

The difference is that with the former the DBMS jumps directly to the first record with DEPTNO equal to 4, while with the latter it first locates the records with DEPTNO=3 and then scans forward to the first record with DEPTNO greater than 3.

(24) Replace OR with UNION (applicable to indexed columns)

In general, replacing OR in the WHERE clause with UNION gives better results; using OR on indexed columns causes a full table scan. Note that the rule above is valid only when both columns are indexed. If either column is not indexed, query efficiency may actually drop because you avoided OR. In the following example, indexes exist on both loc_id and region.

Efficient:
SELECT loc_id, loc_desc, region
  FROM location
 WHERE loc_id = 10
UNION
SELECT loc_id, loc_desc, region
  FROM location
 WHERE region = 'MELBOURNE';

Inefficient:
SELECT loc_id, loc_desc, region
  FROM location
 WHERE loc_id = 10 OR region = 'MELBOURNE';

If you insist on using OR, put the indexed column expected to return the fewest records first.

(25) Replace OR with IN

This is a simple rule that is easy to remember, but the actual execution effect must still be tested; under Oracle 8i, the two execution paths appear to be the same.

Inefficient:
SELECT ... FROM location WHERE loc_id = 10 OR loc_id = 20 OR loc_id = 30;

Efficient:
SELECT ... FROM location WHERE loc_id IN (10, 20, 30);

(26) Avoid using is null and is not NULL on indexed columns

Avoid using columns that can be NULL in an index, since Oracle will not be able to use the index for NULL tests. For a single-column index, if a column contains a NULL value, that record does not exist in the index. For a composite index, if every indexed column is NULL, the record likewise does not exist in the index; if at least one column is not NULL, the record does exist in the index. For example, if a unique index is built on columns A and B of a table, and a record with A,B value (123, null) exists in the table, Oracle will not accept the insertion of another record with the same A,B value (123, null). However, if all indexed columns are NULL, Oracle considers the whole key value to be null, and null is never equal to null; therefore you can insert 1000 records with identical key values, because they are all 'empty'! Because NULL values do not exist in indexed columns, a NULL comparison on an indexed column in the WHERE clause causes Oracle to deactivate the index.

Inefficient: (index invalidated)
SELECT ... FROM department WHERE dept_code IS NOT NULL;

Efficient: (index valid)
SELECT ... FROM department WHERE dept_code >= 0;

(27) Always use the first column of the index:

If an index is built on multiple columns, the optimizer chooses to use the index only when its first column (the leading column) is referenced by the WHERE clause. This too is a simple but important rule: when only the second column of the index is referenced, the optimizer uses a full table scan and ignores the index.
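A minimal sketch (a hypothetical composite index on (dept_no, job) of the emp table):

```sql
CREATE INDEX emp_dept_job_idx ON emp (dept_no, job);

-- Can use the index: the leading column dept_no is referenced
SELECT * FROM emp WHERE dept_no = 20;

-- Ignores the index: only the second column is referenced
SELECT * FROM emp WHERE job = 'CLERK';
```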

(28) Replace union with Union-all (if possible):

When a SQL statement needs to UNION two query result sets, the two result sets are merged and then sorted to remove duplicates before the final result is output. If you use UNION ALL instead of UNION, this sort is unnecessary, and efficiency improves accordingly. Note that UNION ALL repeatedly outputs records that appear in both result sets, so you must still analyze from the business requirements whether using UNION ALL is feasible. UNION sorts the result set, and this sort uses memory controlled by SORT_AREA_SIZE, so tuning that memory is also very important. For example:

Inefficient:
SELECT acct_num, balance_amt
  FROM debit_transactions
 WHERE tran_date = '31-DEC-95'
UNION
SELECT acct_num, balance_amt
  FROM debit_transactions
 WHERE tran_date = '31-DEC-95'

Efficient:
SELECT acct_num, balance_amt
  FROM debit_transactions
 WHERE tran_date = '31-DEC-95'
UNION ALL
SELECT acct_num, balance_amt
  FROM debit_transactions
 WHERE tran_date = '31-DEC-95'

(29) Use WHERE to replace ORDER BY:

The ORDER BY clause uses an index only under two strict conditions:
All columns in the ORDER BY must be contained in the same index and keep the order they have in the index.
All columns in the ORDER BY must be defined as NOT NULL.
Additionally, the index used in the WHERE clause and the index used in the ORDER BY clause cannot be different indexes.

For example:

The table dept contains the following columns:

Dept_code PK not NULL
Dept_desc not NULL
Dept_type NULL

Inefficient: (index not used)
SELECT dept_code FROM dept ORDER BY dept_type;

Efficient: (index used)
SELECT dept_code FROM dept WHERE dept_type > 0;

(30) Avoid changing the types of indexed columns:

Oracle automatically makes simple type conversions to columns when comparing data of different data types.
Suppose Empno is an indexed column of a numeric type.

SELECT ... FROM EMP WHERE EMPNO = '123'

In fact, after Oracle type conversion, the statement translates to:

SELECT ... FROM EMP WHERE EMPNO = TO_NUMBER('123')

Fortunately, the type conversion did not occur on the index column, and the purpose of the index was not changed.
Now, suppose Emp_type is an indexed column of character type.

SELECT ... FROM EMP WHERE EMP_TYPE = 123

This statement is converted by Oracle to:

SELECT ... FROM EMP WHERE TO_NUMBER(EMP_TYPE) = 123

This index will not be used because of the type conversion that occurs internally! To prevent Oracle from implicitly converting types in your SQL, it is best to make the type conversion explicit. Note that when characters and numbers are compared, as the example shows, Oracle converts the character data to the numeric type.


SELECT emp_name FROM employee WHERE salary > 3000;

In this statement, if salary is a FLOAT type, the optimizer must convert the literal for comparison; since 3000 is an integer, we should write 3000.0 in our programs rather than wait for the DBMS to convert it at run time. The same applies to conversions between character and integer data.

(31) Be careful with the WHERE clause:

The WHERE clauses of some SELECT statements do not use the index. Here are some examples.
In the following cases: (1) '!=' will not use the index; remember, an index can only tell you what exists in the table, not what does not exist in it. (2) '||' is the character concatenation function; as with other functions, it deactivates the index. (3) '+' is a mathematical operator; as with other mathematical functions, it deactivates the index. (4) The same indexed columns cannot be compared with each other, or a full table scan will be used.

(32) A few points on the effectiveness of indexes:

A. If a query retrieves more than 30% of the records in a table, using an index gives no significant efficiency gain.
B. In certain situations, using an index may be slower than a full table scan, but the difference stays within the same order of magnitude; in general, using an index is several times or even several thousand times faster than a full table scan!

(33) Avoid using resource-consuming operations:

SQL statements containing DISTINCT, UNION, MINUS, INTERSECT, or ORDER BY start the resource-intensive sort function of the SQL engine. DISTINCT needs one sort operation, while the others need the sort to be performed at least twice. Usually, SQL statements with UNION, MINUS, and INTERSECT can be rewritten in other ways. If your database's SORT_AREA_SIZE is well tuned, using UNION, MINUS, and INTERSECT can also be considered; after all, their readability is very strong.

(34) Optimize GROUP by:

Increase the efficiency of the GROUP BY statement by filtering out unwanted records before grouping. The following two queries return the same result, but the second is obviously much faster.

Inefficient:
SELECT job, AVG(sal)
  FROM emp
 GROUP BY job
HAVING job = 'PRESIDENT'
    OR job = 'MANAGER';

Efficient:
SELECT job, AVG(sal)
  FROM emp
 WHERE job = 'PRESIDENT'
    OR job = 'MANAGER'
 GROUP BY job;
