Powerful new T-SQL syntax greatly improves SQL Server's programmability

Released on: 11/15/2004 | updated on: 11/15/2004

Itzik Ben-Gan

This article is based on Beta 1 of Microsoft SQL Server, code-named "Yukon". All information contained herein is subject to change.

Download the code in this article: tsqlinyukon.exe (117kb)

Note: This article was written before the product was released to manufacturing. Therefore, we cannot guarantee that any details contained here will be identical to those in the shipping product. The information in this article describes the product as of the time of writing and is intended for planning purposes only. This information is subject to change at any time without notice.

Summary: The T-SQL language in the upcoming version of SQL Server will provide more power and flexibility than earlier versions. Added and enhanced features include error handling with TRY/CATCH constructs, SNAPSHOT isolation, and WAITFOR enhancements. Also important are the BULK rowset provider, common table expressions, recursive queries, the PIVOT and UNPIVOT operators, and more. This article introduces these features so that readers can prepare for the next version of SQL Server.

Content on this page
Error Handling
SNAPSHOT isolation
WAITFOR command
BULK rowset provider
TOP option
APPLY operator
Common table expressions and recursive queries
PIVOT operator
Triggers and notifications

Beta 1 of the next version of SQL Server, code-named "Yukon", introduces many T-SQL enhancements and new features that improve your expressiveness, error management, and performance. In this article I will discuss some of the important enhancements and new features, including error handling, recursive queries, and isolation. I will focus on error management and related improvements, and describe the other enhancements more briefly.

Beyond the features described here, T-SQL has many other important enhancements that I won't discuss in depth, either because they are not conceptually new or because they deserve separate treatment. These include enhancements related to the Service Broker messaging platform, partitioning, and XML. (For more information about the XML enhancements in Yukon, see "XML in Yukon: New Version Showcases Native XML Type and Advanced Data Handling" by Bob Beauchemin in this issue.)

Error Handling

SQL Server Yukon Beta 1 introduces a new TRY/CATCH construct for error handling in T-SQL. This construct lets you trap transaction-abort errors, including errors that would have terminated the batch in versions of SQL Server earlier than Yukon (conversion errors, deadlocks, and so on). The types of errors that the new construct cannot handle are those that terminate the session (usually errors with a severity of 21 and higher, such as hardware errors). Figure 1 shows the general form of the error-handling code.

The XACT_ABORT setting is turned on so that SQL Server treats any error as a transaction-abort error, which makes it trappable and handleable. Within the TRY block, any error that occurs in an explicit transaction passes control to the CATCH block that follows the TRY block. If no error occurs, the CATCH block is skipped. If you want to investigate the type of error and respond accordingly, you must save the value returned by @@error into a variable at the very beginning of the CATCH block and then start your investigation. Otherwise, the value @@error returns may be incorrect, because any statement other than DECLARE changes it.
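Figure 1 is not reproduced in this copy. A minimal sketch of the pattern the paragraph describes (table names and the remedial logic are placeholders) might look like this:

```sql
SET XACT_ABORT ON  -- treat any error as a transaction-abort error

BEGIN TRY
  BEGIN TRAN
    -- data modifications go here
  COMMIT TRAN
END TRY
BEGIN CATCH
  -- save @@error immediately; almost any statement resets it
  DECLARE @err AS INT
  SET @err = @@error

  ROLLBACK  -- the transaction is doomed; roll back before remedial work

  -- investigate @err and react accordingly
END CATCH
```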

When a transaction-abort error occurs in a transaction inside the TRY block and control passes to the CATCH block, the transaction enters a doomed state. Locks are not released and persisted work is not reversed until you explicitly issue a ROLLBACK command. Until the ROLLBACK is issued, you are not allowed to initiate any activity that requires opening an implicit or explicit transaction. You can examine the contents of the resources that were changed by the transaction that caused the error, so you can tell what changed, but you must issue a ROLLBACK before taking any remedial action that requires a transaction. Note that to trap errors within a CATCH block, you must write code with nested TRY/CATCH constructs. For a more detailed example, first create an ErrorLog table, which the error-handling code writes audit rows to, and then create the tables T1 and T2 to query, as shown in the code in Figure 2.

Next, run the script in Figure 3 (call it Script 1) from a new connection (call it connection 1). Script 1 sets the lock timeout to 30 seconds and the deadlock priority to LOW, so that it voluntarily becomes the deadlock victim when it conflicts with a process running at normal priority. The code in the TRY block updates T1, waits 10 seconds, and then selects from T2. If the transaction completes without error, a row is inserted into the ErrorLog table with a comment indicating that the transaction completed successfully.

The CATCH block is designed to trap primary key violation errors, lock timeout expirations, and deadlock errors, with retry logic for the deadlocks. You can change the number of retries by changing the value assigned to the variable @retry at the beginning of the code; it is currently set to 2.

After running the code in Figure 3 for the first time, examine the contents of ErrorLog. Note that the transaction completed successfully. To test a primary key violation error, open a new connection (call it connection 2) and run the following code:
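The code block that belonged here is missing from this copy. Based on the description, it inserts a row whose key collides with the one Script 1 inserts; the key value below is assumed for illustration:

```sql
-- Key value assumed: use the same key that Script 1's INSERT uses
INSERT INTO T1 VALUES(3)
```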

Go back to connection 1 and run Script 1 again. If you examine the contents of ErrorLog, you should see a primary key violation error recorded there. Go to connection 2 and delete the row you just inserted by running the following command:

To test whether the lock timeout expires, run the following code in connection 2:

  UPDATE T1 SET col1 = 1
Go back to connection 1 and run Script 1 again. After about 30 seconds, an error should occur. Examine the contents of ErrorLog; you should find a lock timeout expiration recorded there. Go to connection 2 and issue a ROLLBACK command to roll back the transaction.

To test for a deadlock, go to connection 2 and paste the following code, but don't run it yet:

  DECLARE @i AS INT
  SET @i = 1
  WHILE @i <= 2
  BEGIN
    BEGIN TRAN
      UPDATE T2 SET col1 = 2
      WAITFOR DELAY '00:00:10'
      SELECT * FROM T1  -- access T1 to complete the deadlock cycle (line assumed)
    COMMIT TRAN
    WAITFOR DELAY '00:00:05'
    SET @i = @i + 1
  END
Go to connection 1, run the code in Script 1, and then immediately run the code in connection 2. After about a minute, you will see an error in connection 1. Examine the contents of ErrorLog; notice that two retry attempts were made after deadlock errors, and that the third attempt completed successfully with no error. Query the ErrorLog table and view its contents.

Finally, if you want to raise your own transaction abort error within the TRY block, you can call the RAISERROR command with the TRAN_ABORT option.
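As a sketch of such a call (the message text and severity are placeholders; the TRAN_ABORT option is as described in the paragraph above):

```sql
-- Raise a custom error and mark it as a transaction-abort error
RAISERROR('Deliberate transaction-abort error.', 16, 1) WITH TRAN_ABORT
```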


SNAPSHOT isolation

Yukon introduces a new isolation level called SNAPSHOT, which lets you work in a mode where writers don't block readers; readers are given the committed version of the data they request. SQL Server Yukon maintains a linked list in tempdb that tracks row changes and constructs an older, committed version of the data for readers. This isolation level is useful for optimistic locking scenarios, where UPDATE conflicts are uncommon. If process 1 retrieves data and later attempts to modify it, and process 2 modified the same data between process 1's retrieval and modification, SQL Server raises an error when process 1 attempts the modification because of the conflict. Process 1 can then trap the error and retry the transaction. This mode works efficiently when update conflicts are uncommon.

To work in the SNAPSHOT isolation level, you must turn on the database option ALLOW_SNAPSHOT_ISOLATION, as you will see shortly. To simulate writers not blocking readers, create a testdb database, turn on the option, and create a T1 table with the value 'Version1' in the datacol column by running the following code:

CREATE DATABASE testdb
ALTER DATABASE testdb SET ALLOW_SNAPSHOT_ISOLATION ON

USE testdb
CREATE TABLE T1 (
  keycol INT NOT NULL PRIMARY KEY,  -- key column reconstructed from later queries
  datacol VARCHAR (10) NOT NULL)
INSERT INTO T1 VALUES (1, 'Version1')
Issue the following code from connection 1, which opens a transaction and changes the value in datacol to 'Version2':

USE testdb
BEGIN TRAN
  UPDATE T1 SET datacol = 'Version2' WHERE keycol = 1
Go to connection 2 and run the following code, which sets the session's isolation level to SNAPSHOT and retrieves the contents of T1:

USE testdb
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT * FROM T1
Note that 'Version1' is retrieved, even though connection 1 has changed the value to 'Version2' (the change has not yet been committed).

Now go to connection 1, commit the transaction by issuing a COMMIT command, and close all connections. To try out optimistic locking, open two new connections, go to connection 1, and run the following code, which sets the session's isolation level to SNAPSHOT, opens a transaction, and retrieves data from T1:
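The snippet the article refers to is missing from this copy; reconstructed from the description, using the table created above:

```sql
USE testdb
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
  SELECT * FROM T1
```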

Go to connection 2 and issue an UPDATE command:

UPDATE T1 SET datacol = 'Version3' WHERE keycol = 1
Go back to connection 1 and try to update the same data you retrieved earlier, which has meanwhile been modified by connection 2:

UPDATE T1 SET datacol = 'Version4' WHERE keycol = 1
You will get an error informing you that SQL Server cannot use snapshot isolation to access table T1 in the database testdb, and you should retry the transaction.


WAITFOR command

The WAITFOR command is enhanced in Yukon in a number of ways. Besides waiting for a specified duration or until a specified datetime value, you can now request to wait for a T-SQL statement to affect at least one row. You can specify that the command wait for one of the following statements: SELECT, INSERT, UPDATE, DELETE, or RECEIVE. The first four are self-explanatory; RECEIVE refers to receiving a message from a queue. You can optionally specify a timeout if you want to stop waiting after a specified number of milliseconds. The syntax of the WAITFOR command is as follows:

WAITFOR (<statement>) [, TIMEOUT <timeout_value>]
Another T-SQL enhancement in Yukon allows you to return output from data manipulation language (DML) statements other than SELECT (INSERT, UPDATE, DELETE). A new OUTPUT clause lets you request the new and old images of the columns by referring to the INSERTED and DELETED tables, similar to the way you refer to them in triggers. You can even specify an INTO clause and direct the output into a table variable. Another enhancement allows you to specify the READPAST hint in modifying statements, so that you can skip locked rows.
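As a sketch of how these pieces fit together (the table and column names reuse the MsgQueue example that follows; this is not the article's Figure 4), a DELETE that skips locked rows and captures the old images of the deleted rows in a table variable might look like this:

```sql
-- Table variable to hold the old images of the deleted rows
DECLARE @msgs TABLE (msgid INT, msgdata VARCHAR(15))

-- Delete one message, skipping rows locked by other processes,
-- and capture the deleted row via the OUTPUT ... INTO clause
DELETE TOP (1)
FROM MsgQueue WITH (READPAST)
OUTPUT deleted.msgid, deleted.msgdata INTO @msgs

SELECT * FROM @msgs  -- process the captured message
```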

An example that uses the aforementioned enhancements is to have several processes wait for a DELETE statement to delete at least one row from a table, direct the output into a table variable, and have each process handle a different part of the data in parallel. To see this, create the following MsgQueue table:

USE tempdb

CREATE TABLE MsgQueue (
  msgid INT NOT NULL IDENTITY PRIMARY KEY,  -- key column assumed
  msgdata VARCHAR (15) NOT NULL)
Open one or more connections and run the following code in each connection to periodically insert new messages in the table:

USE tempdb

WHILE 1 = 1
BEGIN
  INSERT INTO MsgQueue VALUES ('Msg' +
    CAST (CAST (RAND () * 1000000000 AS INT) AS VARCHAR (10)))
  WAITFOR DELAY '00:00:01'
END
Next, open a few other new connections and run the code in Figure 4 in each of them to simulate processing of the newly arriving messages.


BULK rowset provider

Yukon introduces a new BULK rowset provider that you specify in the OPENROWSET function to efficiently access a file in relational form. You can use the BULK provider in a way similar to the BULK INSERT statement, but without having to direct the output into a table. You must specify a format file, which is the same format file you use with bcp.exe or the BULK INSERT statement. The following code shows how to access a file called c:\temp\textfile1.txt using the format file c:\temp\textfile1.fmt, providing the alias C for the resulting table and the aliases col1, col2, and col3 for the result columns:

SELECT col1, col2, col3
FROM OPENROWSET (BULK 'c:\temp\textfile1.txt',
       FORMATFILE = 'c:\temp\textfile1.fmt') AS C (col1, col2, col3)
Besides the FORMATFILE option, you can specify the following options within the parentheses of the OPENROWSET function: CODEPAGE, DATAFILETYPE, FIELDTERMINATOR, FIRSTROW, LASTROW, and ROWTERMINATOR. You can also use INSERT SELECT to load the data efficiently into a table, specifying table hints for load options:

INSERT INTO MyTable WITH (TABLOCK)  -- target table and hint assumed
  SELECT col1, col2, col3
  FROM OPENROWSET (BULK 'c:\temp\textfile1.txt',
         FORMATFILE = 'c:\temp\textfile1.fmt') AS C (col1, col2, col3)

Using the BULK rowset provider, you can load a file into a table column more easily than you could with regular DML in the past. And for large objects, you are no longer limited to the TEXT, NTEXT, and IMAGE data types; you can now also use the VARCHAR(MAX), NVARCHAR(MAX), and VARBINARY(MAX) data types. The new MAX option allows you to manipulate large objects just like regular data types. For example, the following UPDATE statement stores a text file in a table column defined as VARCHAR(MAX):

UPDATE T1  -- target table assumed
  SET clob_col = (SELECT clob_data
                   FROM OPENROWSET (BULK 'c:\temp\textfile1.txt',
                          SINGLE_CLOB) AS C (clob_data))
WHERE keycol = 1
The SINGLE_CLOB option tells SQL Server that the large object is in regular character format. Similarly, SINGLE_NCLOB specifies Unicode character format, and SINGLE_BLOB specifies binary format. The name of the returned column is BulkColumn, but as the previous code snippet shows, you can provide your own alias for it.


TOP option

The T-SQL TOP option has two significant enhancements in Yukon. You can now specify an expression as the argument to TOP; the expression can contain variables or even self-contained queries. You can also use the TOP option in modifying DML statements (INSERT, UPDATE, DELETE).

To specify an expression, you must enclose it in parentheses. The expression should be of the BIGINT data type when the PERCENT option is not used, and a FLOAT value in the range 0 through 100 when PERCENT is used. The following code shows how to use an expression with a variable to return the requested number of earliest orders from the SalesOrderHeader table in the AdventureWorks database:

USE AdventureWorks

DECLARE @n AS BIGINT
SET @n = 5

SELECT TOP (@n) *
FROM SalesOrderHeader AS SOH
ORDER BY OrderDate, SalesOrderID
SalesOrderID is used as a tiebreaker. Similarly, the following example shows how to use the PERCENT option to return the requested percentage of earliest orders:

DECLARE @p AS FLOAT
SET @p = 0.01

SELECT TOP (@p) PERCENT *
FROM SalesOrderHeader AS SOH
ORDER BY OrderDate, SalesOrderID
You should get 4 rows back, because the SalesOrderHeader table contains 31,519 rows, and 31,519 x 0.0001 equals 3.15, which rounds up to 4.

The main motivation for allowing TOP in modifying DML statements is to replace the SET ROWCOUNT option, which SQL Server cannot optimize as well. SET ROWCOUNT was frequently used with modifying statements to batch-process large numbers of rows, preventing the transaction log from filling up and avoiding the escalation of individual locks to a full table lock. To see how to use the new TOP feature to delete rows in batches, first copy the contents of the SalesOrderHeader table into MySalesOrderHeader, and run the following code to create an index on the OrderDate and SalesOrderID columns:

SELECT *
INTO MySalesOrderHeader
FROM SalesOrderHeader

CREATE CLUSTERED INDEX idx_cl_OrderDate_SalesOrderID  -- index name assumed
  ON MySalesOrderHeader (OrderDate, SalesOrderID)
To delete all rows with an order year earlier than 2003 in batches of 1,000 rows, use the following code:

WHILE 1 = 1
BEGIN
  DELETE TOP (1000)
  FROM MySalesOrderHeader WHERE OrderDate < '20030101'

  IF @@rowcount < 1000 BREAK
END
SQL Server optimizes such code much better than code that uses the SET ROWCOUNT option. Now you can drop the MySalesOrderHeader table:

DROP TABLE MySalesOrderHeader

APPLY operator

APPLY is a new relational operator that you specify in the FROM clause of a query. It allows you to invoke a table-valued function for each row of an outer table, optionally using the outer table's columns as the function's arguments. The APPLY operator comes in two forms: CROSS APPLY and OUTER APPLY. CROSS APPLY doesn't return the outer table's row if the table-valued function returns an empty set for it, while OUTER APPLY returns such a row with NULL values in place of the function's columns. To use the APPLY operator, first create the following Arrays table, which stores arrays of comma-separated values:

CREATE TABLE Arrays (
  arrid INT NOT NULL IDENTITY PRIMARY KEY,  -- key column assumed
  array VARCHAR (7999) NOT NULL)

-- First two inserts reconstructed from the sample output below
INSERT INTO Arrays VALUES ('')
INSERT INTO Arrays VALUES ('10')
INSERT INTO Arrays VALUES ('20,40,30')
INSERT INTO Arrays VALUES ('-1,-3,-5')
Next, create the fn_splitarr table-valued function, which accepts an array as an argument and returns a table containing the individual elements and their positions (see Figure 5). To test the function, run the following code:

SELECT * FROM fn_splitarr ('20,40,30')
The output should look like the following rows:

pos value
--- -----
1 20
2 40
3 30
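Figure 5 is not reproduced in this copy. A sketch of fn_splitarr that is consistent with the sample output above (the implementation details are my own, not necessarily the figure's) might look like this:

```sql
CREATE FUNCTION fn_splitarr (@arr AS VARCHAR(7999))
RETURNS @tbl TABLE (pos INT NOT NULL, value INT NOT NULL)
AS
BEGIN
  DECLARE @pos AS INT, @curpos AS INT, @nextpos AS INT
  SET @pos = 1
  SET @curpos = 1

  -- An empty string yields an empty result set (see arrid 1 below)
  WHILE @curpos <= LEN(@arr)
  BEGIN
    -- Find the next comma (or pretend one follows the last element)
    SET @nextpos = CHARINDEX(',', @arr, @curpos)
    IF @nextpos = 0 SET @nextpos = LEN(@arr) + 1

    INSERT INTO @tbl VALUES(@pos,
      CAST(SUBSTRING(@arr, @curpos, @nextpos - @curpos) AS INT))

    SET @pos = @pos + 1
    SET @curpos = @nextpos + 1
  END
  RETURN
END
```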
Now use the CROSS APPLY operator to call the function for each row in the Arrays:

SELECT A.arrid, F.*
FROM Arrays AS A
  CROSS APPLY fn_splitarr (array) AS F
Then check the values in your output against the following rows:

arrid pos value
----- --- -----
2 1 10
3 1 20
3 2 40
3 3 30
4 1 -1
4 2 -3
4 3 -5
Note that the row with arrid 1 in Arrays was not returned, because the function returned an empty set for it. To return all rows from Arrays, regardless of whether the function returns rows for them, use OUTER APPLY.

Support for the APPLY operator brings an additional feature: you can now refer to table-valued functions in subqueries, specifying columns of outer tables as arguments. For example, the following code returns the arrays whose elements sum to 10 or less:

SELECT arrid  -- select list assumed
FROM Arrays
WHERE (SELECT SUM (value) FROM fn_splitarr (array)) <= 10

Common table expressions and recursive queries

Common table expressions (CTEs) allow you to write named table expressions that persist only for the duration of a query. In their simple form, they provide a middle ground between views and derived tables: like views, a CTE can be referenced multiple times in the outer query; like derived tables, it persists only for the duration of the query. In their more complex form, you can write recursive CTEs, which make it much easier and more efficient to manipulate trees and graphs.

To define a CTE, use a WITH clause followed by the CTE's name, optionally followed by a parenthesized list of aliases for the result columns. Then comes an AS clause and, in parentheses, the CTE's query expression. Finally, you provide an outer query that refers to the CTE's result. Within the CTE's query expression, you can refer to variables.
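As a minimal illustration of the syntax just described (the CTE name, column aliases, and threshold are invented for this sketch; the SalesOrderHeader table is the one used elsewhere in this article):

```sql
-- CTE name and column aliases, then AS and the query expression in parentheses
WITH CustOrders (CustomerID, NumOrders) AS
(
  SELECT CustomerID, COUNT(*)
  FROM SalesOrderHeader
  GROUP BY CustomerID
)
-- Outer query referring to the CTE's result
SELECT CustomerID, NumOrders
FROM CustOrders
WHERE NumOrders > 10
```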

The code in Figure 6 shows a simple example: a nonrecursive CTE that returns customers' yearly total sales order values. Obviously, you could get the same result without a CTE. But consider this: what if you want each row to also return the previous year's total and the difference from the current year? If you use derived tables, you must specify the query for the current year in one derived table and the query for the previous year in another, and join the two in an outer query. With a CTE, you write one query that returns the yearly totals and refer to it twice in the outer query (see Figure 7).

But the real power of CTEs lies in their recursive form. Within the CTE's parentheses you can define queries that are self-contained or that refer back to the CTE's name. Self-contained queries (those that don't refer to the CTE's name) are called anchor members and are invoked only once. Queries that refer back to the CTE's name are called recursive members and are invoked repeatedly until they return no rows. Anchor members can be appended to each other with either the UNION or the UNION ALL operator, depending on whether you want to eliminate duplicates. A recursive member must be appended with the UNION ALL operator.

For an example scenario showing the purpose of recursive CTEs, consider the BillOfMaterials table in the AdventureWorks database. It represents a typical bill of materials, in which product assemblies form an acyclic directed graph. Each product can be assembled from other products, which in turn are assembled from other products, with no cyclic relationships. The assembly/component containment relationships are represented by the AssemblyID and ComponentID columns. PerAssemblyQty holds the quantity of the component product (identified by ComponentID) within each assembly product (identified by AssemblyID). An obsolete relationship has a date in the ObsoleteDate column; if you are interested only in current data, you should test whether this column is NULL. The table contains other useful information, including the unit of measure, but for our purposes all other columns can be ignored.

The code in Figure 8 generates the exploded-parts view data for ProductID 210; Figure 9 shows part of this view, which describes the containment relationships among products. Within the body of the CTE, the first query doesn't refer to the CTE's name, so it is the anchor member and is invoked only once. Notice that the query looks for the row with component ID 210 and assembly ID NULL, which means it is a top-level product. The query verifies that the relationship is not obsolete and returns the component ID and the quantity. The recursive member returns the products contained in the assemblies returned by the previous step, by joining the CTE's name with the BillOfMaterials table. The first time the recursive member is invoked, the previous step is the result returned by the anchor member; the second time, the previous step is the result of the first invocation of the recursive member; and so on, until the recursive member returns an empty set.

The recursive member calculates the cumulative quantity of a component by multiplying the quantity from the previous step by the component's quantity. The outer query refers to the CTE's name to get the unified result of all invocations of the anchor and recursive members. The outer query joins the CTE with the Product table to get the product names and generates the 90 rows shown in Figure 10 (shown truncated). Each component can appear multiple times in the output, such as product 835, because it can participate in different assemblies. You can revise the outer query to group the result by product ID and name to get the total quantity of each product. To do so, use the code from Figure 8, replacing the outer query with the following:

SELECT B.ProductID, P.Name,
   SUM (B.Qty) AS TotalQty
FROM BOM AS B  -- BOM: the CTE's name as defined in Figure 8 (name assumed)
  JOIN Product AS P
    ON P.ProductID = B.ProductID
GROUP BY B.ProductID, P.Name
ORDER BY B.ProductID;
If you suspect that cycles exist and want to limit the number of recursive invocations, you can specify the MAXRECURSION option right after the outer query:

WITH ...  -- CTE definition and outer query as before
OPTION (MAXRECURSION 10)  -- limit value illustrative
This option causes SQL Server to raise an error when the CTE exceeds the specified limit. If the option is not specified, SQL Server defaults to a limit of 100. If you don't want a limit, you must specify 0. Note that you can write your own code to detect cyclic relationships, but that is outside the scope of this article.


PIVOT operator

The new PIVOT operator in SQL Server Yukon allows you to write cross-tab queries that rotate rows into columns. The UNPIVOT operator does the opposite: it takes pivoted data and rotates columns into rows. Figure 11 shows the result of using the PIVOT operator against the AdventureWorks database when you want to return the yearly total sales order value for each salesperson, with each year's value in a different column.

An important thing to note when using the PIVOT operator is that you need to provide it with a query expression (using a view, derived table, or CTE) that returns only the columns of interest. The reason is that, behind the scenes, PIVOT performs an implicit GROUP BY on all columns that the operator doesn't explicitly reference. Here, the only columns you need are the salesperson ID, the order year, and the order value:

USE AdventureWorks

SELECT SOH.SalesPersonID,
  YEAR (SOH.OrderDate) AS OrderYear,
  SOD.OrderQty * SOD.UnitPrice AS OrderValue
FROM SalesOrderHeader AS SOH
  JOIN SalesOrderDetail AS SOD
    ON SOD.SalesOrderID = SOH.SalesOrderID
SQL Server determines the list of implicit "GROUP BY" columns as the columns of the input table that are not explicitly referenced by the PIVOT operator's aggregate function or IN clause. So, to avoid getting unwanted columns in the implicit GROUP BY list, you need to provide the PIVOT operator with an input table that contains only the columns needed for the aggregate function, the IN clause, and the implicit GROUP BY. You can achieve this by using a CTE or a derived table built from the previous query, which returns only the columns of interest.

The code in Figure 12 shows how to use that query within a CTE and have the outer query apply a PIVOT operation to the CTE's result. SUM(OrderValue) tells PIVOT which aggregate to calculate for the cells of the rotated columns. The FOR clause tells PIVOT which source column holds the values that are rotated into result columns. The IN clause lists the values that become the names of the result columns.

SQL Server requires you to explicitly list, in the IN clause, the values to be rotated into result columns. You cannot use a static query and let SQL Server figure out all the distinct values in OrderYear. To achieve that, you must construct the query string dynamically and use dynamic execution, as the code in Figure 13 shows.
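Figure 13 is not reproduced in this copy. A sketch of the dynamic approach (not necessarily the figure's exact code; the variable names are my own) might look like this:

```sql
DECLARE @cols AS NVARCHAR(1000), @sql AS NVARCHAR(2000)

-- Build the IN list from the distinct order years, e.g. N'[2001],[2002],...'
SET @cols = N''
SELECT @cols = @cols + N',[' + CAST(OrderYear AS NVARCHAR(4)) + N']'
FROM (SELECT DISTINCT YEAR(OrderDate) AS OrderYear
      FROM SalesOrderHeader) AS Y
SET @cols = SUBSTRING(@cols, 2, LEN(@cols))  -- strip the leading comma

-- Construct the PIVOT query with the dynamic column list and execute it
SET @sql = N'
SELECT *
FROM (SELECT SOH.SalesPersonID,
             YEAR(SOH.OrderDate) AS OrderYear,
             SOD.OrderQty * SOD.UnitPrice AS OrderValue
      FROM SalesOrderHeader AS SOH
        JOIN SalesOrderDetail AS SOD
          ON SOD.SalesOrderID = SOH.SalesOrderID) AS D
  PIVOT(SUM(OrderValue) FOR OrderYear IN (' + @cols + N')) AS P'

EXEC sp_executesql @sql
```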

To see the effect of the UNPIVOT operator, first create the SalesPivoted table by running the query in Figure 12 with "INTO SalesPivoted" added before the FROM clause (see Figure 14). The arguments of the UNPIVOT operator are very similar to PIVOT's, but you specify the name of a result column that will hold the values of all the rotated cells in a single column. After the FOR clause, you specify the name of the result column that will hold the names of the rotated columns as column values. In the parentheses after the IN clause, you specify the list of rotated columns that you want to unpivot:

SELECT *
FROM SalesPivoted
  UNPIVOT (OrderValue
    FOR OrderYear IN ([2001], [2002], [2003], [2004])) AS U
UNPIVOT does not return rows for cells that contain NULL values. To clean up the extra tables and indexes I created in the database, run the following code:

DROP INDEX SalesOrderHeader.idx_nc_OrderDate
DROP TABLE SalesPivoted

Triggers and notifications

SQL Server Yukon Beta 1 introduces support for Data Definition Language (DDL) triggers, which allow you to trap and react to DDL operations and optionally roll them back. DDL triggers fire synchronously, immediately after the triggering event, similar to the way triggers work in earlier versions of SQL Server. SQL Server also supports an asynchronous event-consumption mechanism using event notifications, which lets you subscribe to be notified when certain events occur.

The following trigger, created at the database level, traps DROP TABLE attempts:

CREATE TRIGGER prevent_drop_table ON DATABASE FOR DROP_TABLE
AS
RAISERROR ('Not allowed to drop tables.', 10, 1)

-- For debugging
PRINT 'DROP TABLE attempt in database ' + DB_NAME () + '.'
PRINT EventData ()

ROLLBACK
You can define triggers to fire on specific DDL events, such as CREATE_TABLE, DROP_TABLE, ALTER_TABLE, CREATE_VIEW, and so on, or you can specify DDL_DATABASE_LEVEL_EVENTS if you want the trigger to fire on all DDL events in the database. Within the trigger, you can call the EventData function, which returns information about the process and the operation that fired the trigger. You can investigate the XML returned by the function and react accordingly.

To test the trigger, first create the table TestDrop and insert a row into it by running the following code:
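The code block is missing from this copy; based on the description, it should be along these lines (the column definition is assumed):

```sql
CREATE TABLE TestDrop (col1 INT NOT NULL)
INSERT INTO TestDrop VALUES(1)
```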

Next, try to drop the table:
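This snippet is also missing; per the description it is simply:

```sql
DROP TABLE TestDrop
```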

The DROP attempt is trapped, and a message is printed saying that you are not allowed to drop tables. In addition, the value returned by the EventData function is printed in XML form for debugging purposes. (In practice, you can investigate the XML data within the trigger. It contains a lot of useful information, from which you can determine the course of action that best fits your needs. For example, you can prevent certain tables from being dropped at certain times of day.) The trigger rolls back the operation, so the table is not dropped from the database. To drop the trigger, issue the following statement:

DROP TRIGGER prevent_drop_table ON DATABASE
You can also create triggers that trap server-level events. For example, the following trigger traps login-manipulation events, such as creating, altering, or dropping a login:

CREATE TRIGGER audit_ddl_logins ON ALL SERVER FOR DDL_LOGIN_EVENTS  -- trigger name assumed
AS
PRINT 'DDL LOGIN took place.'
PRINT EventData ()
This trigger simply prints a notification that the event took place, along with the event details. Of course, you can investigate the event details and react as you see fit. To test the trigger, run the following code and examine the results:
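The test code is missing from this copy. Any login DDL fires the trigger; for example (the login name and password are placeholders, and the CREATE LOGIN syntax is assumed for this build):

```sql
CREATE LOGIN login1 WITH PASSWORD = 'Pass.1'
DROP LOGIN login1
```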

The code generates DDL login events, and the event data is printed in XML form. You can examine the event data and extract the information you find most relevant.

If you want to remove the trigger, run the following code:
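The missing cleanup statement mirrors the database-level one (the trigger name here is assumed):

```sql
DROP TRIGGER audit_ddl_logins ON ALL SERVER
```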



The T-SQL enhancements and new features in Yukon allow you to manipulate data more efficiently, develop applications more easily, and improve your error handling. For data manipulation, T-SQL remains your best choice in SQL Server, and now you have an even richer development environment. To make it easier to experiment with the new feature set, all the examples described in this article can be downloaded from the link at the beginning of this article.
