SQL SERVER query performance optimization-analysis of transactions and locks (5)


SQL SERVER query performance optimization-analysis of transactions and locks (1)

SQL SERVER query performance optimization-analysis of transactions and locks (2)

SQL SERVER query performance optimization-analysis of transactions and locks (3)

 

SQL SERVER query performance optimization-analysis of transactions and locks (4)

 

(4) Undetected distributed deadlocks

An application starts a transaction, holds database resources, and then interacts with the user. An error occurs during that interaction, so the release of the database resources is delayed. In SQL SERVER 2005/2008, the dynamic management view sys.dm_exec_requests provides the relevant information: the status field for that SESSION_ID is "sleeping" and wait_type is "NULL". In SQL 2005 you can also check the "Activity Monitor" -> "Process Info" view in Microsoft SQL Server Management Studio, where the "Open Transactions" field of the process shows a non-zero value.
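As a minimal sketch (for SQL SERVER 2005/2008; joining to sys.dm_tran_session_transactions is one way to detect the open transaction), the following query lists sessions that are idle yet still hold an open transaction:

-- Sessions that are "sleeping" but still hold an open transaction:
-- candidates for the problem described above.
SELECT s.session_id,
       s.status,
       s.host_name,
       s.login_name,
       s.last_request_end_time
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_tran_session_transactions AS t
     ON t.session_id = s.session_id
WHERE s.status = 'sleeping';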

 

 

In SQL 2005 (2008), run "Example 1" from SQL SERVER query performance optimization-analysis of transactions and locks (2), and then use the following query to find the session that blocks others:

Select spid AS [process],
    status AS [status],
    [logon account] = SUBSTRING(SUSER_SNAME(sid), 1, 30),   -- display widths are arbitrary
    [user machine name] = SUBSTRING(hostname, 1, 12),
    [blocked] = CONVERT(char(3), blocked),
    [database name] = SUBSTRING(DB_NAME(dbid), 1, 10),
    cmd AS [command],
    waittype AS [wait type],
    last_batch AS [last batch time],
    open_tran AS [open transactions]
FROM master.sys.sysprocesses
-- list sessions that block others (their spid appears in another process's blocked field)
-- but are not blocked themselves (blocked = 0)
WHERE spid IN (SELECT blocked FROM master.sys.sysprocesses)
  AND blocked = 0

Because the application holds the transaction and does not clean it up after the error occurs, no session is waiting on any resource, yet the transaction is still held. This resembles case (3) above; however, if you trace with the SQL PROFILER tool, no error events can be found.

Recommended Solution

A distributed deadlock caused by the application is difficult to trace and analyze; the program developers have to log the application's behavior themselves. When there are many users, the system slows down and cannot execute normally. Program developers should therefore maintain good habits: start the transaction as late as possible and hold as few resources as possible; once a transaction is started, close it as soon as possible; and never interact with the user while the transaction is open. Collect the input parameters and content before starting the transaction, and perform all checks on the relevant data before the transaction starts. Open the transaction only when the inserted or updated data is actually written to the database, and close it immediately afterwards.
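A minimal sketch of this habit, using a hypothetical table dbo.Orders and hypothetical parameters: validate first, then open the transaction only around the actual write.

DECLARE @OrderID int, @Qty int
SELECT @OrderID = 1001, @Qty = 5        -- hypothetical input, collected before the transaction

IF @Qty <= 0                            -- all checks happen before begin tran
BEGIN
    RAISERROR('Invalid quantity', 16, 1)
    RETURN
END

BEGIN TRY
    BEGIN TRAN                          -- opened as late as possible
    UPDATE dbo.Orders SET Quantity = @Qty WHERE OrderID = @OrderID
    COMMIT TRAN                         -- closed immediately; no user interaction in between
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN    -- never leave the transaction open after an error
END CATCH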

 

 

(5) The granularity of locked data is too fine or too coarse

Setting an improper lock granularity, for example forcing transactions to always use row locks or always use table locks, can either consume too many system resources or make blocking very likely.

Recommended Solution

You can use SQL profiler to observe the SQL statements shown in the "TextData" field and check whether the application sets lock hints. If you want to disable lock hints temporarily, you can use the following statement:

DBCC TRACEON (8755)

Alternatively, use the SQL SERVER startup parameter -T 8755 to disable lock hints. If things improve, you can consider removing the lock hints from the application.
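For reference, this is what a statement carrying lock hints might look like in the traced TextData field (the table and column names are hypothetical):

SELECT * FROM dbo.Orders WITH (TABLOCKX)    -- lock hint forcing an exclusive table lock
UPDATE dbo.Orders WITH (ROWLOCK)            -- lock hint forcing row-level locks
    SET Quantity = 0 WHERE OrderID = 1001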

 

(6) Compile Blocking

This phenomenon is caused by blocking during stored procedure compilation. In the master.sys.sysprocesses view or the output of the sp_lock stored procedure, the wait-resource field contains "COMPILE", or a large number of "SP:Recompile" events appear in a SQL PROFILER trace. Because recompilation consumes CPU, the blocking forms a long chain of blocked connections: each individual block is short, but the whole chain is time-consuming, so the session at the end of the chain waits a long time, and CPU usage is high at the same time.
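A minimal sketch of the check described above, looking for sessions whose wait resource contains "COMPILE":

SELECT spid, blocked, waitresource, cmd
FROM master.sys.sysprocesses
WHERE waitresource LIKE '%COMPILE%'   -- sessions blocked on stored procedure compilation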

If a stored procedure uses a temporary table and changes the temporary table's structure, sets a primary key on it, or opens a cursor over it, every call to the stored procedure requires recompilation. If, in addition, the stored procedure is a popular one that the application calls frequently, Compile Blocking may occur.

However, a stored procedure must also be compiled the first time it is executed, so a wait on compilation alone does not necessarily indicate Compile Blocking.

Recommended Solution

Execute the statement with sp_executesql. A SQL statement run through sp_executesql is not compiled as part of the stored procedure's execution plan, so when executing such a statement SQL SERVER is free to use an existing plan from the cache or to create a new plan at execution time. In either case, the plan of the calling stored procedure is unaffected and does not need to be recompiled.
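A minimal sketch of the sp_executesql pattern, reusing the sample table that appears later in this article; the statement text and parameter are illustrative:

DECLARE @sql nvarchar(200)
SET @sql = N'SELECT OPINION_VALUE FROM WBK_OPINION WHERE OPINION_ID = @id'
-- the parameterized statement gets its own cached plan,
-- independent of the calling stored procedure's plan
EXEC sp_executesql @sql, N'@id varchar(20)', @id = 'preentryiduse'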

The EXECUTE statement has a similar effect, but we do not recommend it: it is less efficient than sp_executesql because EXECUTE does not allow query parameterization.

 

3. Basic Principles:

1. Do not let a transaction span batches. The shorter the transaction, the better, and do not interact with users during the transaction.

2. Handle with care situations such as the user canceling midway through an update or a statement failing during execution.

3. Create indexes correctly; refer to my previous articles.

(For example, SQL Server query performance optimization-index creation principle (1)

SQL Server query performance optimization-Covering Index (1) and other articles)

4. It is recommended that data tables have a clustered index, and the clustered index key should not be too large, because every non-clustered index stores the clustered index key. Do not use fields that are frequently updated as the clustered index key: once the clustered key changes, all non-clustered indexes must change with it, producing a large number of locks. Too few indexes hurts query efficiency, while too many wastes maintenance resources and slows down inserts, updates, and deletes. Therefore, after creating indexes, check whether SQL SERVER actually uses them and drop the unnecessary ones. Do not create indexes on fields with high data density (low selectivity) or that are rarely useful as query conditions.

5. Try not to enable Implicit Transactions, to prevent transactions from being held for a long time (see the sketch after this list).

6. Keep the transaction isolation level as low as the business allows (see the sketch after this list).

7. Conduct stress tests to understand the degree of blocking caused by user interactions when there are large numbers of users.
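A minimal sketch for principles 5 and 6 above:

SET IMPLICIT_TRANSACTIONS OFF                   -- principle 5: each statement auto-commits
                                                -- unless you begin a transaction explicitly
SET TRANSACTION ISOLATION LEVEL READ COMMITTED  -- principle 6: the lowest level that still meets
                                                -- the business need (READ COMMITTED is the
                                                -- SQL SERVER default)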

 

4. Prevent and handle deadlocks

1. Try to avoid blocking, or resolve it as quickly as possible; when too much blocking accumulates, deadlocks become likely.

2. Access resources in the same order. For example, if connection A accesses resource 1 and then resource 2 while connection B accesses them in the opposite order, a deadlock may occur. Do not call external programs after starting a transaction, as this may lead to distributed deadlocks.

3. Make different connections share the same lock space when they block each other because they modify the same resources. If your system has no strict requirement on the correctness of the updated data, you can use the sp_getbindtoken and sp_bindsession stored procedures to bind the connections into the same transaction so that they share locks. Be aware that when two bound connections update the same data at the same time, updates may be lost.

Example:

Use Test
go
create proc sp_upd_OPINION @OPINIONID varchar(20), @bindToken varchar(255) output
as
exec sp_getbindtoken @bindToken output
update WBK_OPINION set OPINION_VALUE = 'true' where OPINION_ID = @OPINIONID
go
create proc sp_upd_opinion2 @OPINIONID varchar(20), @bindSession varchar(255)
as
exec sp_bindsession @bindSession
update WBK_OPINION set OPINION_VALUE = 'false' where OPINION_ID = @OPINIONID
go

---- run in the first connection
declare @bindToken varchar(255)
begin tran
exec sp_upd_OPINION 'preentryiduse', @bindToken output
select * from WBK_OPINION
select @@trancount -- the number of open transactions
select @bindToken  -- the token to pass to the second connection

---- run in the second connection
---- @bindToken is the token value obtained after the first connection executed
declare @bindToken varchar(255)
set @bindToken = '...'   -- paste the token printed by the first connection
begin tran
exec sp_upd_opinion2 'preentryiduse', @bindToken
select * from WBK_OPINION
select @@trancount -- the number of open transactions

--- run in the third connection: because it is not in the same transaction, this update is blocked
update WBK_OPINION set OPINION_VALUE = 'true' where OPINION_ID = 'preentryiduse'

--- finally, roll back in the bound connections to release the locks
rollback tran --- rollback

Based on the code above, you can build a test: use three connections in Management studio to execute the update sample code. You will find that the two connections sharing the same TOKEN update together and report the same @@TRANCOUNT value, while a connection that is not in the same transaction is blocked, and its @@TRANCOUNT is unrelated to the aforementioned transaction.

4. Use different data access paths. If two SQL statements on different connections deadlock because they contend on the same index, consider creating a separate index for each access statement and using index hints to force each connection onto its own index. Conversely, if two connections access the same data table through different indexes and their staggered access order forms a deadlock, you can force both connections to use the same index so that they keep the same access order.

In either case, weigh the extra performance cost of this approach: when you force index access through an index hint, the query optimizer can no longer choose the best index based on the characteristics of the data.
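A sketch of an index hint, reusing the sample table above; the index name IX_WBK_OPINION_ID is hypothetical:

SELECT OPINION_VALUE
FROM WBK_OPINION WITH (INDEX (IX_WBK_OPINION_ID))  -- force this connection onto one index
WHERE OPINION_ID = 'preentryiduse'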

 

5. Handling of deadlocks

Use SET DEADLOCK_PRIORITY LOW on connections that run unimportant transactions so that they are automatically sacrificed as the deadlock victim, and add deadlock error handling to the business logic executed by these connections.

In fact, in a very complex, highly concurrent system it is very difficult to prevent deadlocks completely, or to know in advance which users will hit a deadlock through some particular access order. Therefore, the application should handle deadlock error 1205, either retrying to complete the original business logic or cleaning up after the failure.
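A minimal sketch of handling error 1205 with a retry loop, reusing the sample table above; the retry count of 3 is an arbitrary choice:

DECLARE @retry int
SET @retry = 3
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN
        UPDATE WBK_OPINION SET OPINION_VALUE = 'true'
            WHERE OPINION_ID = 'preentryiduse'
        COMMIT TRAN
        BREAK                            -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN
        IF ERROR_NUMBER() = 1205         -- this session was chosen as the deadlock victim
            SET @retry = @retry - 1      -- retry the original business logic
        ELSE
            SET @retry = 0               -- other errors: stop and handle/log elsewhere
    END CATCH
END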

