Factors that affect SQL Server lock application and release
1. Transaction isolation level selection
The higher the transaction isolation level, the stronger the isolation guarantees and the lower the concurrency. If a relatively high isolation level is chosen, SQL Server will inevitably acquire more locks and hold them longer.
Therefore, when designing an application, discuss the requirements with the users and, whenever possible, use the default isolation level (READ COMMITTED).
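As a sketch of the difference, the isolation level can be set per session; the default READ COMMITTED releases shared locks as soon as each row has been read, while SERIALIZABLE holds key-range locks until the transaction commits (the table and key below are illustrative, not from the original text):

```sql
-- Default level: shared locks are released as each row is read.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;
    SELECT OrderDate, Status
    FROM dbo.Orders          -- hypothetical table
    WHERE OrderID = 42;
COMMIT TRANSACTION;

-- Stricter level: key-range locks are held until COMMIT,
-- blocking concurrent writers for the whole transaction.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```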
2. Transaction length and complexity
The length and complexity of a transaction determine how long it stays open inside SQL Server, and how many tables and indexes it must lock at the same time. The simpler and shorter the transaction, the less prone it is to blocking and deadlocks. Therefore, discuss this with the users and avoid doing too many things in a single transaction.
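A minimal sketch of the principle, using a hypothetical transfer between two accounts: any slow work placed between BEGIN TRANSACTION and COMMIT keeps the row locks held for its entire duration, so the transaction should wrap only the statements that must be atomic:

```sql
-- Anti-pattern: long-running logic inside the transaction keeps
-- both account rows locked for the whole duration.
BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    -- ... slow application logic here would hold the locks ...
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
COMMIT TRANSACTION;

-- Better: do all validation and computation first, then open the
-- transaction only around the two UPDATEs, so locks are held briefly.
```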
3. Considering the overall concurrency of an application, a single transaction should not process too much data at a time
The performance of an application is measured not only by the processing speed of a single connection, but also by the overall average throughput of concurrent processing. From the viewpoint of a single connection, it may be faster to process a large batch of data in one pass. However, while that large volume of data is being processed, other connections are blocked from accessing the same objects. Therefore, if an application has high concurrency requirements, the amount of data processed by a single transaction must be strictly controlled. If an operation needs to read or modify a large amount of data in a table, it is best to schedule it for a time when few users are connected.
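One common way to control the data volume per transaction is to break a large modification into small batches, so each batch commits quickly and releases its locks. A sketch, assuming a hypothetical AuditLog table being purged of old rows:

```sql
-- Delete old rows in batches of 5000 instead of one huge DELETE,
-- so each implicit transaction holds its locks only briefly and is
-- far less likely to escalate to a long-lived table lock.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE LogDate < DATEADD(year, -1, GETDATE());

    SET @rows = @@ROWCOUNT;   -- loop ends when no rows remain
END;
```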
4. Design appropriate indexes for the statements run against each table
Appropriate indexes let SQL Server locate the target rows while reading as little data as possible. Without a suitable index, a SELECT, UPDATE, or DELETE statement must lock far more data than it actually needs to touch, which leads to blocking or deadlocks. In such cases, adding an index can improve concurrency. At the same time, every UPDATE, INSERT, and DELETE also acquires locks on all associated indexes. From this perspective, the more indexes a table has, the more locks are generated, and the higher the chance of blocking and deadlock.
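To illustrate the first half of this trade-off (all object names below are hypothetical): without a supporting index, an UPDATE filtered on a non-key column must scan and lock far more rows than it modifies, while a narrow index lets it seek directly to the targets:

```sql
-- Without an index on CustomerID, this UPDATE scans dbo.Orders and
-- may lock every row it examines (or escalate to a table lock):
UPDATE dbo.Orders
SET Status = 'Shipped'
WHERE CustomerID = 123;

-- A narrow nonclustered index lets the statement seek straight to the
-- matching rows, locking only the data it actually changes:
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);
```

The second half of the trade-off is the cost: this new index is itself locked and maintained by every subsequent INSERT, UPDATE, and DELETE on the table.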
Summary:
The database designer must ensure there are enough indexes to keep statements from scanning entire tables, while also removing indexes that contribute little to the workload. Indexes cannot be added to a table without limit.