SQL Server transactions and locks, SQL Server


I. Overview

Among database topics, transactions and locks are a major difficulty for non-DBA programmers. This article attempts to work through that difficulty with explanations and examples.

This topic is split into two parts. This first part focuses on transactions and transaction consistency issues, and briefly touches on lock types and lock granularity.

The second part will discuss SQL Server's locking mechanism, lock granularity, and deadlocks.

II. Transactions

1. What is a transaction?

Surveying many books, different authors define transactions slightly differently, but the definitions are broadly consistent. Abstracting them:

Transaction: a single unit of work that encapsulates and executes one or more operations. In SQL Server, transactions are defined in two ways: explicitly and implicitly.

Based on this definition, we can break a transaction down into the following points:

(1) A transaction is a single unit of work. This definition gives the transaction its ACID properties.

(2) Transactions encapsulate operations, for example the basic CRUD operations:

-- A transaction encapsulating basic CRUD operations
BEGIN TRAN
    SELECT * FROM UserInfo
    INSERT INTO UserInfo VALUES ('Alan_beijing', 35)
    UPDATE UserInfo SET Age = 31 WHERE UserName = 'Alan_beijing'
    DELETE UserInfo WHERE UserName = 'Alan_beijing'
COMMIT TRAN

(3) A transaction can encapsulate a single operation or multiple operations (when it encapsulates multiple operations, note the distinction from a batch).

(4) In SQL Server, transaction definitions are classified as explicit and implicit.

Explicit definition: the transaction starts with BEGIN TRAN, commits with COMMIT TRAN, and rolls back with ROLLBACK TRAN. For example, we can insert two rows inside one transaction:

-- Explicitly defined transaction
BEGIN TRAN
    INSERT INTO UserInfo VALUES ('Alan_Shanghai', 30)
    INSERT INTO UserInfo VALUES ('Alan_beijing', 35)
COMMIT TRAN

Implicit definition: if no transaction is explicitly defined, SQL Server by default treats each statement as its own transaction, automatically committing it after the statement executes (autocommit mode).
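The difference can be observed with @@TRANCOUNT, which reports how many explicit transactions are open on the current connection; a minimal sketch:

```sql
SELECT @@TRANCOUNT AS OpenTrans;  -- 0: autocommit mode, no explicit transaction
BEGIN TRAN;
SELECT @@TRANCOUNT AS OpenTrans;  -- 1: an explicit transaction is open
COMMIT TRAN;
SELECT @@TRANCOUNT AS OpenTrans;  -- 0 again after the commit
```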

2 ACID properties of transactions

A transaction is a single unit of work, and this definition gives it the ACID properties: Atomicity, Consistency, Isolation, and Durability.

(1) Atomicity

Atomicity means a transaction must be an atomic unit of work: the operations it encapsulates either all take effect or none do. In the following cases, the transaction is canceled or rolled back:

A. If the system fails or restarts before the transaction is committed, SQL Server cancels all operations performed in the transaction;

B. If an error occurs during transaction processing, SQL Server usually rolls the transaction back automatically, with a few exceptions;

C. Some minor errors, such as primary key violations or lock timeouts, do not trigger an automatic rollback;

D. You can use error-handling code to catch certain errors and respond accordingly, for example by logging the error and rolling back the transaction;
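The behavior described in points B through D can be sketched with TRY/CATCH; the table and the deliberately failing second insert are illustrative:

```sql
BEGIN TRY
    BEGIN TRAN;
        INSERT INTO UserInfo VALUES ('Alan_beijing', 35);
        INSERT INTO UserInfo VALUES ('Alan_beijing', 35);  -- e.g. a primary key violation
    COMMIT TRAN;
END TRY
BEGIN CATCH
    -- Log the error, then roll back if the transaction is still open
    PRINT ERROR_MESSAGE();
    IF XACT_STATE() <> 0
        ROLLBACK TRAN;
END CATCH
```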

(2) Consistency

Consistency mainly refers to data consistency; its main object is the data. From a macro perspective, the data must be consistent over a period of time; from a micro perspective, it must be consistent at each point in time. For example:

Suppose two transactions A and B operate on the same table: A writes data to the table while B reads from it. Coarsely, the data B reads has three possibilities:

The first possibility: A has not yet written any data to the table;

The second possibility: A has written part of the data but has not finished;

The third possibility: A has finished writing its data to the table;

The second possibility is where inconsistency arises: B observes a partial, intermediate state of A's work.

Transaction consistency problems include lost updates, dirty reads, non-repeatable reads, and phantom reads. These are described in detail below.

(3) Isolation

Isolation refers to controlling the boundaries of each transaction, and the data-access mechanism applied when two or more transactions operate on the same resource, so that each transaction only accesses data at its expected consistency level.

In SQL Server this is generally controlled through the locking mechanism, discussed in detail below.

(4) Durability

When we operate a data table, we generally perform the following two steps in sequence:

Step 1: write the operations on the data table to the database's transaction log on disk (persist to the transaction log on disk);

Step 2: after step 1 completes, write the operations on the data table to the database's data files on disk (persist to the data files on disk);

For the above two steps, let's think about the possible problems:

Question 1: what does the database engine do if a system failure (such as a crash or restart) occurs before step 1 completes?

Because step 1 has not completed, the commit record has not been written to the transaction log on disk, so the transaction is not durable. After the failure, SQL Server runs recovery on each database's transaction log (recovery generally has a redo phase and an undo phase). Since no commit record exists in the log, the undo phase applies: the database engine cancels all modifications made by such transactions, a process called rolling back.

Question 2: what does the database engine do if the system fails after step 1 completes but before step 2 completes?

Because step 1 completed, the commit record is in the transaction log on disk, so the transaction is durable regardless of whether its data changes reached the data files. After the failure, SQL Server runs recovery on each database's transaction log (redo and undo phases). Since the data modifications have not yet been applied to the data files, the redo phase applies: the database engine redoes all modifications made by these transactions, a process called rolling forward.

III. Transaction isolation levels and the consistency issues they address

1. Read uncommitted (READ UNCOMMITTED)

Read uncommitted means reading data that has not yet been committed. The inconsistency this produces is called a dirty read.

1.1 Why does read uncommitted result in dirty reads?

Read uncommitted is the lowest isolation level. For transactions running at this level, read operations do not acquire a shared lock, so they do not conflict with transactions holding an exclusive lock. That is, at this isolation level reads and writes can run concurrently without blocking each other, and a read can see uncommitted modifications made by a write. The resulting inconsistency is what we call a dirty read.

1.2 Illustration of a dirty read

1.3 SQL demonstration of a dirty read
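A minimal two-session sketch of a dirty read, assuming the illustrative UserInfo table from the earlier examples (run the two connections side by side):

```sql
-- Connection 1: modify data but do not commit yet
BEGIN TRAN;
UPDATE UserInfo SET Age = 99 WHERE UserName = 'Alan_beijing';
-- ... transaction left open ...

-- Connection 2: read the uncommitted change (dirty read)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Age FROM UserInfo WHERE UserName = 'Alan_beijing';  -- sees the uncommitted 99

-- Connection 1: roll back, so the value Connection 2 read never officially existed
ROLLBACK TRAN;
```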

2. Read committed (READ COMMITTED)

Read committed means reading only data from committed transactions. It is the lowest isolation level that prevents dirty reads, and it is SQL Server's default isolation level. It requires a read operation to acquire a shared lock before it can proceed, which prevents reading uncommitted data. Although read committed prevents dirty reads, it cannot prevent non-repeatable reads.

2.1 Why does read committed prevent dirty reads?

Read committed only allows reading data that a transaction has committed. A read operation must acquire a shared lock before proceeding, and a read's shared lock is incompatible with a write's exclusive lock. Because the two exclude each other, a read must wait for the write to complete before it can acquire the shared lock and read the data. The data it then reads is committed, which prevents dirty reads.

2.2 SQL demonstration of read committed
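A sketch of the blocking behavior under the default read committed level, again using the illustrative UserInfo table in two sessions:

```sql
-- Connection 1: hold an exclusive lock by updating without committing
BEGIN TRAN;
UPDATE UserInfo SET Age = 99 WHERE UserName = 'Alan_beijing';

-- Connection 2 (default READ COMMITTED): this SELECT blocks,
-- waiting for Connection 1's exclusive lock to be released
SELECT Age FROM UserInfo WHERE UserName = 'Alan_beijing';

-- Connection 1: once this commits, Connection 2 unblocks and returns committed data
COMMIT TRAN;
```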

2.3 Why does read committed allow non-repeatable reads?

Although a read under read committed acquires a shared lock, that lock is released as soon as the read operation completes; it is not held for the duration of the transaction. This creates a problem: between two reads of the same resource within one transaction, the resource is not locked, so another transaction can change the data in the interval. The two reads may therefore return different values, which is a non-repeatable read.

2.4 Illustration of a non-repeatable read

3. Repeatable read (REPEATABLE READ)

To prevent non-repeatable reads, SQL Server raises the isolation level from read committed to repeatable read. At the repeatable read level, a read operation not only acquires a shared lock but also holds it until the transaction completes. Until then, no other transaction can acquire an exclusive lock to modify the data, which makes reads repeatable and prevents the inconsistency of non-repeatable reads. Repeatable read also prevents lost updates. However, it introduces problems of its own, such as deadlocks and phantom reads.

3.1 SQL demonstration of repeatable read
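A sketch of repeatable read, with the shared lock held to transaction end (illustrative table as before, two sessions side by side):

```sql
-- Connection 1: read under REPEATABLE READ and keep the shared lock
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;
SELECT Age FROM UserInfo WHERE UserName = 'Alan_beijing';

-- Connection 2: this UPDATE blocks until Connection 1 commits,
-- because the row's shared lock is still held
UPDATE UserInfo SET Age = 99 WHERE UserName = 'Alan_beijing';

-- Connection 1: the second read returns the same value as the first
SELECT Age FROM UserInfo WHERE UserName = 'Alan_beijing';
COMMIT TRAN;
```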

3.2 What is a lost update?

At isolation levels below repeatable read, neither transaction holds any lock on a resource after reading it. Both transactions can then update the value, and the later update overwrites the earlier one, losing that change. This is called a lost update.

3.3 Illustration of a lost update

4. Serializable (SERIALIZABLE)

4.1 What is a phantom read?

At the repeatable read level, the shared locks held by a reading transaction are kept until the transaction completes, but the transaction only locks the data resources its first read actually found (for example, rows); it does not lock other rows outside the query's result range (lock granularity can be at the database, table, page, or row level). Therefore, if another transaction inserts new rows before the second read in the same transaction, and those new rows satisfy the read's filter condition, they will appear in the results of the second read. Such rows are called phantoms, and the phenomenon is called a phantom read.

4.2 Illustration of a phantom read

4.3 How to prevent phantom reads?

In SQL Server, the higher SERIALIZABLE isolation level solves this problem.

4.4 What is SERIALIZABLE?

In most respects the serializable isolation level behaves like repeatable read, but it adds one thing: logically, a read operation locks the entire range of keys that satisfy the query's search condition. This means the read locks not only the existing rows that match the condition but also rows that might match it in the future, so no other transaction can insert phantoms into that range.
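A sketch of the key-range locking behavior, assuming the illustrative UserInfo table with an index on Age (key-range locks are taken on index keys):

```sql
-- Connection 1: read a range under SERIALIZABLE
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT * FROM UserInfo WHERE Age BETWEEN 30 AND 40;

-- Connection 2: this INSERT blocks, because the new row would fall
-- inside the key range locked by Connection 1's query
INSERT INTO UserInfo VALUES ('Alan_guangzhou', 35);

-- Connection 1: a second read of the range returns no phantoms
SELECT * FROM UserInfo WHERE Age BETWEEN 30 AND 40;
COMMIT TRAN;
```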

5. SNAPSHOT

Snapshot isolation is based on row versioning rather than locking: readers see a consistent snapshot of committed data, so writers do not block readers. Row versioning is discussed further in the concurrency control models below.

IV. Summary of transaction isolation levels

The following table summarizes which consistency problems each isolation level allows, whether it detects update conflicts, and whether it uses row versioning:

Isolation level     Dirty read  Non-repeatable read  Phantom read  Detects update conflicts  Row versioning
READ UNCOMMITTED    Yes         Yes                  Yes           No                        No
READ COMMITTED      No          Yes                  Yes           No                        No
REPEATABLE READ     No          No                   Yes           No                        No
SERIALIZABLE        No          No                   No            No                        No
SNAPSHOT            No          No                   No            Yes                       Yes

V. Locks

1. Two concurrency control models

There are two main concurrency control models: the pessimistic model and the optimistic model.

(1) Pessimistic control model: this model assumes that multiple transactions will frequently operate on the same resource (read/write), that is, it assumes conflicts will occur. In SQL Server this is handled through transaction isolation levels (that is, lock-based control). The transaction is controlled before a conflict occurs, so this is also called up-front control;

(2) Optimistic control model: this model makes the opposite assumption, namely that transactions in the system rarely or never operate on the same resource (read/write), that is, conflicts rarely occur. In SQL Server this is handled through row versioning. Transactions are controlled after a conflict occurs, so this is also called after-the-fact control;

2. What is locking, and what types of locks are there?

2.1 What is locking?

Locking is a mechanism used to ensure data consistency during concurrent operations. In SQL Server, the lock mechanism works together with transaction isolation levels to control data consistency.

2.2 Lock types

Four types of locks are commonly used: shared locks, intent locks, update locks, and exclusive locks.

(1) Shared lock: in SQL Server, a transaction must acquire a shared lock before it reads data.

(2) Intent lock: in SQL Server, an intent lock is not an independent lock; its main purpose is to signal, at a coarser granularity (for example table or page), that finer-grained locks (for example row locks) are held or about to be taken underneath.

(3) Update lock: in SQL Server, an update lock is not an independent lock either, but a hybrid of a shared lock and an exclusive lock. It is stronger than a shared lock but weaker than an exclusive lock, and it can prevent the deadlocks that arise when two transactions both try to convert a shared lock into an exclusive lock.

(4) Exclusive lock: in SQL Server, a transaction must acquire an exclusive lock before it inserts, updates, or deletes data.
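The update lock's role can be sketched with the UPDLOCK table hint in a read-then-write pattern (illustrative table as before):

```sql
-- Without UPDLOCK, two transactions could both take shared locks on the row
-- and then deadlock when each tries to convert its lock to an exclusive lock.
BEGIN TRAN;
SELECT Age FROM UserInfo WITH (UPDLOCK)
WHERE UserName = 'Alan_beijing';

-- Only one transaction at a time can hold the update lock, so the
-- subsequent write converts it to an exclusive lock without deadlocking.
UPDATE UserInfo SET Age = Age + 1 WHERE UserName = 'Alan_beijing';
COMMIT TRAN;
```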

3. Lock granularity

In SQL Server, locks can be taken at the table, page, row, and other granularities.
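Currently held locks and their granularity can be inspected through a dynamic management view; a minimal sketch:

```sql
-- Show the resource type (OBJECT, PAGE, KEY, RID, ...), lock mode
-- (S, U, X, IS, IX, ...), and status of locks in the current database
SELECT resource_type, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
```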

Summary

The above is an introduction to SQL Server transactions and locks. I hope it helps; if you have any questions, please leave a message and I will reply as soon as possible. Thank you for your support!
