Transactions and locks in SQL Server, explained in detail

Source: Internet
Author: User
Tags: commit, rollback

A few days ago I read "SQL Server 2005 Advanced Program Design" and "Introduction to SQL Server 2008 Programming Classic (3rd Edition)", both Chinese translations. I found that their tables of contents are roughly the same and the explanations are almost identical; there is a suspicion of plagiarism. Looking at the "Transactions and locks" chapter in each, I found that even the small examples and tables are identical. Ha ha... I don't want to comment too much on books like this. Frankly, most of the translations published here read rather stiffly, and most of the "original works" are plagiarized or full of empty talk. As far as the Microsoft technology stack is concerned, if copying material straight from MSDN or Books Online, pasting in a few pages of code, and publishing it for sale counts as writing a book, then maybe I could write one too, because that really is not much of a standard.

Of course, there are fine works, but they are few and hard to find. Well, let's get down to business: transactions and locks, which are probably among the harder things in databases to understand.


I. Dirty reads, non-repeatable reads, and phantom reads

(1) Dirty read: a dirty read occurs when one transaction is accessing data and has modified it but not yet committed the modification, and another transaction then reads and uses that uncommitted data.

For example:

John's salary is 5000. Transaction A changes his salary to 8000, but transaction A has not yet committed.

Meanwhile

Transaction B reads John's salary and gets 8000.

Then

Transaction A encounters an exception and is rolled back; John's salary rolls back to 5000.

Finally

The 8000 that transaction B read for John is dirty data: transaction B has performed a dirty read.
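As a minimal T-SQL sketch of this scenario, assuming a hypothetical Employee table with Name and Salary columns:

-- Session 1 (transaction A)
BEGIN TRANSACTION;
UPDATE Employee SET Salary = 8000 WHERE Name = 'John';  -- not yet committed
-- ... an error occurs later ...
ROLLBACK TRANSACTION;                                    -- salary returns to 5000

-- Session 2 (transaction B), run between the UPDATE and the ROLLBACK
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Salary FROM Employee WHERE Name = 'John';         -- returns 8000: a dirty read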

(2) Non-repeatable read: refers to reading the same data more than once within one transaction. Before that transaction finishes, another transaction modifies the same data and commits. Because of the modification made between the first transaction's two reads, the data read the second time may differ from the data read the first time. Since the two reads within a single transaction return different values, this is called a non-repeatable read.

For example:

In transaction A, John's salary is read as 5000; the operation is not finished and the transaction has not been committed.

Meanwhile

Transaction B changes John's salary to 8000 and commits its transaction.

Then

In transaction A, John's salary is read again, and this time it is 8000. The two reads within the same transaction return different results: a non-repeatable read.
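A sketch of the same sequence, again with the hypothetical Employee table. Under SQL Server's default READ COMMITTED level the second read sees the new value; under REPEATABLE READ, transaction A would instead hold its shared locks and block transaction B's update until A finishes:

-- Session 1 (transaction A)
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- SQL Server's default level
BEGIN TRANSACTION;
SELECT Salary FROM Employee WHERE Name = 'John';  -- returns 5000

-- Session 2 (transaction B), committed while transaction A is still open
UPDATE Employee SET Salary = 8000 WHERE Name = 'John';

-- Session 1 again, inside the same transaction
SELECT Salary FROM Employee WHERE Name = 'John';  -- now returns 8000: a non-repeatable read
COMMIT TRANSACTION;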

(3) Phantom read: a phenomenon that occurs when transactions do not execute independently. For example, the first transaction modifies data in a table, and the modification involves all the rows in the table. At the same time, a second transaction also modifies data in this table by inserting a new row. Later, the user operating the first transaction finds that there are still unmodified rows in the table, as if experiencing a hallucination.

For example:

Currently there are 10 employees with a salary of 5000. Transaction A reads all employees with a salary of 5000 and counts 10 of them.

Now

Transaction B inserts a record with a salary of 5000 and commits.

At this point, transaction A reads the employees with a salary of 5000 again and finds 11 records. A phantom read has occurred.
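A sketch with the same hypothetical table. Under REPEATABLE READ the rows already read stay locked, but a new row can still be inserted into the range, which is exactly what produces the phantom; SERIALIZABLE would block the insert with range locks instead:

-- Session 1 (transaction A)
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM Employee WHERE Salary = 5000;  -- returns 10

-- Session 2 (transaction B), committed while transaction A is still open
INSERT INTO Employee (Name, Salary) VALUES ('NewHire', 5000);

-- Session 1 again, inside the same transaction
SELECT COUNT(*) FROM Employee WHERE Salary = 5000;  -- returns 11: a phantom read
COMMIT TRANSACTION;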

The key point of a non-repeatable read is modification:
Under the same conditions, data you have already read has a different value when you read it again.
The key point of a phantom read is insertion or deletion:
Under the same conditions, the first and second reads return a different number of records.

II. Exclusive locks, shared locks, update locks, optimistic locks, and pessimistic locks

1. The two ways of classifying locks

(1) From the point of view of the database system, locks fall into the following three types:

• Exclusive lock
A resource locked with an exclusive lock can be used only by the program that placed the lock; no other operation on it is accepted. SQL Server automatically uses exclusive locks when executing the data-modification commands INSERT, UPDATE, and DELETE. However, an exclusive lock cannot be placed on an object while other locks exist on it. Exclusive locks are not released until the end of the transaction.

• Shared lock
A resource locked with a shared lock can be read by other users, but other users cannot modify it. When a SELECT command executes, SQL Server usually places shared locks on the object. A shared lock on a data page is normally released as soon as the page has been read.

• Update lock
Update locks exist to prevent deadlocks. When SQL Server prepares to update data, it first places an update lock on the data object, so that the data cannot be modified but can still be read. When SQL Server determines that it will actually perform the update, it automatically converts the update lock into an exclusive lock. However, an update lock cannot be placed on an object while other locks exist on it.
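As a small illustration, T-SQL lets you request an update lock explicitly with the UPDLOCK table hint. The sketch below assumes a table test(id, value) like the one created later in this article:

BEGIN TRANSACTION;
-- UPDLOCK requests an update lock on the rows read: other sessions can still read them,
-- but a second session running this same pattern will wait here instead of both sessions
-- deadlocking when they try to convert shared locks to exclusive locks.
SELECT value FROM test WITH (UPDLOCK) WHERE id = 10;
UPDATE test SET value = value + 1 WHERE id = 10;  -- the update lock becomes an exclusive lock
COMMIT TRANSACTION;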

(2) From the programmer's point of view, locks fall into the following two types:

• Pessimistic lock
A pessimistic lock, as its name implies, takes a conservative attitude: it assumes the data will be modified by the outside world (including other transactions in the same system as well as transactions from external systems), and therefore keeps the data locked during the entire processing. A pessimistic lock implementation usually relies on the locking mechanism provided by the database (only a lock at the database layer can truly guarantee exclusive access to the data; otherwise, even if the application implements its own locking, it cannot guarantee that an external system will not modify the data).

• Optimistic lock
Compared with the pessimistic lock, the optimistic locking mechanism is more relaxed. Pessimistic locking in most cases relies on the database's locking mechanism to guarantee the greatest possible exclusivity of the operation, but this comes with a significant cost in database performance, especially for long transactions, which often cannot bear it.

Optimistic locking solves this problem to some extent. Optimistic locks are mostly implemented with a data versioning mechanism. What is a data version? It means adding a version identifier to the data; in a solution based on a database table, this is typically a "version" field added to the table. When the data is read, the version number is read along with it, and when the data is updated, the version number is incremented by one. At that point the version of the submitted data is compared with the current version recorded for that row in the database table: if the submitted version number is greater than the current version in the table, the update is applied; otherwise the submitted data is considered stale.
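A minimal sketch of this version-number check in SQL, using a hypothetical account(id, balance, version) table:

-- Read the row together with its version number
SELECT balance, version FROM account WHERE id = 10;   -- say this returns balance = 100, version = 3

-- At commit time, update only if nobody has changed the row in the meantime
UPDATE account
SET balance = 120, version = version + 1
WHERE id = 10 AND version = 3;
-- If the UPDATE reports 0 rows affected, another session has already changed the row,
-- and the application must treat this as a version conflict (retry, report an error, roll back, ...).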

2. How to use locks in the database

Let's start with pessimistic locks. In many databases, such as SQL Server, data locking usually takes the form of page-level locks, which means that inserts into a table are serialized: at any moment only one row can be inserted into the same table, and other inserts must wait until that row has been inserted. The consequence is degraded performance: when many users access the database concurrently and a table is operated on frequently, response times become very poor and the database often looks frozen. Oracle, on the other hand, uses row-level locks, which lock only the data that needs to be locked and have nothing to do with the rest of the rows, so inserting data into an Oracle table has essentially no impact on other sessions.

Note: pessimistic locks are aimed at concurrency problems, but in our applications optimistic locks are generally sufficient.

A pessimistic lock in Oracle requires an existing connection and comes in two forms; in terms of the SQL statement, one is FOR UPDATE and the other is FOR UPDATE NOWAIT. Let's look at an example.

First set up the database table for the test:

CREATE TABLE test (id, name, location, value, CONSTRAINT test_pk PRIMARY KEY (id))
AS SELECT deptno, dname, loc, 1 FROM scott.dept;

Here we take a table from Oracle's sample SCOTT schema and copy its data into our test table.

(1) The FOR UPDATE form

Now let's look at the FOR UPDATE locking method. We execute the following SELECT ... FOR UPDATE statement:

SELECT * FROM test WHERE id = 10 FOR UPDATE;

After this query has locked the row, open another SQL*Plus window and execute the same SQL statement again. You will find that SQL*Plus seems to hang: it retrieves no data and returns no result, it just sits there. The reason is that the SELECT ... FOR UPDATE executed first in the other session has locked the data. Because the locking mode here is to wait (as long as NOWAIT is not specified, the statement waits), the query in the second session (the stuck SQL*Plus) is in a pending state. Only when the first session finally commits or rolls back does the query in the second session return its result, and the second session's FOR UPDATE then takes the lock on the data.

But if the query in your second session is simply SELECT * FROM test WHERE id = 10, that is, a statement that does not lock the data with FOR UPDATE, it will not be blocked.

(2) The FOR UPDATE NOWAIT form

In another situation, when the data is already locked, that is, after a SELECT ... FOR UPDATE has been executed, what happens if we use FOR UPDATE NOWAIT in another session?

For example, the following SQL statement:

SELECT * FROM test WHERE id = 10 FOR UPDATE NOWAIT;

Because this statement specifies NOWAIT when retrieving the data, it immediately returns error ORA-00054 ("resource busy and acquire with NOWAIT specified") when it finds that the data is locked by another session. So in a program we can use NOWAIT to quickly determine whether the data is currently locked, and if it is, take the appropriate business measures.
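A small PL/SQL sketch of catching this error so that the program can react instead of simply failing; PRAGMA EXCEPTION_INIT maps ORA-00054 to a named exception (the block layout and variable names are only illustrative):

DECLARE
  row_locked EXCEPTION;
  PRAGMA EXCEPTION_INIT(row_locked, -54);  -- map ORA-00054 to a named exception
  v_value test.value%TYPE;
BEGIN
  SELECT value INTO v_value FROM test WHERE id = 10 FOR UPDATE NOWAIT;
  -- ... update the locked row here ...
EXCEPTION
  WHEN row_locked THEN
    -- the row is locked by another session: report it, retry later, etc.
    DBMS_OUTPUT.PUT_LINE('Row is locked by another session.');
END;
/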

Another question arises here: what happens when the data is locked and we try to update or delete it?

For example, we let the first session lock the row with id = 10, and then execute the following statement in the second session:

UPDATE test SET value = 2 WHERE id = 10;

This time we find that the UPDATE statement behaves just like the SELECT ... FOR UPDATE statement: it also stops and waits, and only after the first session releases the lock does the UPDATE proceed normally. Once the UPDATE has run, the data is locked by your UPDATE statement; as long as you have not committed the update, other sessions cannot lock that data for their own updates.

In short, a pessimistic lock in Oracle uses an Oracle connection to lock the data. In Oracle the performance cost of this row-level lock is very small; you just need to pay attention to your program logic so that you do not accidentally create a deadlock. And because the data is locked up front, there are no conflicts at commit time, which saves a lot of annoying conflict handling. The downside is that you must hold a database connection the whole time, from taking the lock until it is finally released.

In contrast to the pessimistic lock, we have the optimistic lock. As mentioned at the beginning, an optimistic lock assumes from the start that no data conflict will occur, and detects conflicts only when the data is finally submitted.

There are three common ways to implement an optimistic lock:

• The first is to copy the entire row into the application when the data is fetched, and at commit time, before the update starts, compare the data fetched earlier with the data currently in the database. If the two are identical, there is no conflict and the update can be committed; otherwise there is a concurrency conflict, which must be resolved by business logic.

• The second optimistic-lock approach is to use a version stamp; this is the approach used in Hibernate.

To use a version stamp, you first add a new column, for example a NUMBER column, to the table that needs optimistic locking, and increase it by 1 every time the data is updated.

For example, suppose two sessions operate on the same row. Both read the row while its version number is 1. When the first session updates the data and commits, it checks that the current version in the database is still 1, the same as the version it first read, so the commit goes through and the version number is increased by 1; the current version is now 2. When the second session then submits its update, it finds that the version in the database is 2, which does not match the version it read at the start, so it knows that someone else has already updated this data. At that point it must apply business handling, for example rolling back the whole transaction.

When you use a version stamp, the validation can be done either on the application side or with a trigger on the database side. However, database triggers carry a comparatively large performance cost, so validating on the application side is recommended over using a trigger.

• The third approach is somewhat similar to the second: it also adds a column to the table, but this time the column is of a timestamp type and stores the time of the last update to the row.

From Oracle 9i onward, a new data type, TIMESTAMP WITH TIME ZONE, can be used for this timestamp. It is the most precise of Oracle's time types, accurate to the microsecond (not yet to the nanosecond level). In general, given database processing time plus human think time, microsecond precision is far more than enough; in fact, millisecond or even second precision should be fine.

Just as with the version stamp, at update/commit time the timestamp of the row currently in the database is compared with the timestamp taken before the update began: if they match, everything is OK, otherwise there is a version conflict. If you do not want to write this code in your program, or cannot modify an existing program for some other reason, you can also put the timestamp optimistic-lock logic in a trigger or a stored procedure.
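A minimal Oracle sketch of this timestamp check, assuming a hypothetical last_updated column of type TIMESTAMP WITH TIME ZONE has been added to the test table:

-- Read the row together with its last-update timestamp
SELECT value, last_updated FROM test WHERE id = 10;

-- At commit time, update only if the timestamp is still the one read earlier
UPDATE test
SET value = 2, last_updated = SYSTIMESTAMP
WHERE id = 10 AND last_updated = :old_last_updated;
-- 0 rows updated means another session changed the row first: a conflict to be handled by business logic.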

III. The five isolation levels of transactions

The Isolation property supports a total of five transaction settings, as described below:

(1) DEFAULT

Use the isolation level configured for the database (the default); the actual isolation level is whatever the DBA has set as the default.

(2) READ_UNCOMMITTED

This is the lowest isolation level for transactions: it allows a transaction to see another transaction's uncommitted data.

Dirty reads, non-repeatable reads, and phantom reads can all occur (lowest isolation level, high concurrency performance).

(3) READ_COMMITTED

Guarantees that data modified by one transaction can only be read by another transaction after it has been committed; another transaction cannot read data that this transaction has not yet committed.

Dirty reads are avoided, but non-repeatable reads and phantom reads can still occur (locks the rows currently being read).

(4) REPEATABLE_READ

Prevents dirty reads and non-repeatable reads, but phantom reads can still occur (locks all rows that are read).

(5) SERIALIZABLE

This is the most expensive but most reliable transaction isolation level; transactions are processed as if executed serially, one after another.

It ensures that none of the above phenomena occur (locks the table).
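The DEFAULT setting above simply means using the database's own default level; in SQL Server itself the other four levels can be chosen per session with SET TRANSACTION ISOLATION LEVEL. A minimal T-SQL sketch using SERIALIZABLE and the hypothetical Employee table from earlier (any of the other three level names can be substituted):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- Under SERIALIZABLE, key-range locks prevent dirty reads, non-repeatable reads, and phantom reads,
-- at the cost of the lowest concurrency of the four levels.
SELECT COUNT(*) FROM Employee WHERE Salary = 5000;
-- ... further work in the same transaction ...
COMMIT TRANSACTION;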
