"Go" Transactions and locks in SQL Server

Source: Internet
Author: User
Tags: error status code, savepoint, set, time, sql error, try catch

Understanding Transactions and Locks

Transactions: The essential tool for maintaining logical data consistency and recoverability.

Lock: when multiple users access the same database resource, a lock is the mechanism that arbitrates who gets access first and with what rights. Without locks, concurrent transactions would interfere with each other, and safe, correct reading and writing of the data could not be guaranteed.

Deadlock: one of the heavyweight killers of database performance. Deadlocks arise when different transactions compete for the same data resources.

If this all sounds abstract and a bit mysterious, don't worry; below we will take a good look at each of them and see how they behave in practice.

Transactions: concept and classification

To explain it in one seamless sentence: if the transaction does not make it to the end, everything goes back to the starting point.

To illustrate:

In a transaction you write two SQL statements: one updates the status of the order table, the other decrements the inventory table by 1. If an error occurs while updating the order status, the transaction can be rolled back, the data reverts to its unmodified state, and the following statement is not executed, so the logical relationship between the two tables stays consistent and safe.

That is a transaction's stubborn temperament: either everything executes, or nothing does and the data returns to its original state.
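A minimal sketch of that scenario; the Orders and Inventory table names and their columns are assumptions for illustration, not tables from this article:

BEGIN TRAN
BEGIN TRY
    --update the order status (hypothetical Orders table)
    UPDATE Orders SET Status = 'Paid' WHERE OrderId = 1
    --decrement the stock (hypothetical Inventory table)
    UPDATE Inventory SET Stock = Stock - 1 WHERE ProductId = 1
    COMMIT TRAN  --both changes become permanent together
END TRY
BEGIN CATCH
    IF (@@TRANCOUNT > 0)
        ROLLBACK TRAN  --any error sends both tables back to their original state
END CATCH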

In textbook terms: transactions have atomicity, consistency, isolation, and durability (ACID).

    • Atomicity: a transaction must be an atomic unit of work; either all of it executes, or none of it does.
    • Consistency: when the transaction ends, all of its data is in a correct, consistent state.
    • Isolation: when multiple transactions run concurrently, each one is insulated from the others; it sees data as it was before another transaction started or after it finished, never a half-done intermediate state.
    • Durability: once a transaction commits, its changes are permanent and cannot be rolled back.

In SQL Server, transactions commonly fall into three types:

    • Autocommit transactions: the default SQL Server transaction mode. Every individual SQL statement is treated as its own transaction, which is why you will never see an UPDATE that changes two columns fix one column and leave the other unmodified.
    • Explicit transactions: in T-SQL, the transaction is started with BEGIN TRANSACTION, committed with COMMIT TRANSACTION, and rolled back with ROLLBACK TRANSACTION.
    • Implicit transactions: SET IMPLICIT_TRANSACTIONS ON switches on implicit transaction mode. No BEGIN TRANSACTION is needed to open a transaction; whenever one transaction ends, the next statement automatically starts a new one, and you only issue COMMIT TRANSACTION to commit or ROLLBACK TRANSACTION to roll back (a minimal sketch follows this list).
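For example, a minimal sketch of implicit transaction mode, using the Lives table from the examples further below:

SET IMPLICIT_TRANSACTIONS ON

--the first statement that touches the table implicitly opens a transaction; no BEGIN TRAN needed
INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('pork', 'soccer', 1)

--nothing is permanent until you explicitly commit (or roll back)
COMMIT TRAN

--the next statement would silently open another transaction, so switch the mode off when done
SET IMPLICIT_TRANSACTIONS OFF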
Application of an explicit transaction

There are four common statements:

    • BEGIN TRANSACTION: marks the start of a transaction.
    • COMMIT TRANSACTION: the transaction executed successfully and the data changes are made permanent.
    • ROLLBACK TRANSACTION: an error occurred during processing; roll back to the data state before the transaction started, or to a savepoint inside the transaction.
    • SAVE TRANSACTION: creates a savepoint inside the transaction, so that instead of rolling back the whole transaction you can roll back only to this point and discard just the part that failed.

That is the theory; now let me show you a few moves in practice. Watch carefully.

--Start the transaction
BEGIN TRAN
--Error-catching mechanism; watch closely, it can also be nested
BEGIN TRY
    --correct statement
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('pork', 'soccer', 1)
    --NUMB is of type int, so this statement errors
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('pork', 'soccer', 'abc')
    --correct statement
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('dog meat', 'basketball', 2)
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()    AS ErrorNumber,    --error code
           ERROR_SEVERITY()  AS ErrorSeverity,  --error severity level; TRY...CATCH does not catch errors with severity 10 or lower
           ERROR_STATE()     AS ErrorState,     --error state code
           ERROR_PROCEDURE() AS ErrorProcedure, --name of the stored procedure or trigger where the error occurred
           ERROR_LINE()      AS ErrorLine,      --line number where the error occurred
           ERROR_MESSAGE()   AS ErrorMessage    --full text of the error message
    IF (@@TRANCOUNT > 0)  --the global variable @@TRANCOUNT goes up by 1 each time a transaction is opened; it tells us whether a transaction is still open
        ROLLBACK TRAN     --because of the error, roll back to the start; even the first, successful insert is undone
END CATCH
IF (@@TRANCOUNT > 0)
    COMMIT TRAN  --if every insert had succeeded, Lives would now contain the new rows

--The table itself starts empty: an ID column, NUMB of type int, the rest nvarchar
SELECT * FROM Lives

--Start the transaction
BEGIN TRAN
--Error-catching mechanism; can be nested
BEGIN TRY
    --correct statement
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('pork', 'soccer', 1)
    --add a savepoint
    SAVE TRAN PigOneIn
    --NUMB is of type int, so this statement errors
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('pork', 'soccer', 'abc')
    --correct statement
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('dog meat', 'basketball', 3)
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()    AS ErrorNumber,    --error code
           ERROR_SEVERITY()  AS ErrorSeverity,  --error severity level; TRY...CATCH does not catch errors with severity 10 or lower
           ERROR_STATE()     AS ErrorState,     --error state code
           ERROR_PROCEDURE() AS ErrorProcedure, --name of the stored procedure or trigger where the error occurred
           ERROR_LINE()      AS ErrorLine,      --line number where the error occurred
           ERROR_MESSAGE()   AS ErrorMessage    --full text of the error message
    IF (@@TRANCOUNT > 0)
        ROLLBACK TRAN PigOneIn  --roll back only to the savepoint; the first insert is kept and the transaction stays open
END CATCH
IF (@@TRANCOUNT > 0)
    COMMIT TRAN  --commit whatever survived; had all three inserts succeeded, Lives would hold 3 rows

--The table itself starts empty: an ID column, NUMB of type int, the rest nvarchar
SELECT * FROM Lives

Using SET XACT_ABORT

SET XACT_ABORT ON/OFF specifies whether to roll back the whole current transaction when a statement fails. With ON, a run-time SQL error rolls back the entire transaction; with OFF, only the failing statement is rolled back and the remaining statements keep reading and writing the database as usual.

Note: XACT_ABORT only affects errors that occur at run time; if the SQL statement has a compile-time error, it has no effect.

DELETE Lives  --empty the table
SET XACT_ABORT OFF
BEGIN TRAN
    --correct statement
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('pork', 'football', 1)
    --NUMB is of type int; this oversized value fails at run time
    --(if you replaced the big number with '132dsaf', XACT_ABORT would not catch it)
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('pork', 'soccer', 12345646879783213)
    --correct statement
    INSERT INTO Lives (EAT, PLAY, NUMB) VALUES ('dog meat', 'basketball', 3)
COMMIT TRAN
SELECT * FROM Lives

When set to ON, the result set comes back empty: the oversized value causes an arithmetic overflow error at run time, and the entire transaction is rolled back.

Provoking a deadlock between two transactions

Here is how to reproduce it: open two query windows, put each of the following batches into its own window, and start both within 5 seconds of each other.

BEGIN TRAN
    UPDATE Lives SET PLAY = 'badminton'
    WAITFOR DELAY '0:0:5'  --wait 5 seconds before executing the next statement
    UPDATE dbo.Earth SET ANIMAL = 'tiger'
COMMIT TRAN

BEGIN TRAN
    UPDATE Earth SET ANIMAL = 'tiger'
    WAITFOR DELAY '0:0:5'  --wait 5 seconds before executing the next statement
    UPDATE Lives SET PLAY = 'badminton'
COMMIT TRAN

SELECT * FROM Lives
SELECT * FROM Earth

Why does this deadlock? To answer that, let's look at locks: what exactly is a lock?

Locks: concurrent transactions succeed or fail by the lock

When multiple users access the same data resource concurrently in transactions, the following data anomalies can occur:

    • Lost update: multiple users update the same data resource at the same time, so updates inevitably overwrite each other and reads and writes become inconsistent.
    • Non-repeatable read: one user reads the same data several times within one transaction while another user updates it in between, so the values read within the same transaction do not match.
    • Dirty read: the first transaction reads a table that the second transaction is still updating. If the second transaction has not finished, the first transaction sees data that is half updated and half not, which is meaningless.
    • Phantom read: after the first transaction reads a result set, the second transaction inserts or deletes rows in that set, so when the first transaction queries the set again, rows have appeared or disappeared.

Locking is the solution to these problems: it lets a transaction work on its own blocks of data while other transactions are kept away from those blocks. That is what a lock does.

From the database system's point of view, locks can be broadly divided into 6 types:

    • Shared lock (S): also called a read lock. Multiple transactions can read the data concurrently, but none may modify it; while a shared lock is held on a resource, no transaction can modify that resource until the read finishes and the shared lock is released.
    • Exclusive lock (X): also called a write lock. While a transaction is inserting, updating, or deleting a resource, no other transaction may touch that resource until the exclusive lock is released, which prevents several conflicting operations on the same resource at the same time.
    • Update lock (U): a lock mode designed to prevent deadlocks. When two transactions each read a resource first and then try to modify it, using only shared and exclusive locks can end in a deadlock; update locks avoid that. Only one transaction at a time can hold an update lock on a resource; if the transaction goes on to modify the resource, the update lock is converted into an exclusive lock, otherwise into a shared lock (see the sketch after this list).
    • Intent lock: SQL Server places intent locks higher in the hierarchy (table, page) to signal that shared, exclusive, or update locks will be taken on lower-level resources such as rows. For example, an intent shared lock at the table level means the transaction will take shared locks on that table's pages or rows. An intent lock prevents other transactions from acquiring incompatible locks on the higher-level object, and it improves performance because the engine does not have to inspect every row or page to decide whether a compatible lock can be granted. There are three kinds: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).
    • Schema lock: prevents the table structure from being modified while the table is being accessed concurrently.
    • Bulk update lock: allows multiple threads to bulk-insert data into the same table concurrently, while preventing other processes that are not bulk loading from accessing the table.
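As promised in the update-lock bullet, here is a minimal sketch of the read-then-modify pattern using the UPDLOCK table hint, so the initial read takes an update lock instead of a shared lock (the Lives table and values come from the earlier examples):

BEGIN TRAN
    --take an update lock while reading: two sessions cannot both hold it on the same rows,
    --so they cannot deadlock later when converting their locks to exclusive locks
    SELECT PLAY FROM Lives WITH (UPDLOCK) WHERE EAT = 'pork'

    --the modification converts the update lock into an exclusive lock
    UPDATE Lives SET PLAY = 'soccer' WHERE EAT = 'pork'
COMMIT TRAN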

Whether these lock modes are mutually compatible, that is, whether they can be held on the same resource at the same time, is shown in the table below.

Requested mode (rows) vs. existing granted mode (columns):

                                     IS    S     U     IX    SIX   X
Intent shared (IS)                   Yes   Yes   Yes   Yes   Yes   No
Shared (S)                           Yes   Yes   Yes   No    No    No
Update (U)                           Yes   Yes   No    No    No    No
Intent exclusive (IX)                Yes   No    No    Yes   No    No
Shared with intent exclusive (SIX)   Yes   No    No    No    No    No
Exclusive (X)                        No    No    No    No    No    No

For details on lock compatibility, see: http://msdn.microsoft.com/zh-cn/library/ms186396.aspx

For lock granularity and hierarchy, see: http://msdn.microsoft.com/zh-cn/library/ms189849(v=sql.105).aspx

Deadlock

What is a deadlock, and why does it occur? The two transactions from the "Provoking a deadlock between two transactions" section above should make the explanation more vivid.

The example is this:

First transaction (call it A): update the Lives table -> wait 5 seconds -> update the Earth table

Second transaction (call it B): update the Earth table -> wait 5 seconds -> update the Lives table

Start transaction B within 5 seconds of starting transaction A, and a deadlock occurs.

The process is like this:

    1. A updates the Lives table and requests an exclusive lock on Lives: granted.
    2. B updates the Earth table and requests an exclusive lock on Earth: granted.
    3. 5 seconds pass.
    4. A updates Earth and requests an exclusive lock on Earth; B already holds it, so A waits.
    5. B updates Lives and requests an exclusive lock on Lives; A already holds it, so B waits.

Each transaction waits for the other to release its resource, so reads and writes on those resources are blocked; this mutual waiting is the deadlock phenomenon. That is the "why" behind the example above.

However, the database does not let the wait go on forever: the database engine periodically checks for deadlocks, and as soon as it finds one it picks one transaction as the victim and rolls it back. It is a bit like two people meeting in the middle of a single-plank bridge: if neither is to fall off, one of them has to turn back. This mutual waiting wastes time and resources, so we should try to avoid it.

Which transaction turns back, that is, which one becomes the victim, is something we can influence. The syntax:

SET DEADLOCK_PRIORITY <level>

The named priority levels are LOW < NORMAL < HIGH; the session with the lower priority is the one chosen as the deadlock victim.

You can also express the level as an integer from -10 to 10: LOW is equivalent to -5, NORMAL to 0, and HIGH to 5.
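For example, a minimal sketch that marks the second session of the earlier deadlock demo as the preferred victim:

--run in the second query window of the deadlock demo above;
--with LOW priority this session is the one the engine will pick as the victim
SET DEADLOCK_PRIORITY LOW
BEGIN TRAN
    UPDATE Earth SET ANIMAL = 'tiger'
    WAITFOR DELAY '0:0:5'
    UPDATE Lives SET PLAY = 'badminton'
COMMIT TRAN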

Reducing deadlocks to improve database performance

Deadlocks waste time and resources, but in a large, highly concurrent database they cannot be avoided entirely; all we can do is make them rarer.

    1. Access database resources in the same order; the example above would then not deadlock (see the sketch after this list).
    2. Keep transactions short and do not let one transaction handle overly complex reads and writes. A transaction that is too complex holds more resources for longer, conflicts with other transactions more easily, and raises the probability of deadlock.
    3. Try not to wait for user input inside a transaction, for example asking the user to confirm new data before the whole transaction commits; that prolongs the time the transaction holds resources and also raises the deadlock probability.
    4. Minimize the level of concurrency on the database where possible.
    5. Use partitioned tables and partitioned views where possible, placing data on different disks and filegroups so that data in different partitions is spread out; locks placed on one table then cause other transactions to wait less.
    6. Avoid operations that take a long time or span complicated relationships between tables.
    7. Use a lower isolation level where you can: a lower isolation level holds shared locks for less time than a higher one, which reduces lock contention.
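A minimal sketch of point 1: rewrite the second deadlock-demo transaction so that both sessions touch the tables in the same order (Lives first, then Earth); the two sessions then simply queue instead of deadlocking.

--second query window, now accessing the tables in the same order as the first window
BEGIN TRAN
    UPDATE Lives SET PLAY = 'badminton'
    WAITFOR DELAY '0:0:5'
    UPDATE Earth SET ANIMAL = 'tiger'
COMMIT TRAN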

Refer to: http://msdn.microsoft.com/zh-cn/library/ms191242(v=sql.105).aspx

To view lock and transaction activity:

--View lock activity
SELECT * FROM sys.dm_tran_locks
--View the status of active transactions
DBCC OPENTRAN

Refer to: http://msdn.microsoft.com/zh-cn/library/ms190345.aspx

Setting isolation levels for transactions

The so-called transaction isolation level describes how deeply a concurrent transaction may read the same resource that another transaction is working on. There are 5 levels:

    • READ UNCOMMITTED: the lowest isolation level. A transaction can read data that another transaction is still processing and has not yet committed; a read at this level is called a dirty read.
    • READ COMMITTED: the default level. Dirty reads are not possible, so data that another transaction is processing but has not committed cannot be read, but the data can still be modified between your reads.
    • REPEATABLE READ: data that another transaction is processing cannot be read, and data read by this transaction cannot be modified by others before the transaction ends.
    • SNAPSHOT: the transaction works from a snapshot of the data as committed at the moment the transaction started, so it only sees changes made before the transaction began.
    • SERIALIZABLE: the highest isolation level; only data committed before the transaction began is visible, as if the transactions ran one after another.

--Syntax
SET TRANSACTION ISOLATION LEVEL <level>

Example of the READ UNCOMMITTED isolation level:

BEGIN TRAN
    SET DEADLOCK_PRIORITY LOW
    UPDATE Earth SET ANIMAL = 'tiger'
    WAITFOR DELAY '0:0:5'  --wait 5 seconds before executing the next statement
ROLLBACK TRAN

Open another query window and execute the following statements:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM Earth     --reads the data that is being modified: a dirty read
WAITFOR DELAY '0:0:5'   --after 5 seconds the other transaction has rolled back
SELECT * FROM Earth     --the data after the rollback

Example of the READ COMMITTED isolation level:

--first query window
BEGIN TRAN
    UPDATE Earth SET ANIMAL = 'tiger'
    WAITFOR DELAY '0:0:10'  --wait 10 seconds before executing the next statement
ROLLBACK TRAN

--second query window
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT * FROM Earth     --no tiger comes back: a dirty read is not possible
UPDATE Earth SET ANIMAL = 'monkey 1'   --but the data can still be modified (this waits for the first transaction's lock)
WAITFOR DELAY '0:0:10'  --by now the first transaction has rolled back
SELECT * FROM Earth     --the data now reflects the 'monkey 1' update

I won't list examples for the remaining levels; you get the idea.
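For completeness, though, here is a minimal sketch of the SNAPSHOT level; the database name MyTestDb is a placeholder, and note that snapshot isolation must first be enabled at the database level:

--enable snapshot isolation once per database (MyTestDb is a placeholder name)
ALTER DATABASE MyTestDb SET ALLOW_SNAPSHOT_ISOLATION ON

--first query window: modify Earth but keep the transaction open for a while
BEGIN TRAN
    UPDATE Earth SET ANIMAL = 'tiger'
    WAITFOR DELAY '0:0:10'
ROLLBACK TRAN

--second query window: read under SNAPSHOT while the first transaction is still open
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
    SELECT * FROM Earth  --returns the last committed version, not 'tiger', and does not block
COMMIT TRAN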

Setting the lock timeout

When a deadlock happens, the database engine detects it automatically and resolves the problem, but that is passive: it can only react after the deadlock has already occurred.

We can also take the initiative and set a lock timeout: once a statement has been blocked on a locked resource for longer than the configured time, the blocked statement is automatically cancelled, the wait is released, and error 1222 is raised.

Good things usually have two sides, and this tuning knob has a drawback: when the timeout fires, the statement is cancelled and the wait released, but the current transaction is not rolled back, which can leave the data wrong. You need to catch error 1222 in your code and handle the current transaction there so that the data stays correct.

--View the current lock timeout; the default is -1
SELECT @@LOCK_TIMEOUT
--Set the timeout
SET LOCK_TIMEOUT 0  --0 means error out immediately when a resource is locked instead of waiting; the current transaction is not rolled back, so be careful when you set this and make sure you can handle the aftermath
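A minimal sketch of catching error 1222 directly in T-SQL and rolling the transaction back yourself; in a real application you might do this handling in the calling program instead:

SET LOCK_TIMEOUT 1000  --give up waiting for any lock after 1 second
BEGIN TRY
    BEGIN TRAN
        UPDATE Lives SET PLAY = 'badminton'  --may block on a lock held by another session
    COMMIT TRAN
END TRY
BEGIN CATCH
    IF (ERROR_NUMBER() = 1222)  --1222 = "Lock request time out period exceeded"
        PRINT 'Lock wait timed out; cleaning up ourselves'
    IF (@@TRANCOUNT > 0)
        ROLLBACK TRAN           --the engine cancelled only the statement, so roll the transaction back here
END CATCH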

Thanks for reading carefully; I hope this shared something useful with you. That's all.

"Go" Transactions and locks in SQL Server

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.