SQL Server Transaction and Concurrency Basics (to be continued)


Transaction and concurrency

Before learning about transactions and concurrency, we should first understand two concepts:

1. What is a transaction?

A transaction is the basic unit of work in SQL Server. It usually consists of several SQL commands that read and update the database, but those operations are not considered final until a COMMIT command is issued.

2. What is concurrency?

Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. Since it is a capability, a system's concurrency can be stronger or weaker. How, then, do we judge how strong a system's concurrency is?

Generally, the more concurrent user processes a system can keep active without mutual interference, the better its concurrency.

Cause analysis that may affect concurrency:

When a process that is changing data blocks other processes from reading it, or a process that is reading data blocks other processes from changing it, concurrency decreases. Concurrency also suffers when multiple processes attempt to change the same data at the same time and cannot all succeed without sacrificing data consistency. To get an intuition for concurrency, it is easy to think of the Ministry of Railways' ticket-booking website: because its capacity to handle concurrent requests was insufficient, it could crash at booking peaks, which hurt online ticket sales. Clearly, the database system behind a large website must be able to handle high concurrency.

 

Method for processing concurrency:

SQL Server 2008 provides two concurrency models: optimistic and pessimistic. Which model is in effect is determined by the transaction isolation level, which we specify with the following command:

SET TRANSACTION ISOLATION LEVEL <isolation level>
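As a minimal sketch, setting a pessimistic (locking) level is a single session-level command, while the optimistic (row-versioning) levels additionally require a database-level option to be switched on first. The database name MyShop here is purely illustrative:

```sql
-- Session-level: choose the isolation level for subsequent transactions.
-- The locking (pessimistic) levels need no further setup.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Database-level: row versioning (optimistic) must be enabled before
-- SNAPSHOT isolation can be used. 'MyShop' is a hypothetical database name.
ALTER DATABASE MyShop SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make the READ COMMITTED level itself use row versioning:
ALTER DATABASE MyShop SET READ_COMMITTED_SNAPSHOT ON;
```

With ALLOW_SNAPSHOT_ISOLATION on, a session can then issue SET TRANSACTION ISOLATION LEVEL SNAPSHOT to opt into the optimistic model.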

Differences between the two:

In both models, a conflict can occur when two processes attempt to modify the same data at the same time. The difference between the two models is whether the conflict is avoided before it occurs or handled in some way after it occurs.

Pessimistic concurrency model:

Pessimistic concurrency is SQL Server's default behavior: it acquires locks to block access to data that another process is using. Pessimistic concurrency assumes that the system performs enough data-modification operations that any given read or write is likely to be affected by another user's modifications. It prevents conflicts by taking locks on the data being read so that other processes cannot modify it. In other words, in the pessimistic model, readers block writers and writers block readers.

 

Optimistic Concurrency model:

Optimistic concurrency assumes that the system performs few enough data-modification operations that no single transaction is likely to touch data another transaction is modifying. Under optimistic concurrency, SQL Server uses row versioning to let a data reader see the state of the data as it was before the modification began. Old row versions are saved, so a process reading data sees the data as of the moment it started reading and is not affected by any process that is making changes to it. In other words, readers do not block writers and writers do not block readers. However, a writer can and will block another writer, and this is where conflicts arise. When a conflict occurs, SQL Server raises an error, and it is the application's responsibility to respond to that error.
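A hedged sketch of how such a conflict surfaces under snapshot isolation. The table and column names (tb, account, accountid) follow the transfer example later in this article; the error SQL Server raises in this situation is update-conflict error 3960:

```sql
-- Session 1: start a snapshot (optimistic) transaction.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT account FROM tb WHERE accountid = 'A';  -- reads the row version as of transaction start

-- Meanwhile, Session 2 commits a change to the same row:
--   UPDATE tb SET account = account + 10 WHERE accountid = 'A';

-- Back in Session 1: the row changed after this snapshot began, so this
-- UPDATE fails with update-conflict error 3960 and the transaction is
-- rolled back. The application must catch the error and retry.
UPDATE tb SET account = account + 100 WHERE accountid = 'A';
COMMIT TRANSACTION;
```

This illustrates the point above: the reader was never blocked, but the writer-writer conflict was detected only after it occurred, and handling it falls to the application.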

 

Transaction processing:

ACID properties

Atomicity: atomicity ensures that each transaction is processed as an all-or-nothing unit of work: either every operation in it takes effect, or none does.

For example:

Simulate bank transfer

BEGIN TRANSACTION tran1

UPDATE tb SET account = account + 100 WHERE accountid = 'A'

UPDATE tb SET account = account - 100 WHERE accountid = 'B'

COMMIT TRANSACTION tran1

The two UPDATE statements in the simple transaction tran1 above either both succeed or both fail; it is impossible for one to succeed while the other fails.

In other words, atomicity ensures that all statements inside a transaction are treated as one atom, and an atom is indivisible.

Consistency: consistency ensures that the system is not allowed to enter an incorrect logical state. All constraints and rules are followed even if a system failure occurs.

For example, in the bank-transfer simulation above, consistency ensures that account A is increased by 100 yuan only if account B is reduced by 100 yuan at the same time. It prevents a system failure from leaving account A increased by 100 while account B has not been reduced correspondingly. This keeps the data consistent.

Isolation: isolation shields a transaction from the updates of other concurrent, unfinished transactions. In the example above, another transaction cannot see the work in progress inside this one.

Durability: after a transaction is committed, SQL Server's durability property ensures that the transaction's effects persist even if a system failure occurs.

Dependencies between transactions

Lost update: two processes read the same data, each computes a new value based on it, and each attempts to write its new value back; one write overwrites the other, and that update is lost. For example:

Clerk A and Clerk B both receive deliveries of parts. Each checks the current inventory and sees 25 parts in stock. Clerk A receives 50 parts, so he writes 50 + 25 = 75 to the current inventory. Clerk B receives 20 parts, so he writes 20 + 25 = 45, overwriting the 75 written by Clerk A; Clerk A's update is lost.
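One common way to avoid the lost update above in T-SQL is to make the increment a single atomic statement instead of a read-then-write, or to hold an update lock across the read. This is a sketch; the parts table and its stock and partid columns are hypothetical names for the inventory in the example:

```sql
-- Atomic alternative: let the database do the arithmetic. Concurrent
-- increments serialize on the row lock instead of overwriting each other.
UPDATE parts SET stock = stock + 50 WHERE partid = 1;  -- Clerk A's delivery
UPDATE parts SET stock = stock + 20 WHERE partid = 1;  -- Clerk B's delivery

-- If the value must be read first, take an update lock (UPDLOCK) so no
-- other process can change the row between the read and the write:
BEGIN TRANSACTION;
SELECT stock FROM parts WITH (UPDLOCK) WHERE partid = 1;
UPDATE parts SET stock = 45 WHERE partid = 1;  -- 25 + 20, computed by the application
COMMIT TRANSACTION;
```

With either approach the final inventory is 25 + 50 + 20 = 95 regardless of the order in which the clerks run.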

Dirty read: a process reads uncommitted data. For example, Clerk A updates the inventory from 25 to 75. Before he commits, a salesperson sees that the current inventory is 75 and promises to ship 60 parts to a customer the next day. If Clerk A then finds a defect in this batch of parts, returns it to the supplier, and updates the inventory back to 25, the salesperson has in fact performed a dirty read and acted on uncommitted data. Dirty reads are not allowed by default.

Note: the process updating the data cannot control whether another process reads it before the update is committed; it is the reading process that decides whether to read data that is not guaranteed to have been committed.
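The note above can be sketched in T-SQL: the reader opts in to dirty reads by lowering its own isolation level (or with a per-table NOLOCK hint), and the writer cannot prevent it. The parts table follows the hypothetical naming of the inventory example:

```sql
-- Session 1 (writer): updates the row but has not committed yet.
BEGIN TRANSACTION;
UPDATE parts SET stock = 75 WHERE partid = 1;
-- ... transaction still open ...

-- Session 2 (reader): chooses to read uncommitted data.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT stock FROM parts WHERE partid = 1;              -- sees 75: a dirty read
-- Equivalent per-table hint, without changing the session level:
SELECT stock FROM parts WITH (NOLOCK) WHERE partid = 1;

-- Session 1 now rolls back; the 75 the reader saw never officially existed.
ROLLBACK TRANSACTION;
```

Under the default READ COMMITTED level, Session 2's SELECT would instead block until Session 1 commits or rolls back.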

Non-repeatable read: if the same process reads the same data in two separate read operations within one transaction and gets different values, the read is non-repeatable.
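Under the default READ COMMITTED level this can happen, because shared locks are released as soon as each read finishes. Raising the session to REPEATABLE READ holds the shared locks until the transaction ends, so a second read returns the same value. A sketch, using the same hypothetical parts table as above:

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT stock FROM parts WHERE partid = 1;  -- first read; shared lock is held
-- A concurrent UPDATE of this row now blocks until we finish.
SELECT stock FROM parts WHERE partid = 1;  -- guaranteed to return the same value
COMMIT TRANSACTION;
```

Note that REPEATABLE READ protects only rows already read; newly inserted rows matching the query's condition can still appear (the phantom-read problem below).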

 

Phantom read: a transaction re-runs a query with a search condition and gets a different set of rows, because another transaction has inserted (or deleted) rows matching that condition in the meantime.

 

Transaction isolation levels:

Uncommitted read:

Committed read:

Repeatable read:

Snapshot:

Serializable:

The following table lists the permitted behaviors at each isolation level:

Isolation level             Dirty read   Non-repeatable read   Phantom read   Concurrency control
Uncommitted read            Y            Y                     Y              pessimistic
Committed read (locking)    N            Y                     Y              pessimistic
Committed read (snapshot)   N            Y                     Y              optimistic
Repeatable read             N            N                     Y              pessimistic
Snapshot                    N            N                     N              optimistic
Serializable                N            N                     N              pessimistic

Note: Y in the preceding table indicates that the behavior can occur at that level, and N indicates that it cannot.

 
