Implementing Concurrency Control with SQL Server 2000 and ADO.NET


1 Concurrency Consistency Problems

Common concurrency problems include lost updates, non-repeatable reads, dirty reads, and phantom reads (in some texts, phantom reads are classified as a kind of non-repeatable read).

1.1 Lost Updates

Let's look at an example that illustrates the data inconsistency that concurrent operations can cause.

Consider the following sequence of events in an airline booking system:

    1. Ticketing point A reads the remaining ticket count a for a flight and gets a = 16.
    2. Ticketing point B reads the same count a for the same flight, also getting 16.
    3. Point A sells a ticket, computes a ← a - 1 (so a = 15), and writes a back to the database.
    4. Point B also sells a ticket, computes a ← a - 1 (so a = 15), and writes a back to the database.

As a result, two tickets were sold, but the ticket count in the database decreased by only 1.

In summary: two transactions T1 and T2 read the same data and modify it, and T2's committed result destroys T1's committed result, so T1's modification is lost. The solutions discussed in the previous article (2.1.4, data deletion and update) mostly target exactly this kind of concurrency problem; a sketch of that style of solution follows. But there are still several kinds of problems that such methods cannot solve:
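As a hedged illustration (not code from this article), the sketch below avoids the lost update by letting the database perform the decrement atomically instead of doing read-modify-write in the client; the flight table and column names are assumptions for illustration:

   // Minimal sketch: the UPDATE both tests and decrements the balance in one
   // statement, so two concurrent sales cannot overwrite each other.
   // Table/column names (flight, balance, flightId) are illustrative assumptions.
   private static bool SellOneTicket(SqlConnection conn)
   {
       SqlCommand cmd = new SqlCommand(
           "UPDATE flight SET balance = balance - 1 " +
           "WHERE flightId = 1 AND balance > 0", conn);
       // One row affected means the sale succeeded; zero means sold out.
       return cmd.ExecuteNonQuery() == 1;
   }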

1.2 Non-Repeatable Reads

A non-repeatable read occurs when transaction T1 reads data and transaction T2 then updates it, so that T1 cannot reproduce its previous read. Specifically, non-repeatable reads cover three situations:

    • Transaction T1 reads a data item and transaction T2 modifies it; when T1 reads the item again, it gets a value different from the first. For example, T1 reads b=100; T2 reads the same item b, modifies it, and writes b=200 back to the database; when T1 rereads b, it finds 200, inconsistent with its first read.
    • After transaction T1 reads some records from the database under a certain condition, transaction T2 deletes some of them; when T1 reads again under the same condition, some of the records have quietly disappeared.
    • After transaction T1 reads some records from the database under a certain condition, transaction T2 inserts some new records; when T1 reads again under the same condition, it finds more records than before. (This case is also called a phantom read.)

1.3 Read "Dirty" data

Reading "Dirty" data means that a transaction T1 modify a data and write it back to disk. Transaction T2 read the same data, T1 for some reason to be undone, then T1 the modified data restore the original value, T2 read the data is inconsistent with the data in the database, then T2 read the data is "dirty" data, that is, incorrect data.

The main reason for these three kinds of data inconsistency is that concurrent operations break the isolation of transactions. Concurrency control means scheduling concurrent operations in a correct way, so that the execution of one user's transaction is not interfered with by other transactions, thereby avoiding data inconsistency.

2 Solutions to Concurrency Consistency Problems

2.1 Locking

Locking is a very important technique for implementing concurrency control. Locking means that before a transaction T operates on a data object (such as a table or a record), it asks the system to lock the object. The lock gives transaction T a degree of control over the data object: other transactions cannot update the object until transaction T releases its lock.

There are two basic lock types: exclusive locks (X locks) and shared locks (S locks).

An exclusive lock is also called a write lock. If transaction T places an X lock on data object A, only T may read and modify A, and no other transaction may place a lock of any type on A until T releases its lock. This ensures that other transactions can neither read nor modify A before T releases the lock on A.

A shared lock is also called a read lock. If transaction T places an S lock on data object A, other transactions may also place S locks on A, but not X locks, until T releases its S lock on A. This ensures that other transactions can read A, but cannot modify it in any way, before T releases its S lock on A.
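As a hedged aside, SQL Server 2000 also lets you request these two lock types explicitly with table hints (HOLDLOCK and XLOCK are real SQL Server hints; the student table is the one used in the experiments below):

   // Minimal sketch: taking S and X locks explicitly via table hints.
   // Assumes an open SqlConnection named conn.
   SqlTransaction tx = conn.BeginTransaction();

   // HOLDLOCK keeps the shared (S) lock until the end of the transaction;
   // other transactions may still read the row, but cannot modify it.
   SqlCommand sRead = new SqlCommand(
       "SELECT age FROM student WITH (HOLDLOCK) WHERE id = 1", conn, tx);
   sRead.ExecuteScalar();

   // XLOCK takes an exclusive (X) lock on the row: other transactions can
   // neither read (at ReadCommitted or above) nor modify it until tx ends.
   SqlCommand xRead = new SqlCommand(
       "SELECT age FROM student WITH (XLOCK, ROWLOCK) WHERE id = 1", conn, tx);
   xRead.ExecuteScalar();

   tx.Commit();  // releases both locks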

2.2 Locking Protocols

When using the two basic lock types X and S, we also need to agree on rules for locking data objects: when to request an X or S lock, how long to hold it, when to release it, and so on. These rules are called a locking protocol, and different rules form different locking protocols. The three-level locking protocol is introduced below; to varying degrees, the three levels solve the inconsistency problems of lost updates, non-repeatable reads, and dirty reads, and provide a degree of assurance that concurrent operations are scheduled correctly. Only the definitions of the three levels are given here, without further discussion.

    • Level 1 Locking Protocol

The Level 1 locking protocol: transaction T must place an X lock on data before modifying it, and hold the lock until the end of the transaction. A transaction ends either normally (COMMIT) or abnormally (ROLLBACK). The Level 1 protocol prevents lost updates and guarantees that transaction T is recoverable. Under this protocol, a transaction that only reads data without modifying it needs no lock at all, so the protocol guarantees neither repeatable reads nor freedom from "dirty" reads.

    • Level 2 Locking Protocol

The Level 2 locking protocol: the Level 1 protocol, plus the rule that transaction T must place an S lock on data item R before reading it and release the S lock after reading. Besides preventing lost updates, the Level 2 protocol also prevents reading "dirty" data.

    • Level 3 Locking Protocol

The Level 3 locking protocol: the Level 1 protocol, plus the rule that transaction T must place an S lock on data item R before reading it and hold the lock until the end of the transaction. Besides preventing lost updates and "dirty" reads, the Level 3 protocol also prevents non-repeatable reads (the table below summarizes all three levels).
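The guarantees of the three levels can be summarized as follows (a standard database-theory summary, added here for reference):

   Protocol   Lost update   Dirty read   Non-repeatable read
   Level 1    prevented     possible     possible
   Level 2    prevented     prevented    possible
   Level 3    prevented     prevented    prevented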

2.3 Transaction Isolation Levels

Although database theory provides a complete solution to concurrency consistency problems, it is clearly difficult for programmers to manage the timing of locking and unlocking themselves. The vast majority of databases and development tools therefore provide transaction isolation levels, which let users handle concurrency consistency problems in a more relaxed way. The four common isolation levels are ReadUncommitted, ReadCommitted, RepeatableRead, and Serializable. Under different isolation levels, the way the database is accessed and the results it returns may differ. Through several experiments below, we will drill into how to use the transaction isolation levels and how SQL Server converts them into locks behind the scenes.
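For reference, each ADO.NET isolation level corresponds to a standard T-SQL statement, so the same effect can be set server-side; a minimal sketch (the SET TRANSACTION ISOLATION LEVEL syntax is standard SQL Server):

   // Equivalent T-SQL levels: READ UNCOMMITTED, READ COMMITTED,
   // REPEATABLE READ, SERIALIZABLE. Assumes an open SqlConnection named conn.
   SqlCommand setLevel = new SqlCommand(
       "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ", conn);
   setLevel.ExecuteNonQuery();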

2.3.1 ReadUncommitted and ReadCommitted

ReadUncommitted is the lowest isolation level. At this level you may read dirty data that others have not yet committed, and non-repeatable reads also occur at this transaction isolation level.

ReadCommitted is the default level for many databases. At this isolation level there is no danger of reading uncommitted data, but non-repeatable reads (including phantom reads) still cannot be avoided. When your system has very stringent concurrency-control requirements, this default level may not protect the data adequately, but for most applications it is sufficient.

We use the following experiment to test:

First configure the SQL Server 2000 database and attach the Dbapp database. Then create a console application in Visual Studio .NET and add the necessary namespace imports:

   using System;
   using System.Data;
   using System.Data.SqlClient;
   using System.Configuration;

Then establish two database connections, each with its own transaction isolation level:

   private static SqlConnection conn1;
   private static SqlConnection conn2;
   private static SqlTransaction tx1;
   private static SqlTransaction tx2;

   private static void Setup()
   {
       conn1 = new SqlConnection(connectionString);
       conn1.Open();
       tx1 = conn1.BeginTransaction(IsolationLevel.ReadUncommitted);

       conn2 = new SqlConnection(connectionString);
       conn2.Open();
       tx2 = conn2.BeginTransaction(IsolationLevel.ReadCommitted);
   }

Transaction 1 is allowed to read uncommitted data; transaction 2 may read only committed data.

In the main program we simulate two users' interleaved operations to produce a concurrency consistency problem:

   public static void Main()
   {
       Setup();
       try
       {
           ReadUnCommittedDataByTransaction1();
           UnCommittedUpdateByTransaction2();
           ReadUnCommittedDataByTransaction1();

           tx2.Rollback();
           Console.WriteLine("\n--Transaction 2 rollbacked!\n");

           ReadUnCommittedDataByTransaction1();
           tx1.Rollback();
       }
       catch { ... }
   }

The first step uses the ReadUnCommittedDataByTransaction1 method to read, through transaction 1, the information of the student whose ID is 1. At this point the data is the database's initial data.

In the second step, the UnCommittedUpdateByTransaction2 method is invoked: an UPDATE command is sent through transaction 2 to modify the database, but the change is not yet committed.

In the third step, ReadUnCommittedDataByTransaction1 is called again to read the data through transaction 1, and you will find that transaction 1 (ReadUncommitted) reads the uncommitted update issued by transaction 2.

In the fourth step, transaction 2 abandons the change and rolls back with tx2.Rollback();.

In the fifth step, ReadUnCommittedDataByTransaction1() is called once more to read the data; what it reads now is the rolled-back data.

The results of the program operation are as follows:

   --Read age from database:
   Age: 20

   --Run an uncommitted command:
   UPDATE student SET age=30 WHERE id=1

   --Read age from database:
   Age: 30

   --Transaction 2 rollbacked!

   --Read age from database:
   Age: 20

The ReadUnCommittedDataByTransaction1() and UnCommittedUpdateByTransaction2() methods are defined as follows:

   private static void UnCommittedUpdateByTransaction2()
   {
       string command = "UPDATE student SET age=30 WHERE id=1";
       Console.WriteLine("\n--Run an uncommitted command:\n{0}\n", command);
       SqlCommand cmd = new SqlCommand(command, conn2);
       cmd.Transaction = tx2;
       cmd.ExecuteNonQuery();
   }

   private static void ReadUnCommittedDataByTransaction1()
   {
       Console.WriteLine("--Read age from database:");
       SqlCommand cmd = new SqlCommand("SELECT age FROM student WHERE id = 1", conn1);
       cmd.Transaction = tx1;
       try
       {
           int age = (int)cmd.ExecuteScalar();
           Console.WriteLine("Age: {0}", age);
       }
       catch (SqlException e)
       {
           Console.WriteLine(e.Message);
       }
   }

As the experiment above shows, at the ReadUncommitted isolation level the program may read uncommitted data; in exchange, this level places the fewest locks on database resources.
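A related aside: the same dirty-read behavior can be requested for a single query with SQL Server's NOLOCK table hint, without lowering the whole transaction's isolation level:

   // Minimal sketch: a per-query dirty read via the NOLOCK hint; it carries
   // the same risk of reading uncommitted data as ReadUncommitted.
   // Assumes a connection conn with no open transaction.
   SqlCommand dirtyRead = new SqlCommand(
       "SELECT age FROM student WITH (NOLOCK) WHERE id = 1", conn);
   Console.WriteLine(dirtyRead.ExecuteScalar());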

The complete code for this experiment can be found under "Samplecode\chapter 2\lab 2-6".

Now let's do another experiment (this one calls for quick action, or you may not see the intended effect). First change the Setup() method in the code above from

tx1 = conn1.BeginTransaction(IsolationLevel.ReadUncommitted);

To

tx1 = conn1.BeginTransaction(IsolationLevel.ReadCommitted);

Run the code again and you'll notice that the program stops moving at the third step; if you are patient enough to wait, you'll see the error "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." What exactly does this message mean? Let's explore what happened:
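How long the program waits before failing is governed by ADO.NET's SqlCommand.CommandTimeout property, which defaults to 30 seconds. A small tweak (not part of the original sample) makes the error appear sooner:

   // CommandTimeout is in seconds; 0 means wait indefinitely.
   cmd.CommandTimeout = 5;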

The first step, before doing this experiment, is to open SQL Server 2000 Enterprise Manager, then open SQL Profiler (the event tracing tool) and start a trace.

In the second step, run the modified program; its execution will stall partway through. Quickly switch to the Enterprise Manager window, right-click "Current Activity" under "Management", and select "Refresh" (the whole process should be completed within roughly 15 seconds; see Figure 2-8). This gives us a snapshot of the database's current processes.

Figure 2-8 Using Enterprise Manager to view current activity

We find that a process is blocked at this moment: process 52 is the blocked process, and process 53 is the blocker. In other words, process 53's work is preventing process 52 from continuing. (The process numbers may vary between runs.)

In the third step, to pinpoint the cause, we switch to the SQL Profiler window to see what the two processes are doing. As Figure 2-9 shows, Profiler displays the details of both processes. From the trace we can see that process 52 corresponds to our transaction 1 and process 53 to our transaction 2. Transaction 2 executed the UPDATE command but has not yet committed, and transaction 1 is blocked trying to read the uncommitted data. The trace shows that process 52 is blocked.

At this point, if transaction 2 committed, process 52 could stop waiting and get its result. But our program does not commit, so process 52 would wait indefinitely. Fortunately, the blocked command eventually times out (this is the error message quoted above), transaction 2 is rolled back, and the resources it occupied are released. Once they are released, process 52 can proceed.

Figure 2-9 SQL Profiler showing the blocking commands

In the fourth step, having understood what happened, we can now look more closely at how shared and exclusive locks are used. Back in Enterprise Manager, let's see which resources each of the two processes holds. As Figures 2-10 and 2-11 show, process 53 (transaction 2) placed an exclusive lock (X lock) on the corresponding key before executing the UPDATE command, and that lock is released only after transaction 2 commits or rolls back, exactly as the Level 1 locking protocol prescribes. Process 52 (transaction 1) then wants to read the same row of data; following the Level 2 locking protocol, it first requests a shared lock on the row, but the row already carries transaction 2's exclusive lock, so transaction 1 can only wait for the exclusive lock to be released. Hence the "blocking" we saw above.

Figure 2-10 Before a process performs a write, it first takes an exclusive lock

Figure 2-11 The process requests a shared lock before reading, but is blocked

When transaction 1's isolation level is ReadUncommitted, its reads take no locks at all, so the exclusive lock has no effect on it: the process is not blocked, but it reads "dirty" data.
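If you prefer a query to Enterprise Manager's graphical view, a hedged alternative: SQL Server 2000's built-in sp_lock procedure returns the same lock information and can be run from a separate, third connection:

   // Minimal sketch: inspect current locks programmatically via sp_lock.
   // monitorConn is an assumed separate open SqlConnection.
   SqlCommand lockCmd = new SqlCommand("EXEC sp_lock", monitorConn);
   using (SqlDataReader r = lockCmd.ExecuteReader())
   {
       while (r.Read())
           Console.WriteLine("spid={0} type={1} mode={2} status={3}",
               r["spid"], r["Type"], r["Mode"], r["Status"]);
   }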

2.3.2 RepeatableRead

RepeatableRead means repeatable reads; it is a higher isolation level than ReadCommitted. It allows a transaction to read the same data repeatedly and see it unchanged, but it still does not solve the phantom read problem. To understand exactly which problems RepeatableRead solves, we use the following experiment:

In the first step, both transaction 1 and transaction 2 are set to ReadCommitted, and both transactions are opened.

   private static void Setup()
   {
       conn1 = new SqlConnection(connectionString);
       conn1.Open();
       tx1 = conn1.BeginTransaction(IsolationLevel.ReadCommitted);

       conn2 = new SqlConnection(connectionString);
       conn2.Open();
       tx2 = conn2.BeginTransaction(IsolationLevel.ReadCommitted);
   }

In the second step, transaction 1 reads the data in the database. Note that transaction 1 is not ended by a commit or rollback at this point; it is still active.

   private static int ReadAgeByTransaction1()
   {
       return (int)ExecuteScalar("SELECT age FROM student WHERE (id = 1)");
   }

   private static object ExecuteScalar(string command)
   {
       Console.WriteLine("--Execute command: {0}", command);
       SqlCommand cmd = new SqlCommand(command, conn1);
       cmd.Transaction = tx1;
       return cmd.ExecuteScalar();
   }

In the third step, transaction 2 modifies the age data and commits the change.

   private static void ModifyAgeByTransaction2()
   {
       string command = "UPDATE student SET age=30 WHERE id=1";
       Console.WriteLine("--Modify age by transaction 2, command: {0}", command);
       SqlCommand cmd = new SqlCommand(command, conn2);
       cmd.Transaction = tx2;
       try
       {
           cmd.ExecuteNonQuery();
           tx2.Commit();
       }
       catch (Exception e)
       {
           Console.WriteLine(e.Message);
           tx2.Rollback();
       }
   }

In the fourth step, transaction 1 reads the age data again and finds that what it reads is the modified data, different from what it read the last time! As the name suggests, the read is not repeatable. The main program is as follows:

   public static void Main()
   {
       Setup();
       try
       {
           int age1 = ReadAgeByTransaction1();
           ModifyAgeByTransaction2();
           int age2 = ReadAgeByTransaction1();
           Console.WriteLine("\nFirst read: age={0}\nSecond read: age={1}", age1, age2);
       }
       catch (Exception e)
       {
           Console.WriteLine("Got an error! " + e.Message);
       }
       finally
       {
           Cleanup();
       }
   }

The results of the program's operation are as follows:

   --Execute command: SELECT age FROM student WHERE (id = 1)
   --Modify age by transaction 2, command: UPDATE student SET age=30 WHERE id=1
   --Execute command: SELECT age FROM student WHERE (id = 1)

   First read: age=20
   Second read: age=30

The value read the second time differs from the first because transaction 1 uses the ReadCommitted isolation level, which cannot prevent non-repeatable reads. If you want two reads within one transaction to return exactly the same data, you must use the RepeatableRead isolation level.

Let's modify the code in the Setup() method above to set the isolation level of transaction 1 to RepeatableRead:

tx1 = conn1.BeginTransaction(IsolationLevel.RepeatableRead);

Run the program again and you'll notice that it pauses at the second step; after waiting a while you'll see the error "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." This time, however, the repeated read returns exactly the same data as the first read. The output is as follows:

   --Execute command: SELECT age FROM student WHERE (id = 1)
   --Modify age by transaction 2, command: UPDATE student SET age=30 WHERE id=1
   Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
   --Execute command: SELECT age FROM student WHERE (id = 1)

   First read: age=20
   Second read: age=20

To find out why, proceed as before: run the program again, quickly switch to Enterprise Manager when the pause occurs to view the current activity snapshot, and inspect the locks involved in the blocking. You will see what Figures 2-12 and 2-13 show:

Figure 2-12 Under RepeatableRead, reads take an S lock that is held until the end of the transaction

Figure 2-13 Modifying the data requires an X lock, but the request is blocked

According to the Level 3 locking protocol, a transaction must place an S lock on data before reading it and hold the lock until the end of the transaction. Transaction 1 therefore takes a shared lock on the data at its first read; because the transaction does not end after that read, the shared lock is not released. Transaction 2 then tries to modify the data and, per the Level 1 locking protocol, must take an X lock before writing; but the shared lock on the data has not been released, so transaction 2 can only wait. When transaction 2's wait times out, it is rolled back. Although transaction 2 failed to execute, transaction 1 was guaranteed isolation at the repeatable-read level.

The RepeatableRead isolation level guarantees repeatable reads within a transaction, but it does not prevent phantom reads. If phantom reads are a potential problem for your program, you must use the highest transaction isolation level: Serializable.

2.3.3 Serializable

The Serializable isolation level is the highest transaction isolation level. At this level there are no dirty reads, no non-repeatable reads, and no phantom reads. Before explaining why in detail, let's first look at what a phantom read is.

A phantom read means: after transaction 1 reads some records from the database under a certain condition, transaction 2 inserts new records that match transaction 1's search condition; when transaction 1 reads again under the same condition, it finds additional records. Let's reproduce the phantom read problem with the following example:

In the first step, both transaction 1 and transaction 2 are set to the RepeatableRead isolation level, and both transactions are opened.

   private static void Setup()
   {
       conn1 = new SqlConnection(connectionString);
       conn1.Open();
       tx1 = conn1.BeginTransaction(IsolationLevel.RepeatableRead);

       conn2 = new SqlConnection(connectionString);
       conn2.Open();
       tx2 = conn2.BeginTransaction(IsolationLevel.RepeatableRead);
   }

In the second step, transaction 1 reads the average mark of the student with ID 1 and the number of courses the student takes. At this point the student takes 3 courses with an average mark of 73.67. Note that transaction 1 is not committed at this point.

   private static double ReadAverageMarksByTransaction1()
   {
       return (double)ExecuteScalar("SELECT AVG(mark) AS avgMark FROM SC WHERE (id = 1)");
   }

   private static int ReadTotalCoursesByTransaction1()
   {
       return (int)ExecuteScalar("SELECT COUNT(*) AS num FROM SC WHERE (id = 1)");
   }

   private static object ExecuteScalar(string command)
   {
       Console.WriteLine("--Execute command: {0}", command);
       SqlCommand cmd = new SqlCommand(command, conn1);
       cmd.Transaction = tx1;
       return cmd.ExecuteScalar();
   }

In the third step, transaction 2 inserts a new record into the database, enrolling student 1 in one more course with a mark of 80, and then commits the change.

   private static void InsertRecordByTransaction2()
   {
       string command = "INSERT INTO SC VALUES (1, 5, 80)";
       Console.WriteLine("--Insert to table SC by transaction 2");
       Console.WriteLine("--command: {0}\n", command);
       SqlCommand cmd = new SqlCommand(command, conn2);
       cmd.Transaction = tx2;
       try
       {
           cmd.ExecuteNonQuery();
           tx2.Commit();
       }
       catch (Exception e)
       {
           Console.WriteLine(e.Message);
           tx2.Rollback();
       }
   }

In the fourth step, transaction 1 again reads student 1's average mark and course count. This time it reads 4 courses with an average mark of 75.25, different from the first read! One more course has appeared, like a mirage right in front of us. The test's main program is as follows:

   public static void Main()
   {
       Setup();
       try
       {
           Console.WriteLine(">>>> Step 1");
           double avg = ReadAverageMarksByTransaction1();
           int total = ReadTotalCoursesByTransaction1();
           Console.WriteLine("avg={0,5:F2}, total={1}\n", avg, total);

           Console.WriteLine(">>>> Step 2");
           InsertRecordByTransaction2();

           Console.WriteLine(">>>> Step 3");
           avg = ReadAverageMarksByTransaction1();
           total = ReadTotalCoursesByTransaction1();
           Console.WriteLine("avg={0,5:F2}, total={1}\n", avg, total);
       }
       catch (Exception e)
       {
           Console.WriteLine("Got an error! " + e.Message);
       }
       finally
       {
           Cleanup();
       }
   }

The results of the procedure are as follows:

   >>>> Step 1
   --Execute command: SELECT AVG(mark) AS avgMark FROM SC WHERE (id = 1)
   --Execute command: SELECT COUNT(*) AS num FROM SC WHERE (id = 1)
   avg=73.67, total=3

   >>>> Step 2
   --Insert to table SC by transaction 2
   --command: INSERT INTO SC VALUES (1, 5, 80)

   >>>> Step 3
   --Execute command: SELECT AVG(mark) AS avgMark FROM SC WHERE (id = 1)
   --Execute command: SELECT COUNT(*) AS num FROM SC WHERE (id = 1)
   avg=75.25, total=4

Think about why the RepeatableRead isolation mode did not make the two reads return the same average. (The phenomenon can be explained from the perspective of locks.)

As before, let's see what is actually going on. Before exploring, set transaction 1's isolation level in the Setup method to Serializable, run the program again, and when the program pauses, view the snapshot of current database activity. You will find the locking situation shown in Figures 2-14 and 2-15:

Figure 2-14 Serializable isolation mode places RangeS-S locks on the data matching the search condition

Figure 2-15 An attempt to insert a record into the locked key range can only wait

As the figures show, in Serializable isolation mode the database places RangeS-S shared locks on all records that satisfy the search condition when it retrieves the data. When transaction 2 tries to insert a record that falls within that key range, it must wait for the RangeS-S locks to be released; until then it can only wait. After the wait times out, transaction 2 is rolled back.

The results of the modified program run as follows:

   >>>> Step 1
   --Execute command: SELECT AVG(mark) AS avgMark FROM SC WHERE (id = 1)
   --Execute command: SELECT COUNT(*) AS num FROM SC WHERE (id = 1)
   avg=73.67, total=3

   >>>> Step 2
   --Insert to table SC by transaction 2
   --command: INSERT INTO SC VALUES (1, 5, 80)
   Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

   >>>> Step 3
   --Execute command: SELECT AVG(mark) AS avgMark FROM SC WHERE (id = 1)
   --Execute command: SELECT COUNT(*) AS num FROM SC WHERE (id = 1)
   avg=73.67, total=3

The failed run of transaction 2 guarantees that transaction 1 experiences no phantom reads. Note that the Level 1, 2, and 3 locking protocols by themselves cannot guarantee a solution to the phantom read problem.

2.4 Recommendations

Through the examples above, we now understand better what the database does to solve concurrency problems. The locking mechanism is the lowest-level guarantee, but it is hard to use directly. By choosing transaction isolation levels, we use the locking mechanism indirectly to keep our data consistent. When working with the different isolation levels, keep the following points in mind:

    • In general, the ReadCommitted isolation level is sufficient. An excessively high isolation level locks too many resources and hurts the efficiency of data sharing.
    • Which isolation level to choose depends on your system and its business logic.
    • Avoid using locks directly unless you have no other choice.
    • We can implement different update strategies by controlling which fields appear in the WHERE clause, and so prevent lost updates. However, an overly defensive update strategy can make SQL commands inefficient, so use timestamps and large numbers of guard fields as update criteria with care (a sketch of the timestamp approach follows below).
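A minimal sketch of the timestamp-based strategy mentioned in the last point, assuming the student table has a SQL Server timestamp (rowversion) column named ts; the column name and helper method are illustrative assumptions:

   // Optimistic update: succeed only if the row is unchanged since it was read.
   private static bool UpdateAge(SqlConnection conn, int id, int newAge, byte[] originalTs)
   {
       SqlCommand cmd = new SqlCommand(
           "UPDATE student SET age = @age WHERE id = @id AND ts = @ts", conn);
       cmd.Parameters.Add("@age", SqlDbType.Int).Value = newAge;
       cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
       cmd.Parameters.Add("@ts", SqlDbType.Timestamp).Value = originalTs;
       // Zero rows affected means another transaction changed the row first;
       // the caller can re-read and retry instead of silently losing the update.
       return cmd.ExecuteNonQuery() == 1;
   }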

