C# Database Transactions: Principles and Practice (Part 2)

Having just learned how transactions work, a newcomer to database transaction processing may be brimming with confidence and eager to apply the transaction mechanism to every module of his data-processing program. Indeed, the transaction mechanism looks very attractive: concise, elegant, and practical. Of course he wants to use it to avoid every possible error, and may even want to wrap all of his data operations, from start to finish, in transactions.

Let's take a look. I will start by creating a database:

using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class DbTran
    {
        // Execute transaction processing
        public void DoTran()
        {
            // Create a connection and open it
            SqlConnection myConn = GetConn();
            myConn.Open();

            SqlCommand myComm = new SqlCommand();
            SqlTransaction myTran;

            myTran = myConn.BeginTransaction();

            // Bind the connection and the transaction object to the command
            myComm.Connection = myConn;
            myComm.Transaction = myTran;

            // Try to create a database named testdb
            myComm.CommandText = "CREATE DATABASE testdb";
            myComm.ExecuteNonQuery();

            // Commit the transaction
            myTran.Commit();
        }

        // Get a database connection
        private SqlConnection GetConn()
        {
            string strSql = "Data Source=localhost;Integrated Security=SSPI;user id=sa;password=";
            SqlConnection myConn = new SqlConnection(strSql);
            return myConn;
        }
    }

    public class Test
    {
        public static void Main()
        {
            DbTran tranTest = new DbTran();
            tranTest.DoTran();
            Console.WriteLine("Transaction processing completed successfully.");
            Console.ReadLine();
        }
    }
}

//---------------

Unhandled exception: System.Data.SqlClient.SqlException: CREATE DATABASE statement not allowed within multi-statement transaction.

   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Aspcn.DbTran.DoTran()
   at Aspcn.Test.Main()

Note: The following SQL statements cannot appear inside a transaction:

ALTER DATABASE       Modify a database
BACKUP LOG           Back up the transaction log
CREATE DATABASE      Create a database
DISK INIT            Create a database or transaction log device
DROP DATABASE        Delete a database
DUMP TRANSACTION     Dump the transaction log
LOAD DATABASE        Load a database backup copy
LOAD TRANSACTION     Load a transaction log backup copy
RECONFIGURE          Update the current value (the config_value column in the sp_configure result set) of a configuration option changed with the sp_configure system stored procedure
RESTORE DATABASE     Restore a database from a backup created with the BACKUP command
RESTORE LOG          Restore the log from a backup created with the BACKUP command
UPDATE STATISTICS    Update the key-value distribution information for one or more statistics groups (collections) in the specified table or indexed view

Apart from these statements, any valid SQL statement can be used inside a database transaction.
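If a program really does need to create a database and also use a transaction, one simple approach is to execute the CREATE DATABASE statement on its own, before the transaction is started, and wrap only the remaining work in the transaction. The following is a minimal sketch of that idea (my own illustration, not part of the original article); it reuses the localhost connection and the testdb name from the listings in this article.

using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class DbCreateThenTran
    {
        public static void Main()
        {
            SqlConnection myConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI");
            myConn.Open();

            // CREATE DATABASE runs in autocommit mode, outside any explicit transaction.
            SqlCommand myComm = new SqlCommand("CREATE DATABASE testdb", myConn);
            myComm.ExecuteNonQuery();

            // Only now start the transaction, for statements that are allowed inside one.
            SqlTransaction myTran = myConn.BeginTransaction();
            myComm.Transaction = myTran;
            myComm.CommandText = "USE testdb";
            myComm.ExecuteNonQuery();
            myComm.CommandText = "CREATE TABLE demo (id INT PRIMARY KEY)";
            myComm.ExecuteNonQuery();
            myTran.Commit();

            myConn.Close();
        }
    }
}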
Transaction rollback

Atomicity is one of the four properties of a transaction: a transaction, being a specific sequence of operations, is either performed completely or not performed at all. If an unexpected error occurs in the middle of transaction processing, how do we ensure the transaction's atomicity? When a transaction is aborted, a rollback must be performed to undo the effect that the already-executed operations have had on the database.

In general, the rollback is best placed in the exception-handling code. Earlier we wrote a database update program and verified that it works; with a few small changes we get:

// Rollback.cs
using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class DbTran
    {
        // Execute transaction processing
        public void DoTran()
        {
            // Create a connection and open it
            SqlConnection myConn = GetConn();
            myConn.Open();

            SqlCommand myComm = new SqlCommand();
            SqlTransaction myTran;

            // Create a transaction
            myTran = myConn.BeginTransaction();
            // From this point on, data operations on this connection are considered part of the transaction.
            // Bind the connection and the transaction object to the command
            myComm.Connection = myConn;
            myComm.Transaction = myTran;

            try
            {
                // Switch to the pubs database
                myComm.CommandText = "USE pubs";
                myComm.ExecuteNonQuery();

                myComm.CommandText = "UPDATE roysched SET royalty = royalty * 1.10 WHERE title_id LIKE 'PC%'";
                myComm.ExecuteNonQuery();

                // The following CREATE DATABASE statement is used to deliberately cause an error
                myComm.CommandText = "CREATE DATABASE testdb";
                myComm.ExecuteNonQuery();

                myComm.CommandText = "UPDATE roysched SET royalty = royalty * 1.20 WHERE title_id LIKE 'PS%'";
                myComm.ExecuteNonQuery();

                // Commit the transaction
                myTran.Commit();
            }
            catch (Exception err)
            {
                myTran.Rollback();
                Console.Write("Transaction operation error, rolled back. System message: " + err.Message);
            }
        }

        // Get a database connection
        private SqlConnection GetConn()
        {
            string strSql = "Data Source=localhost;Integrated Security=SSPI;user id=sa;password=";
            SqlConnection myConn = new SqlConnection(strSql);
            return myConn;
        }
    }

    public class Test
    {
        public static void Main()
        {
            DbTran tranTest = new DbTran();
            tranTest.DoTran();
            Console.WriteLine("Transaction processing completed successfully.");
            Console.ReadLine();
        }
    }
}

We deliberately introduce an error in the middle by using the CREATE DATABASE statement described earlier. The catch block that handles the exception then contains this statement:

myTran.Rollback();

When the exception occurs, program flow jumps to the catch block, and its first statement is exactly this one: it rolls back the current transaction. Note that before the CREATE DATABASE statement there is already an update to the database: the royalty field of every book in the roysched table of the pubs database whose title_id starts with "PC" is increased by 10%. But because the exception triggers a rollback, this change never reaches the database. The Rollback() method preserves the consistency of the database and the atomicity of the transaction.

Transaction savepoints

Rolling a whole transaction back is only a safeguard for the worst case. In practice the system normally operates quite reliably and errors rarely occur, so checking the validity of every operation before the transaction executes is too costly; in most cases this time-consuming check is simply unnecessary. We need another way to improve efficiency.

Transaction savepoints provide a mechanism for rolling back part of a transaction. Instead of checking the validity of an update before performing it, we can set a savepoint beforehand; if the update succeeds, execution simply continues, otherwise we roll back to the savepoint set before the update. That is the role of a savepoint. Note that performing an update and then rolling it back is expensive, so savepoints pay off only when an error is very unlikely and checking the validity of the update in advance would be relatively costly.

When programming with the .NET Framework, it is easy to define transaction savepoints and to roll back to a specific savepoint. The following statement defines a savepoint named "noupdate":

myTran.Save("noupdate");

If you later create another savepoint with the same name, the new savepoint replaces the original one.

To roll back to the savepoint, simply use an overload of the Rollback() method:

myTran.Rollback("noupdate");

The following program shows how and when to roll back to a savepoint:

using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class DbTran
    {
        // Execute transaction processing
        public void DoTran()
        {
            // Create a connection and open it
            SqlConnection myConn = GetConn();
            myConn.Open();

            SqlCommand myComm = new SqlCommand();
            SqlTransaction myTran;

            // Create a transaction
            myTran = myConn.BeginTransaction();
            // From this point on, data operations on this connection are considered part of the transaction.
            // Bind the connection and the transaction object to the command
            myComm.Connection = myConn;
            myComm.Transaction = myTran;

            try
            {
                myComm.CommandText = "USE pubs";
                myComm.ExecuteNonQuery();

                myTran.Save("noupdate");

                myComm.CommandText = "UPDATE roysched SET royalty = royalty * 1.10 WHERE title_id LIKE 'PC%'";
                myComm.ExecuteNonQuery();

                // Commit the transaction
                myTran.Commit();
            }
            catch (Exception err)
            {
                // The update failed; roll back to the specified savepoint
                myTran.Rollback("noupdate");
                throw new ApplicationException("Transaction operation error. System message: " + err.Message);
            }
        }

        // Get a database connection
        private SqlConnection GetConn()
        {
            string strSql = "Data Source=localhost;Integrated Security=SSPI;user id=sa;password=";
            SqlConnection myConn = new SqlConnection(strSql);
            return myConn;
        }
    }

    public class Test
    {
        public static void Main()
        {
            DbTran tranTest = new DbTran();
            tranTest.DoTran();
            Console.WriteLine("Transaction processing completed successfully.");
            Console.ReadLine();
        }
    }
}

Obviously, in this program the chance that the update is invalid is very small, while verifying its validity beforehand would be fairly expensive, so we skip the up-front check and instead rely on the transaction savepoint mechanism to ensure data integrity.

Isolation levels

An enterprise-level database may have to cope with thousands of concurrent accesses per second, which creates concurrency-control problems. According to database theory, concurrent access can cause the following unexpected problems at unpredictable times:

Dirty read: a read that returns uncommitted data. For example, transaction 1 changes a row, and transaction 2 reads the changed row before transaction 1 commits the change. If transaction 1 then rolls back the change, transaction 2 has read a row that logically never existed.

Non-repeatable read: a transaction reads the same row more than once, and a separate transaction modifies the row between the reads. Because the row changes between reads within the same transaction, each read returns a different value, which causes inconsistency.

Phantom: a new row is inserted into, or an existing row is deleted from, the range of rows being read by another task that has not yet committed its transaction. Because the number of rows in the range has changed, the task with the uncommitted transaction cannot repeat its original read.

As you would expect, the root cause of these problems is that nothing prevents concurrent accesses from interfering with one another. Isolation levels exist to avoid these situations. The degree to which a transaction is prepared to accept inconsistent data is called its isolation level; it is the degree to which the transaction must be isolated from other transactions. A lower isolation level increases concurrency, but at the cost of data correctness; a higher isolation level ensures data correctness, but can hurt concurrency.

Depending on the isolation level, the DBMS provides different mutual-exclusion guarantees for concurrent access. SQL Server provides four isolation levels: Read Uncommitted, Read Committed, Repeatable Read, and Serializable. These four levels guarantee the integrity of concurrently accessed data to different degrees:

Isolation level      Dirty read    Non-repeatable read    Phantom
Read Uncommitted     Yes           Yes                    Yes
Read Committed       No            Yes                    Yes
Repeatable Read      No            No                     Yes
Serializable         No            No                     No

As the table shows, Serializable provides the highest level of isolation: concurrent transactions produce exactly the same result as if they had executed serially. As mentioned above, the highest isolation level also means the lowest concurrency, so at this level the database actually serves requests relatively inefficiently. Although serializability matters for transactions that must guarantee the data is correct at every moment, many transactions do not require full isolation. For example, several authors may work on different chapters of the same book, and new chapters can be submitted to the project at any time; however, an author may not change a chapter that has already been edited without the editor's approval. Even though unedited new chapters exist, the editor can still be sure the book project is correct at any point in time, because the editor sees the previously edited chapters plus the most recently submitted ones. In this way, the other isolation levels also have their place.
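To make the first row of the table concrete, here is a minimal sketch (my own illustration, not from the original article) in which one connection updates the pubs database without committing, while a second connection running at the Read Uncommitted level reads the uncommitted value; a Read Committed reader would normally be blocked until the writer finished. The table and column names are the same ones used in the earlier listings.

using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class DirtyReadDemo
    {
        public static void Main()
        {
            string strSql = "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=pubs";

            SqlConnection writerConn = new SqlConnection(strSql);
            SqlConnection readerConn = new SqlConnection(strSql);
            writerConn.Open();
            readerConn.Open();

            // Transaction 1 changes some rows but does not commit yet.
            SqlTransaction writeTran = writerConn.BeginTransaction();
            SqlCommand writeComm = new SqlCommand(
                "UPDATE roysched SET royalty = royalty * 2 WHERE title_id LIKE 'PC%'",
                writerConn, writeTran);
            writeComm.ExecuteNonQuery();

            // Transaction 2, at Read Uncommitted, can already see the uncommitted change: a dirty read.
            SqlTransaction readTran = readerConn.BeginTransaction(IsolationLevel.ReadUncommitted);
            SqlCommand readComm = new SqlCommand(
                "SELECT MAX(royalty) FROM roysched WHERE title_id LIKE 'PC%'",
                readerConn, readTran);
            Console.WriteLine("Value seen by the dirty reader: " + readComm.ExecuteScalar());
            readTran.Commit();

            // Transaction 1 rolls back, so the value the reader saw never logically existed.
            writeTran.Rollback();

            readerConn.Close();
            writerConn.Close();
        }
    }
}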

In the .NET Framework, the transaction isolation level is defined by the System.Data.IsolationLevel enumeration:

[Flags]
[Serializable]
public enum IsolationLevel

Its members and meanings are as follows:

Chaos: the pending changes of more highly isolated transactions cannot be overwritten.
ReadCommitted: shared locks are held while the data is being read, which avoids dirty reads; but the data can be changed before the end of the transaction, resulting in non-repeatable reads or phantom data.
ReadUncommitted: dirty reads are possible, meaning that no shared locks are issued and no exclusive locks are honored.
RepeatableRead: locks are placed on all data used in the query, preventing other users from updating it. Non-repeatable reads are prevented, but phantom rows are still possible.
Serializable: a range lock is placed on the data set, preventing other users from updating or inserting rows into it until the transaction is complete.
Unspecified: an isolation level other than the one specified is being used, but the level cannot be determined.

Obviously, the four database isolation levels described above all map onto this enumeration.

By default, SQL Server uses the ReadCommitted (Read Committed) isolation level.
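To request a different level, pass a System.Data.IsolationLevel value to BeginTransaction() when the transaction is created; every command bound to that transaction then runs under the requested level. A minimal sketch (my own illustration, not code from the original article):

using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class IsolationDemo
    {
        public static void Main()
        {
            SqlConnection myConn = new SqlConnection("Data Source=localhost;Integrated Security=SSPI");
            myConn.Open();

            // Ask for Serializable instead of the ReadCommitted default.
            SqlTransaction myTran = myConn.BeginTransaction(IsolationLevel.Serializable);
            SqlCommand myComm = new SqlCommand();
            myComm.Connection = myConn;
            myComm.Transaction = myTran;

            myComm.CommandText = "USE pubs";
            myComm.ExecuteNonQuery();

            // This query runs under the Serializable level.
            myComm.CommandText = "SELECT COUNT(*) FROM roysched";
            Console.WriteLine("Rows read under Serializable: " + myComm.ExecuteScalar());

            myTran.Commit();
            myConn.Close();
        }
    }
}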

A final note about isolation levels: if you change the isolation level while a transaction is executing, the subsequent statements are executed under the most recently set level; the change takes effect immediately. Using this, you can apply isolation levels more flexibly within a transaction to obtain both higher efficiency and concurrency safety.
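For example, continuing the sketch above (myComm is still bound to myConn and the open transaction), the level can be changed in the middle of the transaction simply by issuing the corresponding T-SQL statement; this is again an illustration, not code from the original article:

// SET TRANSACTION ISOLATION LEVEL takes effect for the statements that follow it.
myComm.CommandText = "SET TRANSACTION ISOLATION LEVEL REPEATABLE READ";
myComm.ExecuteNonQuery();

// This query now runs under Repeatable Read rather than Serializable.
myComm.CommandText = "SELECT COUNT(*) FROM roysched";
Console.WriteLine("Rows read under Repeatable Read: " + myComm.ExecuteScalar());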

A final word of advice

Undoubtedly, introducing transaction processing is a good way to deal with possible data errors, but we should also recognize its considerable cost: the CPU time and storage space required for savepoints, rollback, and concurrency control.

The content of this article applies only to the Microsoft SQL Server database and the corresponding System.Data.SqlClient namespace of the .NET Framework. The OLE DB implementation differs slightly, but that is beyond the scope of this article. If you are interested, you can visit www.aspcn.com.
