Another error: going to extremes
Confident novices sometimes revel in newly mastered knowledge, and would-be developers who have only just encountered database transaction processing are often tempted to apply the transaction mechanism to every single module of their data-access code. Transactions do look tempting: simple, elegant and practical, a way to guard against every possible mistake. Why not wrap every data operation, from start to finish, in one?
Let's start by trying to create a database inside a transaction:
using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class DbTran
    {
        // Perform the transaction processing
        public void DoTran()
        {
            // Establish and open the connection
            SqlConnection myConn = GetConn();
            myConn.Open();

            SqlCommand myComm = new SqlCommand();
            SqlTransaction myTran;

            myTran = myConn.BeginTransaction();

            // Bind the connection and transaction objects
            myComm.Connection = myConn;
            myComm.Transaction = myTran;

            // Attempt to create database TestDB
            myComm.CommandText = "CREATE DATABASE TestDB";
            myComm.ExecuteNonQuery();

            // Commit the transaction
            myTran.Commit();
        }

        // Get a data connection
        private SqlConnection GetConn()
        {
            string strSql = "Data Source=localhost;Integrated Security=SSPI;User ID=sa;Password=";
            SqlConnection myConn = new SqlConnection(strSql);
            return myConn;
        }
    }

    public class Test
    {
        public static void Main()
        {
            DbTran tranTest = new DbTran();
            tranTest.DoTran();
            Console.WriteLine("Transaction processing completed successfully.");
            Console.ReadLine();
        }
    }
}
Running this program produces the following error:
Unhandled Exception: System.Data.SqlClient.SqlException: CREATE DATABASE statement is not allowed within a multiple-statement transaction.
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Aspcn.DbTran.DoTran()
   at Aspcn.Test.Main()
Note that SQL statements such as the following are not allowed to appear in a transaction:
ALTER DATABASE: modifies a database
BACKUP LOG: backs up the transaction log
CREATE DATABASE: creates a database
DISK INIT: creates a database or transaction-log device
DROP DATABASE: drops a database
DUMP TRANSACTION: dumps the transaction log
LOAD DATABASE: loads a copy of a database backup
LOAD TRANSACTION: loads a copy of a transaction-log backup
RECONFIGURE: updates the currently configured value (the config_value column in the sp_configure result set) of a configuration option changed with the sp_configure system stored procedure
RESTORE DATABASE: restores a database backup made with the BACKUP command
RESTORE LOG: restores a log backup made with the BACKUP command
UPDATE STATISTICS: updates information about the distribution of key values for one or more statistics groups (collections) in the specified table or indexed view
Apart from these statements, any legal SQL statement can be used within a transaction.
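Statements such as CREATE DATABASE must therefore be issued outside of any explicit transaction, where each command is auto-committed on its own. A minimal sketch (the server name and database name are illustrative; it assumes a local SQL Server instance reachable with integrated security):

```csharp
using System;
using System.Data.SqlClient;

public class CreateDbDemo
{
    public static void Main()
    {
        // Note: no BeginTransaction() call here -- CREATE DATABASE must run
        // outside of any explicit transaction.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=localhost;Integrated Security=SSPI;"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("CREATE DATABASE TestDB", conn);
            cmd.ExecuteNonQuery();  // auto-committed, so no exception is raised
            Console.WriteLine("Database created.");
        }
    }
}
```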
Transaction rollback
One of the four ACID properties of a transaction is atomicity: a transaction, as a particular sequence of operations, either completes in full or has no effect at all. How can atomicity be guaranteed if an unexpected error occurs in the middle of transaction processing? When a transaction is aborted, a rollback must be performed to erase the effect that the already-executed operations have had on the database.
In general, the right place for the rollback is in the exception handler. With a few small changes, the earlier program becomes:
RollBack.cs

using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class DbTran
    {
        // Perform the transaction processing
        public void DoTran()
        {
            // Establish and open the connection
            SqlConnection myConn = GetConn();
            myConn.Open();

            SqlCommand myComm = new SqlCommand();
            SqlTransaction myTran;

            // Create the transaction; from here on, data operations
            // on this connection are considered part of the transaction
            myTran = myConn.BeginTransaction();

            // Bind the connection and transaction objects
            myComm.Connection = myConn;
            myComm.Transaction = myTran;

            try
            {
                // Switch to the pubs database
                myComm.CommandText = "use pubs";
                myComm.ExecuteNonQuery();

                myComm.CommandText = "UPDATE roysched SET royalty = royalty * 1.10 WHERE title_id LIKE 'pc%'";
                myComm.ExecuteNonQuery();

                // Provoke an error with the CREATE DATABASE statement
                myComm.CommandText = "CREATE DATABASE TestDB";
                myComm.ExecuteNonQuery();

                myComm.CommandText = "UPDATE roysched SET royalty = royalty * 1.20 WHERE title_id LIKE 'ps%'";
                myComm.ExecuteNonQuery();

                // Commit the transaction
                myTran.Commit();
            }
            catch (Exception err)
            {
                myTran.Rollback();
                Console.Write("Transaction operation error, rolled back. System information: " + err.Message);
            }
        }

        // Get a data connection
        private SqlConnection GetConn()
        {
            string strSql = "Data Source=localhost;Integrated Security=SSPI;User ID=sa;Password=";
            SqlConnection myConn = new SqlConnection(strSql);
            return myConn;
        }
    }

    public class Test
    {
        public static void Main()
        {
            DbTran tranTest = new DbTran();
            tranTest.DoTran();
            Console.WriteLine("Transaction processing completed successfully.");
            Console.ReadLine();
        }
    }
}
First, we deliberately planted an error in the middle: the CREATE DATABASE statement discussed above. Then, in the catch block that handles the exception, we placed the statement:
myTran.Rollback();
When the exception occurs, the program's execution jumps to the catch block, and the first statement executed there rolls back the current transaction. Notice that before the CREATE DATABASE statement the program has already updated the database: it multiplied by 1.10 the royalty field of every row in the roysched table of the pubs database whose title_id starts with "pc". The rollback triggered by the exception undoes that update, so as far as the database is concerned it never happened. In this way, the Rollback() method maintains the consistency of the database and the atomicity of the transaction.
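The same rollback pattern is usually written today with using blocks, so the connection is disposed even if the rollback itself throws. A hedged sketch of that variant, reusing the article's pubs example (the connection string is an assumption, as before):

```csharp
using System;
using System.Data.SqlClient;

public class RollbackDemo
{
    public static void DoTran()
    {
        // The using block guarantees the connection is closed on any exit path.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=pubs"))
        {
            conn.Open();
            SqlTransaction tran = conn.BeginTransaction();
            SqlCommand cmd = new SqlCommand();
            cmd.Connection = conn;
            cmd.Transaction = tran;
            try
            {
                cmd.CommandText =
                    "UPDATE roysched SET royalty = royalty * 1.10 " +
                    "WHERE title_id LIKE 'pc%'";
                cmd.ExecuteNonQuery();
                tran.Commit();
            }
            catch (Exception err)
            {
                tran.Rollback();  // undo the partial update
                Console.WriteLine("Transaction error, rolled back: " + err.Message);
            }
        }
    }
}
```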
Using savepoints
A transaction is only a safety net for the worst case. In practice, systems are quite reliable and errors rarely occur, so checking the validity of every operation before executing it is too expensive; most of the time the time-consuming check is unnecessary. We need another way to improve efficiency.
A transaction savepoint provides a mechanism for rolling back part of a transaction. Instead of checking an update's validity before performing it, we set a savepoint, perform the update, and then either continue if no error occurred or roll back to the savepoint if one did. Note that both updates and rollbacks are costly: savepoints pay off only when the probability of an error is small and the cost of checking an update's validity in advance is relatively high.
When programming with the .NET Framework, you can easily define transaction savepoints and roll back to a specific savepoint. The following statement defines a savepoint named "NoUpdate":

myTran.Save("NoUpdate");
If you create a second savepoint with the same name, the new savepoint replaces the existing one.
To roll back to the savepoint, you simply use an overload of the Rollback() method:

myTran.Rollback("NoUpdate");
The following program illustrates how and when to roll back to a savepoint:
using System;
using System.Data;
using System.Data.SqlClient;

namespace Aspcn
{
    public class DbTran
    {
        // Perform the transaction processing
        public void DoTran()
        {
            // Establish and open the connection
            SqlConnection myConn = GetConn();
            myConn.Open();

            SqlCommand myComm = new SqlCommand();
            SqlTransaction myTran;

            // Create the transaction; from here on, data operations
            // on this connection are considered part of the transaction
            myTran = myConn.BeginTransaction();

            // Bind the connection and transaction objects
            myComm.Connection = myConn;
            myComm.Transaction = myTran;

            try
            {
                myComm.CommandText = "use pubs";
                myComm.ExecuteNonQuery();

                myTran.Save("NoUpdate");

                myComm.CommandText = "UPDATE roysched SET royalty = royalty * 1.10 WHERE title_id LIKE 'pc%'";
                myComm.ExecuteNonQuery();

                // Commit the transaction
                myTran.Commit();
            }
            catch (Exception err)
            {
                // Update failed; roll back to the named savepoint
                myTran.Rollback("NoUpdate");
                throw new ApplicationException("Transaction operation error. System information: " + err.Message);
            }
        }

        // Get a data connection
        private SqlConnection GetConn()
        {
            string strSql = "Data Source=localhost;Integrated Security=SSPI;User ID=sa;Password=";
            SqlConnection myConn = new SqlConnection(strSql);
            return myConn;
        }
    }

    public class Test
    {
        public static void Main()
        {
            DbTran tranTest = new DbTran();
            tranTest.DoTran();
            Console.WriteLine("Transaction processing completed successfully.");
            Console.ReadLine();
        }
    }
}
In this program the probability of an invalid update is very small, while checking its validity in advance would be quite expensive, so we skip the up-front validation and rely on the transaction's savepoint mechanism to guarantee data integrity.
The concept of isolation levels
An enterprise-class database may handle thousands of concurrent accesses per second, which raises concurrency-control problems. As database theory shows, concurrent access can produce several well-known anomalies at unpredictable times:
Dirty reads: reading uncommitted data. For example, transaction 1 changes a row, and transaction 2 reads the changed row before transaction 1 commits the change. If transaction 1 then rolls back, transaction 2 has read a row that never logically existed.
Non-repeatable reads: a transaction reads the same row more than once, and a separate transaction modifies the row between two (or more) of the reads. Because the row changes between reads within the same transaction, each read returns a different value, producing an inconsistency.
Phantom reads: one task inserts new rows into, or deletes existing rows from, a range of rows previously read by another task that has not yet committed its transaction. The task with the uncommitted transaction cannot repeat its original read, because the number of rows in the range has changed.
The root cause of all these anomalies is the absence of a mechanism that keeps concurrent accesses from interleaving. Isolation levels exist to prevent them. The isolation level is the degree to which a transaction must be isolated from other transactions, or equivalently, the level of inconsistent data a transaction is prepared to accept. A lower isolation level increases concurrency at the cost of data correctness; conversely, a higher isolation level ensures correct data but can hurt concurrency.
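A dirty read can be provoked deliberately by opening a second connection at a weak isolation level while the first holds an uncommitted change. This sketch assumes the pubs sample database on a local SQL Server; the column values are illustrative:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class DirtyReadDemo
{
    public static void Main()
    {
        string connStr =
            "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=pubs";

        using (SqlConnection writer = new SqlConnection(connStr))
        using (SqlConnection reader = new SqlConnection(connStr))
        {
            writer.Open();
            reader.Open();

            // Transaction 1 changes a row but does not commit yet.
            SqlTransaction tran1 = writer.BeginTransaction();
            new SqlCommand(
                "UPDATE roysched SET royalty = 999 WHERE title_id LIKE 'pc%'",
                writer, tran1).ExecuteNonQuery();

            // Transaction 2, at ReadUncommitted, sees the uncommitted
            // value: a dirty read.
            SqlTransaction tran2 =
                reader.BeginTransaction(IsolationLevel.ReadUncommitted);
            object dirty = new SqlCommand(
                "SELECT TOP 1 royalty FROM roysched WHERE title_id LIKE 'pc%'",
                reader, tran2).ExecuteScalar();
            Console.WriteLine("Dirty value seen: " + dirty);

            tran1.Rollback();  // the value transaction 2 read never logically existed
            tran2.Commit();
        }
    }
}
```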
Depending on the isolation level, the DBMS gives concurrent accesses different mutual-exclusion guarantees. SQL Server offers four isolation levels: read uncommitted, read committed, repeatable read, and serializable. They guard against the concurrency anomalies to different degrees:
Isolation level    | Dirty read | Non-repeatable read | Phantom
Read uncommitted   | Yes        | Yes                 | Yes
Read committed     | No         | Yes                 | Yes
Repeatable read    | No         | No                  | Yes
Serializable       | No         | No                  | No
As the table shows, serializable provides the highest level of isolation: at this level, the result of executing concurrent transactions is exactly the same as executing them serially. As mentioned earlier, though, the highest isolation level means the lowest concurrency, so database throughput actually suffers at this level. Although serializability matters for transactions that must see correct data at every moment, many transactions do not require complete isolation. Consider several authors working on different chapters of the same book. New chapters can be submitted to the project at any time, but once a chapter has been edited, the author may not change it without the editor's approval. The editor can thus guarantee the correctness of the book project at any time, despite the presence of new, unedited chapters: the editor sees the previously edited chapters plus the recently submitted ones. The lesser isolation levels are meaningful in just this way.
In the .NET Framework, the isolation level of a transaction is defined by the System.Data.IsolationLevel enumeration:
[Flags]
[Serializable]
public enum IsolationLevel
Its members and the corresponding meanings are as follows:
Chaos: the pending changes of more highly isolated transactions cannot be overwritten.
ReadCommitted: shared locks are held while the data is being read, avoiding dirty reads, but the data can be changed before the transaction ends, allowing non-repeatable reads or phantom rows.
ReadUncommitted: dirty reads are possible; no shared locks are issued and no exclusive locks are honored.
RepeatableRead: locks are placed on all data used in the query, preventing other users from updating it. Non-repeatable reads are prevented, but phantom rows are still possible.
Serializable: a range lock is placed on the data set, preventing other users from updating rows or inserting rows into the set until the transaction completes.
Unspecified: a different isolation level than the one specified is being used, but the level cannot be determined.
The database's four isolation levels map directly onto these enumeration members.
By default, SQL Server uses the ReadCommitted (read committed) isolation level.
One last point about isolation levels: if you change the isolation level during a transaction, subsequent statements execute at the new level; the change takes effect immediately. This lets you use isolation levels flexibly within a transaction to achieve both higher efficiency and concurrency safety.
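In ADO.NET you pick the level when the transaction starts, by passing an IsolationLevel value to BeginTransaction, and you can change it mid-transaction with a SET TRANSACTION ISOLATION LEVEL statement. A hedged sketch under the same local-server and pubs-database assumptions as the earlier listings:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public class IsolationDemo
{
    public static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=pubs"))
        {
            conn.Open();

            // Start at the (default) ReadCommitted level.
            SqlTransaction tran = conn.BeginTransaction(IsolationLevel.ReadCommitted);
            SqlCommand cmd = new SqlCommand();
            cmd.Connection = conn;
            cmd.Transaction = tran;

            cmd.CommandText = "SELECT COUNT(*) FROM roysched";
            cmd.ExecuteScalar();  // runs at ReadCommitted

            // Raise the level for the rest of the transaction;
            // the change takes effect immediately.
            cmd.CommandText = "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE";
            cmd.ExecuteNonQuery();

            cmd.CommandText = "SELECT COUNT(*) FROM roysched";
            cmd.ExecuteScalar();  // runs at Serializable

            tran.Commit();
        }
    }
}
```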
A final piece of advice
Transactions are undoubtedly a good way to handle potential data errors, but you should also see their substantial cost: the CPU time and storage needed for savepoints, rollbacks, and concurrency control. Use them where atomicity genuinely matters, not reflexively on every data operation.