------------------------------------------------------------------------
--Author: happyflystone
--Date: 2009-09-27 21:36:30
--Version: Microsoft SQL Server 2005 - 9.00.2047.00 (Intel X86)
--         Apr 14 2006 01:12:25
--         Copyright (c) 1988-2005 Microsoft Corporation
--         Enterprise Edition on Windows NT 5.2 (Build 3790: Service Pack 2)
--Reprint: please credit the source. For more, see: http://blog.csdn.net/happyflystone
--Keywords: isolation level, lock
------------------------------------------------------------------------
IV. Isolation Levels
First of all, isolation means that the resources a transaction uses, and the data changes it makes, must be kept separate from those of other transactions; the isolation level is simply the degree of that separation. Talking about isolation levels inevitably brings up the concept of locks, but I will not go into locks in this installment; they will be explained in later chapters, so for now just keep a general impression. Two things to understand here:
1. The isolation level does not affect the exclusive lock a process acquires when it modifies data; that lock is always held until the end of the transaction. The isolation level is, from the reader's point of view, a level of protection for read operations: it determines how far reads are shielded from the effects of other transactions.
2. A lower isolation level increases the ability of many users to access data at the same time, but it also increases the number of concurrency side effects users may encounter (such as dirty reads or lost updates). Conversely, a higher isolation level reduces those side effects, but consumes more system resources and raises the chance that one transaction will block others.
You should balance the application's data-integrity requirements against the system overhead of the corresponding isolation level, and choose on that basis. The highest isolation level (SERIALIZABLE) guarantees that a transaction retrieves exactly the same data every time it repeats a read operation, but it achieves this through a degree of locking that may affect other user processes. The lowest isolation level (READ UNCOMMITTED) may retrieve data that has been modified but not yet committed by another transaction. Under uncommitted reads all concurrency side effects can occur, but the overhead is minimal because reads take no locks and are not blocked by writers.
The isolation level determines which data side effects we are willing to tolerate, and the concurrency model determines how those behaviors are constrained or coordinated at each level. So let's focus on how each isolation level limits the occurrence of these behaviors.
READ UNCOMMITTED (uncommitted read): Literally, modified but uncommitted data can be read. More precisely: one user process can read data that another user process has modified but not yet committed. At this level SQL Server reads data without acquiring any locks; because it requests no locks it never blocks against any other process, and it naturally reads uncommitted data that other processes have modified. Obviously this is not an ideal mode for us, but concurrency is high, because lock-free reads affect neither the reads nor the writes of other processes. At this level, every unexpected behavior except the lost update (the behaviors that may occur, discussed in the previous installment) can happen. It accepts the risk of inconsistent data in exchange for writers never blocking readers, so transactional consistency certainly cannot be guaranteed; this is clearly a workaround for the frequent blocking of the pessimistic concurrency model. Uncommitted reads are certainly unsuitable for stock-trading or financial systems, but in trend-analysis systems, where only the trend matters and accuracy need not be strict, this level is the first choice because of its concurrency performance.
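A minimal two-session sketch of the dirty read described above (the table and column names Orders, OrderID, and Amount are hypothetical, chosen only for illustration):

```sql
-- Session 1: modify a row inside an open transaction, but do not commit yet.
BEGIN TRANSACTION;
UPDATE Orders SET Amount = 999 WHERE OrderID = 1;
-- (transaction still open; the row holds an exclusive lock)

-- Session 2: under READ UNCOMMITTED the read takes no lock and does not block.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Amount FROM Orders WHERE OrderID = 1;  -- may see 999: a dirty read

-- Session 1: roll back; session 2 has now read data that never logically existed.
ROLLBACK TRANSACTION;
```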
READ COMMITTED: As opposed to uncommitted read, the read-committed level guarantees that one process cannot read data that another process has modified but not yet committed. This is the engine's default level, and it is also the level at which SQL Server 2005 supports the optimistic concurrency model; that is, read committed can be either optimistic or pessimistic. Which concurrency model a given database uses at this level depends on the READ_COMMITTED_SNAPSHOT database option, and the default is pessimistic concurrency control. This option determines whether transactions at the read-committed level use locking or row versioning; row versioning is clearly the optimistic concurrency mode, and locking the pessimistic one. Let's look at an example:
--Set read committed isolation to use row versioning
ALTER DATABASE Testcsdn SET READ_COMMITTED_SNAPSHOT ON
GO
--View the current read-committed concurrency model
SELECT name, database_id, is_read_committed_snapshot_on FROM sys.databases
/*
name                        database_id is_read_committed_snapshot_on
--------------------------- ----------- -----------------------------
master                      1           0
tempdb                      2           0
model                       3           0
msdb                        4           0
ReportServer$SQL2005        5           0
ReportServer$SQL2005TempDB  6           0
Testcsdn                    7           1    -- current database

(7 rows affected)
*/
--Set read committed isolation back to locking
ALTER DATABASE Testcsdn SET READ_COMMITTED_SNAPSHOT OFF
GO
--View the current read-committed concurrency model
SELECT name, database_id, is_read_committed_snapshot_on FROM sys.databases
/*
name                        database_id is_read_committed_snapshot_on
--------------------------- ----------- -----------------------------
master                      1           0
tempdb                      2           0
model                       3           0
msdb                        4           0
ReportServer$SQL2005        5           0
ReportServer$SQL2005TempDB  6           0
Testcsdn                    7           0    -- current database

(7 rows affected)
*/
Read committed logically guarantees that data which never really existed is not read. Under pessimistic read committed, when a process modifies data it requests an exclusive lock on the data rows, and other processes (readers and writers alike) must wait for that exclusive lock to be released before they can use the data. A process that is only reading takes shared locks; other processes can also read the data, but cannot update it until the shared lock is released. The shared lock is released as soon as the data has been processed (for example, a row-level shared lock is freed automatically once the current row has been read) and does not persist for the whole transaction. Optimistic read committed likewise ensures that uncommitted data is not read, but not through locking: the row versioning mechanism generates a version of each row before it is changed, and while the modified data remains exclusively locked, other processes can read the pre-update version.
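The difference between the two modes can be sketched with two sessions (the table and column names Accounts, AccountID, and Balance are hypothetical; the database option is assumed to have been set as in the sample above):

```sql
-- Session 1: hold an exclusive lock on a row inside an open transaction.
BEGIN TRANSACTION;
UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;

-- Session 2, at the default READ COMMITTED level:
-- with READ_COMMITTED_SNAPSHOT OFF (locking), this SELECT blocks on the
--   exclusive lock until session 1 commits or rolls back;
-- with READ_COMMITTED_SNAPSHOT ON (row versioning), it returns immediately
--   with the last committed, pre-update version of the row.
SELECT Balance FROM Accounts WHERE AccountID = 1;

-- Session 1: release the exclusive lock.
COMMIT TRANSACTION;
```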
REPEATABLE READ (repeatable read): This is also a pessimistic concurrency level. Repeatable read is stricter than read committed, adding one restriction on top of it: the shared locks that are acquired are held until the end of the transaction. Under this restriction, a process that reads the same data twice within a single transaction cannot see modifications made by other processes in between. We noted here that shared locks are held to the end of the transaction; it must be stated that, regardless of level and concurrency model, exclusive locks are always held until the transaction ends. Shared locks at the repeatable read level are likewise retained until the end of the transaction. So the guarantee of data safety comes at the cost of retaining shared locks: once a transaction has started, no other user process can modify the data it has read, and the concurrency and performance of the system inevitably decline. This may look like the level of our dreams, even though it still cannot prevent phantom reads, and even if we accept the drop in concurrency and performance, it places strict demands on the programmer's handling of transactions: keep transactions short, avoid human delays inside them, and reduce potential lock contention.
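The held shared lock, and the phantom it does not prevent, can be sketched as follows (Orders, OrderID, and Amount are again hypothetical names):

```sql
-- Session 1:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT Amount FROM Orders WHERE OrderID = 1;  -- shared lock taken and HELD

-- Session 2: an UPDATE of OrderID = 1 now blocks on that shared lock,
-- but an INSERT of a brand-new row matching the same predicate does not
-- (that is the phantom this level cannot prevent).

-- Session 1: a repeated read is guaranteed to return the same value.
SELECT Amount FROM Orders WHERE OrderID = 1;
COMMIT TRANSACTION;  -- the shared lock is released only here
```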
SNAPSHOT (snapshot): An optimistic concurrency level, newly added in SQL Server 2005. The snapshot level is almost identical to read committed under optimistic concurrency; the difference lies in how far back the row version store keeps data versions, which will be covered later when we discuss locks. This level guarantees that the data a transaction reads is transactionally consistent with the state of the database at the moment the transaction began. Read operations need no shared locks, and if the required data is currently locked, the most recent consistent version is read from the row version store.
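A minimal sketch of using the snapshot level, assuming the same Testcsdn database as the sample above and a hypothetical Orders table (note that, unlike READ_COMMITTED_SNAPSHOT, this level requires the separate ALLOW_SNAPSHOT_ISOLATION option):

```sql
-- Snapshot isolation must first be enabled at the database level:
ALTER DATABASE Testcsdn SET ALLOW_SNAPSHOT_ISOLATION ON
GO

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- Every read in this transaction sees the database as it was when the
-- transaction began, served from the row version store in tempdb;
-- no shared locks are taken, so writers are never blocked by this read.
SELECT Amount FROM Orders WHERE OrderID = 1;
COMMIT TRANSACTION;
```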
SERIALIZABLE: Currently the strictest and most robust level, and a pessimistic one. It prevents phantom reads and avoids all of the unexpected behaviors discussed earlier. Serializable means the system behaves as if transactions ran one after another in queue order: the result of concurrent execution is consistent with some serial execution of the same transactions. At this most robust level, shared locks are also held from the start of the transaction to its end, and even data that does not yet exist is locked (via key-range locks on the index), which is how phantoms are prevented.
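The key-range lock mentioned above can be sketched like this (Orders and OrderID are hypothetical names; an index on OrderID is assumed, since key-range locks are taken on index keys):

```sql
-- Session 1:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM Orders WHERE OrderID BETWEEN 10 AND 20;
-- Key-range locks now cover the range 10..20 on the OrderID index, so
-- another session's INSERT of a row with OrderID = 15 blocks until this
-- transaction ends; a repeat of the COUNT therefore cannot see a phantom.
SELECT COUNT(*) FROM Orders WHERE OrderID BETWEEN 10 AND 20;
COMMIT TRANSACTION;
```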
Today's task is done. In the first two installments I have only put the theoretical material together; next I will walk through the unexpected behaviors combined with the isolation levels using concrete examples, and then start organizing the material on locks.
Please keep an eye on my blog: http://blog.csdn.net/happyflystone.
Transactions and Locks in SQL Server 2005 (II) (reprint)