One of the biggest challenges in developing a multi-user database application is allowing the maximum degree of concurrent access while ensuring that each user reads and modifies the data in a consistent manner. Supporting many concurrent accesses requires a locking mechanism, and concurrency control mechanisms are required to guarantee consistent reads and modifications.
1. Concurrency Control
Concurrency control is the set of functions the database provides to allow multiple users to access and modify data simultaneously. Locks are one of the core mechanisms Oracle uses to manage shared access to database resources and to prevent concurrent database transactions from interfering with each other. In summary, Oracle uses several types of locks, including:
TX lock: a transaction that modifies data acquires this lock for the duration of its execution.
TM lock and DDL lock: these locks ensure that the structure of an object is not changed while you are modifying its contents (TM lock) or the object itself (DDL lock).
Latch: a lightweight internal Oracle lock used to coordinate access to its shared data structures.
Oracle's support for concurrency relies not only on efficient locking but also on a multi-version architecture, which provides controlled yet highly concurrent access to data. Multi-versioning is the ability to materialize multiple versions of the data at the same time, and it is the mechanism by which Oracle provides read-consistent views of data. Multi-versioning has a pleasant side effect: a reader of data is never blocked by a writer. In other words, writes do not block reads. This is a fundamental difference between Oracle and many other databases.
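The non-blocking read behavior can be sketched with two concurrent Oracle sessions (the table name `accounts` and its data are hypothetical, chosen only for illustration):

```sql
-- Session 1: update a row but do not commit yet.
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Session 2: this query is NOT blocked by Session 1's uncommitted update.
-- Oracle rebuilds the pre-update value of the row from undo, so the query
-- immediately returns the last committed balance.
SELECT balance FROM accounts WHERE id = 1;

-- Session 1: make the change permanent; later queries will see it.
COMMIT;
```

In a database that uses shared read locks instead, Session 2 would be forced to wait until Session 1 commits or rolls back.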
By default, Oracle's read-consistent, multi-version view of the data is provided at the statement level, that is, for each individual query; it can also be raised to the transaction level. The basic function of a transaction in a database is to take the database from one consistent state to another. The ISO SQL standard specifies transaction isolation levels, which define how "sensitive" one transaction is to modifications made by other transactions. The more sensitivity an application requires, the more isolation the database must provide between the transactions it executes.
2. Transaction Isolation Levels
The ANSI/ISO SQL standard defines four transaction isolation levels. The same transaction may produce different results under different isolation levels; that is, even with the same inputs and the same way of accomplishing the same work, you may get different answers depending on the transaction's isolation level. The isolation levels are defined in terms of three phenomena. The following are the three phenomena that a given isolation level may permit or forbid:
A) Dirty read: you can read uncommitted, or "dirty", data. To achieve the dirty-read effect, you only need to open an OS file that someone else is in the middle of writing (regardless of what data it holds) and read it. Allowing dirty reads compromises data integrity: foreign key constraints may be violated and uniqueness constraints ignored.
B) Non-repeatable read: a row you read at time T1 may be different when you re-read it at time T2: it may have disappeared, it may have been updated, and so on. The changes involved have been committed, which is what distinguishes this from a dirty read.
C) Phantom read: if you execute a query at time T1 and re-execute it at time T2, additional rows may have been added to the database in the meantime, which affects your results. The difference from a non-repeatable read is that in a phantom read the data you have already read does not change; rather, more data satisfies your query criteria than before.
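A phantom read can be sketched with two sessions (the table `orders`, its columns, and the row counts are hypothetical):

```sql
-- Session 1: count matching rows inside a transaction.
SELECT COUNT(*) FROM orders WHERE status = 'PENDING';   -- suppose it returns 5

-- Session 2: insert a new matching row and commit.
INSERT INTO orders (id, status) VALUES (101, 'PENDING');
COMMIT;

-- Session 1: re-run the same query in the same transaction.
-- At an isolation level that permits phantoms, the count is now 6:
-- the rows read earlier are unchanged, but one more row matches.
SELECT COUNT(*) FROM orders WHERE status = 'PENDING';
```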
The SQL isolation levels are described in terms of these phenomena; the standard does not mandate any particular locking mechanism or behavior. This allows different locking/concurrency mechanisms to coexist.
Table 1: ANSI isolation levels

Isolation level     Dirty read   Non-repeatable read   Phantom read
READ UNCOMMITTED    allowed      allowed               allowed
READ COMMITTED      -            allowed               allowed
REPEATABLE READ     -            -                     allowed
SERIALIZABLE        -            -                     -
The SQL isolation levels say that READ COMMITTED cannot provide consistent results, because it may permit non-repeatable reads and phantom reads. In Oracle, however, READ COMMITTED has all the attributes required to obtain read-consistent queries. In addition, Oracle also adheres to the "spirit" of READ UNCOMMITTED: some databases provide dirty reads in order to support non-blocking reads, that is, so that queries are neither blocked by updates to the same data nor block those updates. Oracle does not need dirty reads to achieve this goal and does not support them; other databases, however, must resort to dirty reads to provide non-blocking reads.
In addition to the four isolation levels defined by SQL, Oracle offers another level, called READ ONLY. A READ ONLY transaction is equivalent to a REPEATABLE READ or SERIALIZABLE transaction that is not allowed to modify any data in SQL. A transaction at the READ ONLY isolation level sees only the modifications that were committed when the transaction began; INSERT, UPDATE, and DELETE statements are not permitted (other sessions may update the data, but not the READ ONLY transaction itself). Using this mode, you get the read consistency of the REPEATABLE READ and SERIALIZABLE levels.
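A READ ONLY transaction is started explicitly with SET TRANSACTION; the sketch below uses a hypothetical table `emp`:

```sql
SET TRANSACTION READ ONLY;

-- Every query in this transaction sees the data as it was committed
-- when the transaction began, regardless of what other sessions commit.
SELECT COUNT(*) FROM emp;

-- An INSERT, UPDATE, or DELETE issued here would fail with an error
-- (Oracle rejects DML inside a READ ONLY transaction).
COMMIT;   -- ends the read-only transaction
```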
The following describes the isolation levels.
2.1 READ UNCOMMITTED
This isolation level permits dirty reads, but Oracle does not use or even allow dirty reads. In fact, the fundamental goal of READ UNCOMMITTED in the standard is to provide a standards-based definition that accommodates non-blocking reads. Oracle supports non-blocking reads by default. A dirty read is not a feature but a liability, and Oracle does not need it at all: Oracle obtains every advantage of dirty reads (namely, no blocking) without any of the incorrect results.
How is this implemented? Suppose one transaction is modifying the data in a table when another transaction starts to query it. Using multi-versioning, Oracle works from a consistent copy of each modified block that contains the original, unmodified values: it effectively bypasses the modified data, reconstructing the original values from the undo (rollback) segments instead of reading the in-flight changes. The query can therefore return a consistent, correct answer without waiting for the modifying transaction to commit.
Databases that allow dirty reads would simply read the modified, uncommitted data.
2.2 READ COMMITTED
The READ COMMITTED isolation level means that a transaction can only read data that has been committed in the database. There are no dirty reads, but there may be non-repeatable reads (reading the same row twice in the same transaction may return different answers) and phantom reads (a query may see newly inserted, committed rows that did not exist earlier in the transaction). READ COMMITTED is probably the most common isolation level in database applications, and it is the default mode for Oracle databases; it is rare to see any other isolation level in use.
In Oracle, because of multi-versioning and read-consistent queries, a query against the same table behaves identically under READ COMMITTED and READ UNCOMMITTED. Oracle reconstructs modified data as it existed at the moment the query began, restoring it to its "original face", so the database returns the answer as of the start of the query.
In another database running at the READ COMMITTED isolation level, a query that encounters an uncommitted change must wait until that transaction commits; and because reads are not repeatable, the final result may be incorrect.
Using a test table t, you can observe a non-repeatable read in Oracle at the READ COMMITTED isolation level. The steps are as follows (each line happens after the line above it):
Session 1                                    Session 2
create table t (x int);
insert into t values (1);
insert into t values (2);
commit;
delete from t where x = 2;
  -- (starts a transaction)
                                             update t set x = 10 where x = 1;
select * from t;  -- returns x = 1
                                             commit;
select * from t;  -- returns x = 10
commit;
As you can see, Oracle's READ COMMITTED still returns different results for the two queries. I do not know why the book says the same result is returned; I hope someone can clarify this.
(The same result can only be obtained by setting the isolation level to SERIALIZABLE, and it must be set at the beginning of each transaction.)
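Setting SERIALIZABLE per transaction uses standard Oracle syntax. Repeating the second half of the experiment above under it (the expected results in the comments follow from transaction-level read consistency, assuming the same table t with x = 1 committed):

```sql
-- Must be the first statement of the transaction.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

SELECT * FROM t;   -- returns x = 1
-- (another session updates x to 10 and commits here)
SELECT * FROM t;   -- still returns x = 1: transaction-level consistency
COMMIT;

-- Alternatively, make SERIALIZABLE the default for the whole session:
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
```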
2.3 REPEATABLE READ
The goal of REPEATABLE READ is to provide an isolation level that yields consistent, correct answers and prevents lost updates.
Consistent reads:
At the REPEATABLE READ isolation level, the results of a given query should be consistent with respect to some point in time. Most databases (not including Oracle) achieve repeatable reads with low-level shared read locks: a shared read lock prevents other sessions from modifying data that we have already read, which of course reduces concurrency. Oracle instead uses its more concurrent multi-version model to provide read-consistent answers.
In Oracle, through multi-versioning, the results you obtain are consistent with respect to the point in time at which the query began. In databases that use shared read locks, the answer is instead consistent with respect to the point in time at which the query completes, that is, with the moment the results are delivered.
However, using shared read locks to obtain consistent results has a side effect: readers of data block writers of data, which hurts concurrency. Another side effect is that readers frequently deadlock with writers.
So in Oracle you get statement-level read consistency without reads blocking writes and without deadlocks. Oracle never uses shared read locks; it chose the multi-version mechanism, which, although harder to implement, allows far more concurrency.
Lost updates:
In databases with shared read locks, a common use of REPEATABLE READ is to prevent lost updates. If REPEATABLE READ is enabled in a database that uses shared read locks (rather than multi-versioning), lost updates cannot occur: a lock is placed on the selected data, so once a transaction has read it, no other transaction can modify it. If your application relies on REPEATABLE READ meaning "lost updates are impossible", it will be painful to discover, when the application is ported to a database that does not use shared read locks as its underlying concurrency control, that the behavior is not what you expected.
Although shared read locks may sound attractive, remember that holding a shared read lock on all data while it is being read severely limits concurrent reads and modifications. So although this isolation level prevents lost updates in these databases, it does so at the cost of crippling concurrent operations. In these databases, you cannot have it both ways.