1. Dirty read: a dirty read occurs when one transaction has modified data but has not yet committed the modification to the database, and another transaction then accesses and uses that data.
For example:
Zhang San's salary was 5000, and transaction A changed his salary to 8000, but transaction A has not yet committed.
Meanwhile
Transaction B reads Zhang San's salary and sees 8000.
Then
Transaction A hits an exception and is rolled back, so Zhang San's salary is rolled back to 5000.
At last
The salary of 8000 that transaction B read is dirty data, so transaction B has performed a dirty read.
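A minimal sketch of this sequence in SQL (SQL Server syntax, since a dirty read requires the READ UNCOMMITTED isolation level; the employee table and its columns are hypothetical):

-- Session A: modify the salary but do not commit yet.
BEGIN TRANSACTION;
UPDATE employee SET salary = 8000 WHERE name = 'Zhang San';

-- Session B: permit dirty reads, then read the uncommitted 8000.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT salary FROM employee WHERE name = 'Zhang San';

-- Session A: the exception occurs and the transaction rolls back;
-- the 8000 that session B read was dirty data.
ROLLBACK;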
2. Non-repeatable read: refers to reading the same data multiple times within one transaction. Before this transaction finishes, another transaction also accesses and modifies the same data. Because of the second transaction's modification, the two reads in the first transaction may return different data. When the data read twice within one transaction is not the same, this is called a non-repeatable read.
For example:
In transaction A, Zhang San's salary is read as 5000; the operation is not finished and the transaction is not committed.
Meanwhile
Transaction B changes Zhang San's salary to 8000 and commits.
Then
In transaction A, Zhang San's salary is read again, and it is now 8000. The two reads within one transaction return different results, which is a non-repeatable read.
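A minimal sketch of the sequence in SQL (SQL Server style transactions; the employee table is hypothetical):

-- Session A:
BEGIN TRANSACTION;
SELECT salary FROM employee WHERE name = 'Zhang San';  -- returns 5000

-- Session B: change the salary and commit while session A is still open.
BEGIN TRANSACTION;
UPDATE employee SET salary = 8000 WHERE name = 'Zhang San';
COMMIT;

-- Session A: same query, same transaction, different result.
SELECT salary FROM employee WHERE name = 'Zhang San';  -- now returns 8000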
3. Phantom read: a phenomenon that occurs when transactions do not execute independently. For example, the first transaction modifies data in a table, touching all rows of the table; at the same time, a second transaction inserts a new row into the same table. The user running the first transaction then finds an unmodified row still in the table, as if it had appeared out of an illusion.
For example:
Suppose there are currently 10 employees with a salary of 5000, and transaction A reads all employees whose salary is 5000, counting 10 people.
At this time
Transaction B inserts a record with a salary of 5000.
At this point, transaction A reads the employees with a salary of 5000 again and counts 11 people. This produces a phantom read.
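And a sketch of the phantom read in SQL (the inserted employee name is made up for illustration):

-- Session A:
BEGIN TRANSACTION;
SELECT COUNT(*) FROM employee WHERE salary = 5000;  -- returns 10

-- Session B: insert a new matching row and commit.
INSERT INTO employee (name, salary) VALUES ('Li Si', 5000);
COMMIT;

-- Session A: the same count now returns 11; a phantom row has appeared.
SELECT COUNT(*) FROM employee WHERE salary = 5000;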
4. Reminders
The key point of a non-repeatable read is modification:
under the same query conditions, the data you already read returns different values when read again.
The key point of a phantom read is insertion or deletion:
under the same query conditions, the number of records returned by the first and second reads differs.
(1) From the database system's point of view, locks are divided into the following three types:
Exclusive lock
A resource under an exclusive lock may only be used by the operation that locked it; any other operation on it will be refused. SQL Server automatically uses exclusive locks when executing the data-modification commands INSERT, UPDATE, and DELETE. An exclusive lock cannot be acquired, however, while other locks exist on the object. Exclusive locks are not released until the end of the transaction.
Shared lock
A resource under a shared lock can be read by other users, but they cannot modify it. When a SELECT command executes, SQL Server usually locks the object with a shared lock. The shared lock is released as soon as the data page holding it has been read.
Update lock
The update lock exists to prevent deadlocks. When SQL Server prepares to update data, it first locks the data object with an update lock, so that the data cannot be modified but can still be read. Once SQL Server determines that it will actually perform the update, it automatically converts the update lock into an exclusive lock. An update lock cannot be acquired, however, while other locks exist on the object.
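As a minimal sketch (SQL Server syntax; the table and row are hypothetical), a session can also request an update lock explicitly with the UPDLOCK table hint:

BEGIN TRANSACTION;
-- Take an update lock: other sessions can still read the row,
-- but a second update-lock or exclusive-lock request must wait.
SELECT value FROM test WITH (UPDLOCK) WHERE id = 10;
-- The actual write converts the update lock into an exclusive lock.
UPDATE test SET value = 2 WHERE id = 10;
COMMIT;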
(2) From the programmer's point of view, locks are divided into the following two types:
Pessimistic lock
Pessimistic locking, as its name implies, takes a conservative attitude toward the chance that the data will be modified by the outside world (including other transactions in the current system, as well as transactions from external systems), so it keeps the data locked throughout the entire processing of the data. A pessimistic lock implementation usually relies on the locking mechanism provided by the database; only a lock at the database layer can truly guarantee exclusive access to the data. Otherwise, even if the application implements its own locking mechanism, there is no guarantee that an external system will not modify the data.
Optimistic lock
Compared with pessimistic locking, the optimistic locking mechanism is more relaxed. Pessimistic locking relies in most cases on the database's lock mechanism to guarantee the greatest possible exclusivity for an operation, but this brings considerable overhead for database performance; for long transactions in particular, the cost is often unsustainable.
The optimistic locking mechanism solves this problem to some extent. Optimistic locking is mostly implemented with a data-version (version) recording mechanism. What is a data version? It is a version identifier added to the data; in the solution based on a database table, this is typically a "version" field added to the table. When the data is read, the version number is read along with it, and when the data is later updated, the version number is incremented by one. At that point, the version of the submitted data is compared with the current version recorded for that row in the database table: if the submitted version number is greater than the current version in the table, the update is applied; otherwise the data is considered stale.
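A minimal sketch of this check in SQL, assuming a numeric version column has been added to a hypothetical test table and :old_version holds the version number read at the start:

-- Read the row together with its version number.
SELECT value, version FROM test WHERE id = 10;

-- Later, apply the update only if nobody has bumped the version since.
UPDATE test
SET value = 2, version = version + 1
WHERE id = 10
AND version = :old_version;
-- If this statement updates 0 rows, another transaction got there first
-- and the data must be treated as stale.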
2. How locks are used in the database
First, the pessimistic lock. In many databases such as SQL Server, data locking is usually page-level, which means that updates and inserts into a table are serialized: at any moment only one piece of data can be inserted into the same table, and other pending inserts must wait until that piece has been written. The result is reduced performance: under concurrent multi-user access, a frequently operated table responds very slowly and the database often appears to hang. Oracle, by contrast, uses row-level locks: it locks only the data it needs to lock, and the rest of the data is unaffected, so inserting data into an Oracle table has essentially no impact on other sessions.
Note: pessimistic locking is mostly intended for highly concurrent scenarios; in ordinary applications, optimistic locking is generally enough.
Oracle's pessimistic lock requires an existing connection and comes in two forms, distinguished by the SQL statement: FOR UPDATE and FOR UPDATE NOWAIT.
Let's look at an example. First, create a database table for testing:
CREATE TABLE test (id, name, location, value, CONSTRAINT test_pk PRIMARY KEY (id)) AS SELECT deptno, dname, loc, 1 FROM scott.dept;
Here we copy the data from the DEPT table of Oracle's sample SCOTT user into our test table.
(1) FOR UPDATE format
First, let's see how FOR UPDATE locks data. Execute the following SELECT ... FOR UPDATE statement:
SELECT * FROM test WHERE id = 10 FOR UPDATE;
Once this query has locked the row, open another SQL*Plus window and run the same SQL statement. You will find that SQL*Plus appears to hang: it retrieves no data and returns no result, as if it were stuck. The reason is that the first session locked the data with its SELECT ... FOR UPDATE. Because the lock here is taken in wait mode (whenever NOWAIT is not specified, the mode is wait), the query in the second session (the stuck SQL*Plus) is left waiting. The moment the first session finally commits or rolls back, the second session's result pops out automatically and its FOR UPDATE locks the data in turn.
But if the second session's query is simply SELECT * FROM test WHERE id = 10, that is, without the FOR UPDATE clause that locks the data, it will not block.
(2) FOR UPDATE NOWAIT format
The other scenario: when the data is already locked, that is, after a SELECT ... FOR UPDATE has been executed, what happens if we run FOR UPDATE NOWAIT in another session?
For example, the following SQL statement:
SELECT * FROM test WHERE id = 10 FOR UPDATE NOWAIT;
Because this statement specifies NOWAIT, when it finds the data locked by another session it immediately returns an ORA-00054 error ("resource busy and acquire with NOWAIT specified"). So in a program we can use NOWAIT to determine quickly whether the current data is locked and, if it is, take the appropriate business measures, as in the sketch below.
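A minimal PL/SQL sketch of that pattern; the handling in the exception branch is a placeholder:

DECLARE
  row_locked EXCEPTION;
  PRAGMA EXCEPTION_INIT(row_locked, -54);  -- map ORA-00054 onto our exception
  v_value test.value%TYPE;
BEGIN
  SELECT value INTO v_value FROM test WHERE id = 10 FOR UPDATE NOWAIT;
  -- ... proceed with the update while holding the lock ...
EXCEPTION
  WHEN row_locked THEN
    -- The row is locked by another session: take a business measure,
    -- e.g. tell the caller the record is busy and to try again later.
    DBMS_OUTPUT.PUT_LINE('Record is locked by another session.');
END;
/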
Another question: after we lock the data, what happens when we update or delete it?
For example, let the first session lock the row with id = 10, then execute the following statement in the second session:
UPDATE test SET value = 2 WHERE id = 10;
We find that this UPDATE statement stops and waits there, just like the SELECT ... FOR UPDATE did, and the update only takes effect once the first session releases the lock. Likewise, while you are updating, the data is locked by your UPDATE statement; as long as you have not committed after the update, other sessions cannot lock the data to update it, and so on.
In summary, the pessimistic lock in Oracle uses the database connection to lock the data. In Oracle the performance cost of this row-level lock is very small; just pay attention to your program logic so that you do not accidentally create a deadlock. And because the data is locked promptly, a lot of annoying conflict handling is saved when the data is committed. The downside is that you must hold a database connection the whole time: the connection is occupied for as long as the lock is held and is only released at the end.
In contrast to the pessimistic lock we have the optimistic lock. Optimistic locking assumes from the outset that no data conflict will occur, and only detects conflicts at final commit time.
There are three common ways to implement optimistic locking:
A. When processing begins, copy the entire record into the application; at update time, compare the data currently in the database with the copy obtained before the update began.
If the two are identical, there is no conflict and the update can be committed; otherwise there is a concurrency conflict that must be resolved by business logic (see the sketch below).
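A minimal sketch of approach A in SQL, assuming the bind variables hold the copy of the row taken before the update began (column names as in the test table above):

-- Re-check every column against the copy taken at the start.
UPDATE test
SET value = :new_value
WHERE id = :old_id
AND name = :old_name
AND location = :old_location
AND value = :old_value;
-- 0 rows updated means some column changed in the meantime: a concurrency
-- conflict for the business logic to resolve.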
B. Optimistic locking implemented with a version stamp; this is the approach used in Hibernate.
With a version stamp, you first add a new column, for example a number column, to the database table protected by the optimistic lock; every time the data is updated, the version increases by 1.
For example, suppose two sessions again operate on the same piece of data, and both read it at version 1. When the first session updates the data, at commit time it sees that the current version is still 1, the same version it originally fetched, so the commit goes through and the version number is incremented by 1; the data's version is now 2. When the second session then commits its update, it finds that the version in the database is 2, which no longer matches the version it fetched at the beginning. It therefore knows someone else has updated this data and performs the business handling, for example rolling back the whole transaction.
When using a version stamp, you can validate the stamp on the application side, or validate it on the database side with a trigger. However, the performance cost of a database trigger is relatively high, so validating on the application side is recommended over a trigger.
C. The third approach is similar to the second: a column is also added to the table, but this time it is a timestamp column that stores the time the data was last updated.
From Oracle9i onward, the new TIMESTAMP data type (including TIMESTAMP WITH TIME ZONE) can be used for such timestamps. This timestamp has the highest precision of Oracle's time types, accurate to the microsecond (not yet to the nanosecond). In practice, once database processing time and human think time are added in, microsecond precision is more than enough; in fact, millisecond or even second precision should be no problem.
As with the version stamp, at commit time the current timestamp in the database is compared with the timestamp taken before the update; if they are the same, the update is OK, otherwise there is a version conflict (see the sketch below). If you do not want to write this code in the program, or for some reason cannot add it to the existing program, the timestamp optimistic-lock logic can also be written in a trigger or stored procedure.
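A minimal sketch of approach C, assuming a last_updated timestamp column has been added to the table and :old_ts holds the timestamp read at the start:

-- Apply the update only if the row has not been touched since it was read.
UPDATE test
SET value = :new_value,
last_updated = SYSTIMESTAMP
WHERE id = 10
AND last_updated = :old_ts;
-- 0 rows updated indicates a version conflict, handled as in approach B.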