Database concurrency control technology (1-4)

A database is a shared resource that can be used by many users. User programs could be executed one at a time, serially: at any moment only one program runs and accesses the database, and the others can access it only after that program finishes. But if a program spends much of its time on input/output, the database system sits idle most of the time. To make full use of the database and exploit its nature as a shared resource, multiple users should be allowed to access it in parallel. Then, however, several user programs may operate concurrently on the same data, and if these concurrent operations are not controlled, incorrect data may be read or written and database consistency destroyed. A database management system must therefore provide a concurrency control mechanism, and the quality of that mechanism is one of the important measures of a DBMS's performance. DM uses a locking mechanism to solve concurrency problems: it allows multiple user programs to be running at any moment, while ensuring that they all run in completely isolated environments.
I. Fundamentals of concurrency control
  (1) Overview of concurrency control
Concurrency control is performed in units of transactions.
  1. The unit of concurrency control: the transaction
A transaction is the logical unit of work in a database. It is a user-defined sequence of operations: a transaction may be a set of SQL statements, a single SQL statement, or an entire program.
The start and end of a transaction can be controlled by the user. If the user does not explicitly define the transaction, the database system automatically divides the transaction according to the default rules.
A transaction should have four properties: atomicity, consistency, isolation, and durability (persistence).
(1) Atomicity
Atomicity guarantees that the group of update operations in a transaction is atomic: the operations form a single whole that is either applied to the database in its entirety or not at all; partial completion is impossible. This property holds even across system crashes: during crash recovery, the effects of transactions that were still active at the time of the crash are undone, which preserves atomicity. Before the system modifies any actual data on disk, it first records information about the modification on disk; after a crash, the system uses these records to determine each transaction's state and decide whether to undo all of its modifications or to redo them.
(2) Consistency
Consistency requires that a transaction move the database from one consistent state to another. It is a logical property based on consistency rules; for example, in a transfer operation the account amounts must balance. Enforcing such rules is the programmer's responsibility. Consistency is closely related to atomicity, and the consistency property further requires that consistency still hold when transactions execute concurrently. That requirement is not logically independent: it is expressed through transaction isolation.
(3) Isolation
Isolation means that the execution of one transaction cannot be interfered with by other transactions: the operations and data inside a transaction are isolated from other concurrent transactions, and concurrently executing transactions cannot disturb one another. Isolation requires that even when several transactions execute concurrently, each successful transaction appears to have executed under a serial schedule. Another name for this property is serializability: every interleaved schedule the system allows must be equivalent to some serial schedule. A serial schedule runs one transaction at a time; no operation of the next transaction may begin before all operations of the current one have finished. For performance we need interleaved schedules, but we also want their effect to be identical to that of some serial schedule. DM achieves this by placing appropriate locks on the data objects a transaction accesses, thereby excluding concurrent operations by other transactions on the same database objects.
(4) Durability
Durability (persistence) guarantees that once a transaction commits, its modifications to the database are permanent: no subsequent machine or system failure should affect them. For example, once an ATM has paid out money to a customer, there must be no risk of losing the record of the withdrawal. Durability ensures that a transaction's effect on the database persists even through a system crash. As with atomicity, the system provides this guarantee through logging.
DM provides no statement that explicitly starts a transaction; the first executable SQL statement (other than a CONNECT statement) implicitly begins one. The end of a transaction, however, can be explicitly controlled by the user. In DM, a transaction ends (normally or abnormally) in the following situations:
(1) When the connection is set to auto-commit, every statement is committed as soon as it executes;
(2) When a COMMIT/ROLLBACK statement is encountered, the transaction is committed/rolled back;
(3) When the system's DDL auto-commit switch is enabled (the default), encountering a DDL statement automatically commits that DDL statement together with the preceding DML and DDL operations;
(4) The user program ends normally and the user disconnects;
(5) The system terminates abnormally.
Note: DM provides the DDL_AUTO_COMMIT switch for DDL statements in its configuration file. When this item is set to 1 (the default), all DDL statements are committed automatically; when it is set to 0, no DDL statements are auto-committed except CREATE DATABASE, ALTER DATABASE, and CREATE SCHEMA.
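As a hedged illustration of rule (3) above (the table and column names are hypothetical, and DM's exact behavior should be checked against its manual), with DDL_AUTO_COMMIT=1 a DDL statement carries pending DML into a commit:

    UPDATE staff SET salary = salary * 1.1;   -- joins the current transaction
    CREATE TABLE staff_log (id INT);          -- DDL: auto-commits itself AND the
                                              -- pending UPDATE; a new transaction
                                              -- then begins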
Consistency in DM is transaction-based. DM uses COMMIT and ROLLBACK to make modifications permanent or to discard them, and in either case it ensures the database is consistent before and after each transaction. To make transaction management more flexible, DM provides the SAVEPOINT statement and a rollback-to-savepoint statement. A savepoint allows flexible partial rollback: during execution, a transaction can be rolled back to a savepoint, so that operations before the savepoint remain valid while the operations after it are undone.
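A minimal savepoint sketch (generic SQL syntax; the account table and names are hypothetical, and DM's exact savepoint syntax should be confirmed in its manual):

    UPDATE account SET balance = balance - 100 WHERE id = 1;
    SAVEPOINT sp1;
    UPDATE account SET balance = balance + 100 WHERE id = 2;
    ROLLBACK TO SAVEPOINT sp1;  -- undoes only the second UPDATE
    COMMIT;                     -- makes the first UPDATE permanent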
Transactions in DM likewise have the four properties of atomicity, consistency, isolation, and durability.
  2. Concurrent operations and data inconsistency
Without locking, when multiple users access the database at the same time and their transactions use the same data simultaneously, problems can arise that leave the database in an inconsistent state.
The most common example of concurrent operations is ticket booking in a train or airline reservation system. Consider the following sequence of activities in such a system:
1. Ticket clerk A reads the remaining ticket count for a flight: A = 16;
2. Ticket clerk B reads the remaining ticket count for the same flight, also 16;
3. Clerk A sells a ticket and writes the new count A = A - 1 = 15 back to the database;
4. Clerk B also sells a ticket and writes the new count A = A - 1 = 15 back to the database.
The result: two tickets have been sold, but the remaining-ticket count has decreased by only 1.
This situation is called database inconsistency, and it is caused by the concurrent operations of ticket clerks A and B. Under concurrent operation, the interleaving of the two transactions' operation sequences is random. With the schedule above, transaction A's update is lost, because in step 4 transaction B writes back its own modification and overwrites the value that transaction A had written.
Database inconsistencies caused by concurrent operations fall into four categories: lost (or overwritten) updates, dirty reads, non-repeatable reads, and phantom reads. The example above exhibits just one of these concurrency problems.
(1) Lost update
When two or more transactions select the same data and then update it based on the value originally selected, an update is lost: each transaction is unaware of the others, and the last update overwrites the updates made by the other transactions, losing their changes. The air ticket example above is exactly this problem. Transactions 1 and 2 read the same data A = 16; transaction 1 computes A - 1 and writes back A = 15; transaction 2 also computes A - 1 and writes back A = 15. Transaction 2's commit overwrites transaction 1's modification, so transaction 1's change to the database is lost.
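The same schedule written as hedged SQL (the flight table and names are hypothetical):

    -- Session 1:
    SELECT tickets FROM flight WHERE flight_no = 'CA123';      -- reads 16
    -- Session 2:
    SELECT tickets FROM flight WHERE flight_no = 'CA123';      -- also reads 16
    -- Session 1:
    UPDATE flight SET tickets = 15 WHERE flight_no = 'CA123';
    COMMIT;
    -- Session 2:
    UPDATE flight SET tickets = 15 WHERE flight_no = 'CA123';
    COMMIT;            -- overwrites session 1's update: one sale is lost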
(2) Dirty read
A dirty read occurs when one transaction reads data written by another, uncommitted, parallel transaction. When the second transaction selects a row that is being updated, it reads data that has not been confirmed and may still be changed by the transaction updating that row. In other words: transaction 1 modifies some data and writes it back to disk; transaction 2 reads the same data; then transaction 1 is cancelled for some reason and the original value is restored. The data transaction 2 read is now inconsistent with the database. It is incorrect data, hence a "dirty" read.
For example, transaction 1 changes the value of C from 100 to 200; transaction 2 reads C as 200; then transaction 1 is cancelled for some reason, its modification is voided, and C reverts to its original value of 100. Transaction 2 has read incorrect, "dirty" data.
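The same scenario as hedged SQL (table and names hypothetical; transaction 2 must run at READ UNCOMMITTED for the dirty value to be visible):

    -- Transaction 1:
    UPDATE t SET c = 200 WHERE id = 1;   -- modified, not yet committed
    -- Transaction 2 (READ UNCOMMITTED):
    SELECT c FROM t WHERE id = 1;        -- sees 200: a dirty value
    -- Transaction 1:
    ROLLBACK;                            -- c reverts to 100; the 200 never
                                         -- logically existed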
(3) Non-repeatable read
A transaction re-reads data it read earlier and finds that the data has been modified by another, committed transaction. That is, after transaction 1 reads some data, transaction 2 modifies it; when transaction 1 reads the same data again, it gets a value different from the first read.
For example, transaction 1 reads B = 100 for a calculation; transaction 2 reads the same data B, modifies it, and writes B = 200 back to the database; when transaction 1 re-reads B, the value is already 200, inconsistent with the first read.
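As hedged SQL (names hypothetical):

    -- Transaction 1:
    SELECT b FROM t WHERE id = 2;        -- reads 100
    -- Transaction 2:
    UPDATE t SET b = 200 WHERE id = 2;
    COMMIT;
    -- Transaction 1:
    SELECT b FROM t WHERE id = 2;        -- now reads 200: not repeatable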
(4) Phantom read
A phantom read occurs when another transaction changes a query's result set before the querying transaction finishes. That is, transaction 1 reads a set of records from the database according to some condition and has not yet completed; transaction 2 deletes some of those records, so that when transaction 1 reads again under the same condition, some records have mysteriously disappeared. Or transaction 2 inserts new records, so that when transaction 1 reads again under the same condition, extra records appear.
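As hedged SQL (names hypothetical):

    -- Transaction 1:
    SELECT COUNT(*) FROM orders WHERE amount > 100;    -- returns n
    -- Transaction 2:
    INSERT INTO orders(amount) VALUES (150);
    COMMIT;
    -- Transaction 1:
    SELECT COUNT(*) FROM orders WHERE amount > 100;    -- returns n + 1:
                                                       -- a phantom row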
The root cause of all four kinds of inconsistency is that concurrent operations break the isolation of transactions. Concurrency control schedules concurrent operations correctly, so that the execution of one user's transaction is not affected by other transactions and data inconsistency is avoided.
  3. Listing concurrent scenarios
SQL statements can be used to enumerate the various concurrency situations, including those that can cause data inconsistency and those that do not affect consistency. In the listing, a denotes a single data item; b and c each denote two or more data items meeting some criterion; ^ denotes negation; 1 denotes the first transaction and 2 the second.

(2) Scheduling of concurrent operations
The computer system schedules the parallel operations of concurrent transactions randomly, and different schedules may produce different results. Which schedules are correct, and which are not?
If a transaction runs without any other transaction executing at the same time, that is, without interference from other transactions, its result can be considered normal, or expected. Therefore a scheduling policy that runs all transactions serially is a correct one. Although executing transactions serially in different orders may produce different results, all such results are correct, because none leaves the database in an inconsistent state. This yields the following conclusion: the parallel execution of several transactions is correct if and only if its result is the same as that of executing them serially in some order. Such a parallel schedule is called serializable. Serializability is the sole criterion for the correctness of concurrent transactions.
For example, there are two transactions that contain the following operations:
Transaction 1: read B; A = B + 1; write back A;
Transaction 2: read A; B = A + 1; write back B;
Assume the initial value of A is 10 and that of B is 2, and consider several schedules of these two transactions. Two are serial: running transaction 1 first and then transaction 2 gives A = 3, B = 4; running transaction 2 first and then transaction 1 gives A = 12, B = 11. Although their results differ, both are correct schedules. An interleaved schedule whose result differs from both serial results is incorrect; an interleaved schedule whose result equals one of the serial results (say, that of running transaction 1 first) is correct, that is, serializable.
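A worked sketch of one incorrect interleaving, under the stated initial values (A = 10, B = 2):

    T1: read B                (sees B = 2)
    T2: read A                (sees A = 10)
    T1: A = B + 1; write A    -> A = 3
    T2: B = A + 1; write B    -> B = 11   (uses the A it read earlier, 10)

    Final state: A = 3, B = 11, which matches neither serial result
    (A = 3, B = 4 or A = 12, B = 11), so this schedule is not serializable.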
To guarantee the correctness of parallel operations, the DBMS's concurrency control mechanism must provide means to ensure that schedules are serializable.
In theory, a scheduling policy that forbids all other transactions from running while one transaction executes is necessarily serializable, and it is also the simplest policy. In practice, however, it is unworkable, because it prevents users from truly sharing database resources.
Currently, most DBMSs use locking (the pessimistic approach; DM uses it, and so does SQL Server) to guarantee the correctness of schedules, that is, to guarantee the serializability of concurrent schedules. Other approaches exist as well, such as timestamp ordering and the optimistic method.
● Pessimistic Concurrency Control
A locking system prevents users from modifying data in ways that would affect other users. If a user's operation causes a lock to be applied, other users cannot perform operations that conflict with that lock until the owner releases it. This approach is mainly used in environments with heavy data contention, where the cost of protecting data with locks is lower than the cost of rolling back transactions when conflicts occur; hence it is called pessimistic concurrency control.
● Optimistic Concurrency Control
In optimistic concurrency control, data is not locked when it is read. When an update is executed, the system checks whether another user changed the data after it was read; if so, an error is raised. Typically, the user receiving the error rolls back the transaction and starts over. This approach is mainly used in environments with light data contention, where the cost of occasionally rolling back a transaction is lower than the cost of locking data on every read; hence it is called optimistic concurrency control.
● Timestamp-based Concurrency Control
The essential difference between timestamping and locking is this: locking synchronizes the concurrent (interleaved) execution of a group of transactions so that it is equivalent to some serial execution of them; timestamp ordering also synchronizes interleaved execution, but makes it equivalent to one particular serial execution, namely the one determined by the order of the timestamps. When a conflict occurs, a transaction is aborted and restarted, and on restart it is assigned a new timestamp.
In the level-1 blocking protocol, a transaction must acquire an X lock on data before modifying it and hold the lock until the transaction ends; this prevents lost updates. The level-2 blocking protocol adds that a transaction must acquire an S lock on data before reading it, and may release the S lock as soon as the read completes. This additionally prevents dirty reads; but because the S lock can be released right after reading, the level-2 protocol cannot guarantee repeatable reads.
Level-3 blocking protocol
The level-3 blocking protocol is the level-1 blocking protocol plus the requirement that transaction T acquire an S lock on data before reading it and hold that lock until the transaction ends.
Besides preventing lost (or overwritten) updates and dirty reads, the level-3 blocking protocol further prevents non-repeatable reads and phantom reads. The following example shows how the level-3 protocol solves the non-repeatable read problem.
Before reading A and B, transaction 1 acquires S locks on both, so other transactions can obtain only S locks on A and B, not X locks: they may read A and B but cannot modify them. Transaction 2's request for an X lock on B (in order to modify it) is therefore rejected, and it must wait for transaction 1 to release its lock on B. When transaction 1 checks by reading A and B again, B is still 100 and the sum is still 150: the read is repeatable.
The main differences between the three protocol levels are which operations must request locks and when the locks are released (that is, how long locks are held). The three-level blocking protocol can be summarized in the following table.
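A hedged reconstruction of that summary, based on the standard textbook formulation of the three-level protocol:

    Protocol   X lock on written data       S lock on read data          Prevents
    Level 1    held until transaction end   not required                 lost update
    Level 2    held until transaction end   released after each read     + dirty read
    Level 3    held until transaction end   held until transaction end   + non-repeatable read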
(6) A blocking protocol that guarantees serializable schedules: the two-phase locking protocol
Serializability is the sole criterion for the correctness of concurrent schedules. The two-phase locking (2PL) protocol is a blocking protocol that guarantees the serializability of concurrent schedules.
The two-phase locking protocol stipulates that:
① before performing a read or write operation on any data item, a transaction must first obtain a lock on that data; and ② after releasing any lock, a transaction may not obtain any further locks.
The name "two-phase" means that each transaction is divided into two phases: in the first, the growing phase, it acquires locks; in the second, the shrinking phase, it releases them.
For example, the blocking sequence of transaction 1 is:
Slock A ... Slock B ... Xlock C ... Unlock B ... Unlock A ... Unlock C;
The blocking sequence of transaction 2 is:
Slock A ... Unlock A ... Slock B ... Xlock C ... Unlock C ... Unlock B;
Transaction 1 complies with the two-phase locking protocol; transaction 2 does not, because it acquires new locks (on B and C) after releasing its lock on A.
It can be proved that if all concurrently executing transactions comply with the two-phase locking protocol, then every possible schedule of those transactions is serializable. Hence the conclusion: the concurrent execution of transactions that all follow the two-phase locking protocol is necessarily correct.
Note that compliance with the two-phase locking protocol is a sufficient condition for serializable scheduling, not a necessary one: in a serializable schedule, not every transaction must follow the protocol. For example, two schedules may both be serializable while only one of them follows the two-phase locking protocol.
(7) Deadlock
Locking can give rise to deadlock. The deadlock problem has been studied thoroughly for operating systems and general parallel processing, but database systems have their own characteristics, and methods for resolving deadlock in operating systems are not necessarily suitable for database systems.
Currently there are two main approaches to the deadlock problem in databases: one takes measures to prevent deadlocks from occurring; the other allows deadlocks but uses some method to diagnose them periodically and, when found, break them.
1. Deadlock Prevention
In a database system, deadlock arises when two or more transactions have each locked some data objects and then each requests locks on data objects already locked by the others, producing a circular wait. To prevent deadlock, the conditions that create it must be broken. There are two common prevention methods.
● One blocking method
The one-time blocking method requires every transaction to lock all the data it will use in one step; otherwise it may not proceed. In the example above, if transaction T1 locks data objects A and B in one step, T1 can execute while T2 waits; after T1 finishes and releases its locks on A and B, T2 continues. Deadlock cannot occur.
Although one-time blocking effectively prevents deadlock, it has problems. First, locking in advance all data that will ever be used inevitably widens the scope of locking and so reduces the system's concurrency. Second, data in a database changes constantly; data that did not need locking at the start may need it during execution. It is therefore hard to determine precisely which data objects each transaction should lock; one can only widen the scope further and lock everything the transaction might touch, reducing concurrency yet again.
● Sequential Blocking
The sequential blocking method fixes a locking order on data objects in advance, and all transactions acquire locks in that order. In the example above, suppose the order is A then B: both T1 and T2 must lock A first and then B. If T2 requests the lock on A while T1 holds it, T2 waits; after T1 releases its locks on A and B, T2 proceeds. Deadlock cannot occur. A hedged sketch follows the next paragraph.
Sequential blocking also prevents deadlock effectively, but it too has problems. First, a database system contains a huge number of lockable data objects, which change constantly as data is inserted and deleted; maintaining a locking order over so many changing resources is very difficult and costly. Second, a transaction's lock requests are determined dynamically as it executes, so it is hard to know in advance which objects each transaction will lock, and thus hard to lock them in the prescribed order. For example, suppose the locking order of data objects is A, B, C, D, E, and transaction T3 initially needs to lock B, C, and E; after locking B and C it discovers it must also lock A, which violates the locking order.
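A hedged sketch of sequential blocking using manual table locks (LOCK TABLE ... IN EXCLUSIVE MODE is generic syntax for illustration only; DM's own manual lock statements, discussed later, may differ; table names are hypothetical). Both transactions agree to lock a before b:

    -- T1:
    LOCK TABLE a IN EXCLUSIVE MODE;
    -- T2 (concurrently):
    LOCK TABLE a IN EXCLUSIVE MODE;   -- blocks: T1 already holds a
    -- T1:
    LOCK TABLE b IN EXCLUSIVE MODE;   -- succeeds: nobody holds b while waiting on a
    UPDATE a SET x = 1;
    UPDATE b SET y = 2;
    COMMIT;                           -- T1's locks released; T2 acquires a, then b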
Clearly, the deadlock-prevention strategies widely used in operating systems are not well suited to databases, so in practice DBMSs more often resolve the deadlock problem by diagnosing deadlocks and breaking them.
2. Deadlock diagnosis and removal
Databases diagnose deadlocks much as operating systems do: a wait-for graph dynamically reflects the waiting relationships among all transactions. The concurrency control subsystem periodically inspects the wait-for graph (for example, once a minute); if the graph contains a cycle, a deadlock exists in the system. For more on deadlock diagnosis, see texts on operating systems.
Once the DBMS's concurrency control subsystem detects a deadlock, it must break it. The usual method is to choose the transaction whose rollback costs least, roll it back, and release all the locks it holds so that the other transactions can continue running.
II. Concurrency control in DM
(1) Transaction isolation levels
A transaction's isolation level describes how exposed it is to the actions of other concurrent transactions. SQL-92 defines four isolation levels; by choosing a lower one, you accept more exposure to other transactions' uncommitted changes in exchange for higher concurrency. The isolation level is thus the degree to which a transaction must be isolated from other transactions.
The four SQL-92 isolation levels are listed below; DM supports all of them:
(1) READ UNCOMMITTED: the lowest isolation level. A transaction may see other transactions' uncommitted data; the only guarantee is that physically corrupt data is not read.
(2) READ COMMITTED: DM's default level; guarantees that dirty data is not read.
(3) REPEATABLE READ: guarantees repeatable reads, but phantom data may still be read.
(4) SERIALIZABLE: the highest isolation level; transactions are completely isolated from one another.
DM lets you change the isolation level and the read/write attribute of a transaction before it starts, and the options you set remain in effect for that connection until they are explicitly changed. Lowering the isolation level exposes the programmer to certain integrity risks, but buys greater concurrent access to the data. Each level provides greater isolation than the one before it, achieved by holding more restrictive locks for longer periods. DM also provides a statement to set a transaction read-only; after it is used, the transaction may only query and cannot update the database.
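A hedged sketch of the standard SQL-92 statements involved (DM's exact syntax should be confirmed in its manual):

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- or READ UNCOMMITTED,
                                                      -- REPEATABLE READ, SERIALIZABLE
    SET TRANSACTION READ ONLY;                        -- queries only; updates rejected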
Note that a transaction's isolation level does not affect that transaction's view of its own modifications: a transaction can always see its own changes. The isolation level should be chosen according to actual needs: a lower level increases concurrency at the cost of data correctness, while a higher level ensures correctness but can hurt concurrency. The isolation level required by the application determines the locking behavior DM uses.
The following table shows which of these phenomena each of the four isolation levels permits.
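A hedged reconstruction of that table from the standard SQL-92 definitions (Y = the phenomenon can occur, N = it cannot):

    Isolation level    Dirty read   Non-repeatable read   Phantom read
    READ UNCOMMITTED   Y            Y                     Y
    READ COMMITTED     N            Y                     Y
    REPEATABLE READ    N            N                     Y
    SERIALIZABLE       N            N                     N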
Note: lost (or overwritten) updates are prevented at all of the standard SQL isolation levels.
  (2) Concurrency handling
  1. Data Lock Mechanism
DM uses a data locking mechanism to solve concurrency problems. It allows multiple transactions to be running at any given time, while ensuring that every transaction runs in a completely isolated environment.
The objects DM locks are tables and metadata. Locking is both automatic and manual, that is, there are implicit locks and explicit locks. The locks taken by implicit locking vary with the transaction isolation level. In addition, DM offers users four manual locking statements to suit user-designed applications.
In general, DM's implicit locking is sufficient to guarantee data consistency, but users can change how a table is locked as needed. DM provides four types of table lock: intention share (IS), share (S), intention exclusive (IX), and exclusive (X). For example, at the READ COMMITTED isolation level the system's default table locks are IS or IX; under these two table locks, each tuple must also be locked before it is accessed. To improve efficiency, you can manually place an X lock on the table, after which tuples need not be locked for access.
The locking mechanism has two aims:
(1) Consistency: data a user is viewing must not be fundamentally changed by others while it is being viewed.
(2) Integrity: the database's underlying structures must accurately reflect all changes to them, applied in the correct order.
A "Lock" can be considered to be something that a process needs to prevent other processes from doing something, and "release" the lock when the process no longer cares about it, A lock is usually applied to a "resource" (some objects, such as tables.
DM's internal locking is fully automatic. A share lock is acquired when a process wants to view an object while preventing others from modifying it. An update lock is acquired when a process intends to modify an object and wants to prevent other processes from modifying it. An exclusive lock is acquired when a process modifies an object and must prevent any other process from modifying it or locking it in share mode.
2. Lock types
There are three types of locks in DM: Table locks, row locks, and key range locks.
● Table lock
Table locks lock table objects. DM locks a table object when the table is queried or updated. DM also provides manual table-lock statements, so users can change a table's locking as needed. Table locks come in four modes: intention share (IS), intention exclusive (IX), share (S), and exclusive (X). The compatibility matrix for table locks is shown below.
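A hedged reconstruction of the matrix from the standard multi-granularity locking scheme (Y = compatible, N = conflicting; rows are the lock held, columns the lock requested):

             IS    IX    S     X
    IS       Y     Y     Y     N
    IX       Y     Y     N     N
    S        Y     N     Y     N
    X        N     N     N     N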
● Row lock
Row locks lock tuples: before accessing or updating a tuple, DM places a row lock on it. No manual row-lock statement is provided. Row locks come in two modes, share (S) and exclusive (X); the compatibility matrix is shown below.
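Again reconstructed from the standard scheme (Y = compatible, N = conflicting):

             S     X
    S        Y     N
    X        N     N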
● Key-range lock
Key-range locks are used by serializable transactions to solve the phantom read problem. A key-range lock covers individual records and the ranges between them, so it can prevent phantom insertions into, or deletions from, the set of records a transaction has accessed. Key-range locks are used only by transactions running at the serializable isolation level.
Serializability requires that if any query is executed again at some later point in the same transaction, it must return the same row set as it did when executed earlier in that transaction. If rows the query would fetch do not yet exist, other transactions must be prevented from inserting them until the transaction that would read them completes; if another transaction were allowed to insert such a row, it would appear as a phantom.
If another transaction tries to insert a row that resides on a data page locked by the first transaction, page-level locking prevents the phantom row from being added and so maintains serializability. But if the row would be added to a data page the first transaction has not locked, some other locking mechanism is needed to prevent the insertion.
A key-range lock works by covering the index rows and the ranges between them (rather than locking entire base-table pages). Since inserting, updating, or deleting any row within the range requires modifying the index, and the key-range lock covers the index entries, the second transaction is blocked until the first completes.
Key-range locks are taken automatically by the system when both of the following hold: (1) the transaction's isolation level is serializable; and (2) the query result is obtained through an index.
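A hedged sketch (names hypothetical; assumes an index exists on order_no):

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT * FROM orders WHERE order_no BETWEEN 100 AND 200;
    -- The indexed range 100..200 is covered by key-range locks, so another
    -- transaction's INSERT of order_no = 150 blocks until this one commits:
    -- re-running the SELECT cannot see a phantom row.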
Once a lock has been acquired, it remains in effect until the current transaction ends, at which point the system releases it automatically.
  3. Comparison of lock types
4. SQL statement lock analysis
DM reduces DDL, GRANT, and other non-DML statements internally to insert, delete, and update operations. The following table lists the locks DM takes for DML statements and queries.
Note: S* denotes an instantaneous lock, released as soon as the statement completes; Range denotes a key-range lock.
The table above describes the system's behavior in ordinary conditions. When the system detects that a lock escalation may be needed, it escalates the lock: in general, an IS lock is upgraded to an S lock, an IX lock to an X lock, and rows are then no longer locked individually.
5. Custom locking to improve system efficiency
DM also provides two functions, SET_TABLE_OPTION([db.][sch.]tablename, option, value) and SET_INDEX_OPTION([db.]indexname, option, value) (for detailed syntax, see Chapter 8 of the DM_SQL language user manual), that let users customize lock types to increase concurrency and improve system efficiency. These functions are intended for expert users who clearly understand when particular lock types are appropriate.
The SET_TABLE_OPTION() function disables page-level locks, row-level locks, or both on a specified table; the setting applies to all indexes on that table. The SET_INDEX_OPTION() function disables page-level locks, row-level locks, or both on a particular index.
For example, if updates only modify a fixed-length field in an index, the updates can never split or merge the B-tree, so the index's page-level locks can be disabled. Likewise, if all users only perform inserts and never operate on the same tuples, row-level locks can be disabled. And if a table is only queried, never modified, both its page-level and row-level locks can be disabled, permitting the highest concurrency.
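A hedged illustration of the call shape only (the option and value arguments below are placeholders, not values verified against the DM_SQL manual cited above):

    -- Hypothetical: disable row-level locking on an insert-only table
    SET_TABLE_OPTION('mydb.mysch.log_table', <row-lock option>, <off value>);
    -- Hypothetical: disable page-level locking on one index
    SET_INDEX_OPTION('mydb.idx_log_fixed', <page-lock option>, <off value>);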
  6. Deadlock handling
There are three ways to attack the deadlock problem: deadlock prevention, deadlock detection, and deadlock avoidance. Deadlock prevention requires user processes to declare the resources they need in advance, or to request resources according to a strict protocol. Deadlock detection permits deadlocks to occur in principle, checks for them at appropriate times, and removes any it finds; compared with prevention it is too permissive, allowing deadlocks to occur frequently. Deadlock avoidance performs a deadlock check immediately whenever a resource cannot be obtained at once, before letting the transaction wait. It is neither as conservative as prevention nor as liberal as detection, and because checks happen promptly, transactions already allowed to wait are known by induction to be deadlock-free, so the detection algorithm is relatively simple.
The DM4 system uses this avoidance approach: a deadlock check is performed whenever a resource occupied by another transaction cannot be obtained immediately. If no deadlock would result, the transaction enters the wait queue; otherwise DM4 treats the situation as a runtime error and rolls back the current statement. With this mechanism, from the user's point of view, deadlocks do not occur in DM4.
  7. What difference does an index make?
Whether or not an index exists, DM's locking mechanism determines the actual scope of locking. The role of an index is to reduce the amount of irrelevant data scanned during a query, and at any isolation level the data that is scanned gets locked. Using an index therefore reduces the number of locks taken and greatly reduces the likelihood of conflicts.
