The four basic properties of a transaction
A transaction is a user-defined sequence of database operations that is executed either in its entirety or not at all; it is an indivisible unit of work. In a relational database, for example, a transaction can be a single SQL statement, a group of SQL statements, or an entire program. Transactions have the ACID properties: atomicity, consistency, isolation, and durability.
Transactions and programs are two different concepts. In general, a program contains multiple transactions.
The start and end of a transaction can be controlled explicitly by the user. If the user does not define transactions explicitly, the DBMS divides them automatically by default. In SQL, three statements are used to define a transaction:
BEGIN TRANSACTION
COMMIT
ROLLBACK
An explicit transaction is delimited by BEGIN TRANSACTION and an end-of-transaction statement; the UPDATE and DELETE statements inside it are either all executed or not executed at all. For example:
BEGIN TRANSACTION T1
UPDATE student
SET Name = 'Tank'
WHERE id = 2006010
DELETE FROM student
WHERE id = 2006011
COMMIT TRANSACTION T1
Simply put, a transaction is a mechanism that maintains the integrity of the database.
In practice, ordinary SQL statements are wrapped between BEGIN TRAN ... COMMIT TRAN (or the full form BEGIN TRANSACTION ... COMMIT TRANSACTION); if necessary, ROLLBACK TRAN can be used to roll back the transaction, which is the undo operation.
With the transaction mechanism, the operations on the database are either all executed or none of them are, which preserves the consistency of the database. The SQL statements that typically need transactions are UPDATE and DELETE operations and the like.
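As an illustration of the ROLLBACK TRAN undo mentioned above, here is a minimal T-SQL sketch that reuses the student table from the earlier example; the values are hypothetical:

-- Sketch only: ROLLBACK TRAN undoes the uncommitted change
BEGIN TRAN
UPDATE student SET Name = 'Tank' WHERE id = 2006010
-- the change is visible inside this transaction, but we decide not to keep it
ROLLBACK TRAN
-- nothing from this transaction is persisted; the row keeps its original Name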
We are generally familiar with the four isolation levels of transactions; from loosest to strictest they are:
- READ UNCOMMITTED: dirty reads, phantom reads, and non-repeatable reads can all occur at this level.
- READ COMMITTED: phantom reads and non-repeatable reads can occur at this level.
- REPEATABLE READ: phantom reads can occur at this level.
- SERIALIZABLE: no phantom reads.

So what do dirty read, phantom read, and non-repeatable read actually mean? (A two-session sketch of a non-repeatable read follows this list.)

- Dirty read: another transaction (executing a single SELECT statement also counts as a transaction) can read data that a transaction has updated (including inserts and deletes) but not yet committed. Applications should avoid dirty reads because the data read is unreliable (I think "phantom row" would be a more vivid name for this, but that is not what it is called). Databases are generally not set to this level, but it is sometimes useful: the benefit of a dirty read is that the table or record is not locked when it is read, so the query can go around the write queue and avoid waiting. If you need to SELECT all the data from a table that is being updated extremely frequently, you can specify the isolation level explicitly: SELECT .... at isolation 0.
- Non-repeatable read: this compares the results of two identical SELECT statements executed inside the same transaction. If the results are the same both times, the read is repeatable; if they can differ, the read is non-repeatable, which is exactly what the name says. Non-repeatable-read mode first assumes there are no dirty reads, i.e. the data being read has been committed. Within a transaction, a read does not take an exclusive lock, so by the time the identical SELECT runs again the matching data set may already have been modified by other transactions; can the same content still be read then? To achieve repeatable reads, the database has to do more work, for example placing shared locks on the rows that were read and holding them until the end of the transaction to forbid other transactions from modifying them. This can degrade database performance. The SERIALIZABLE isolation level is even stricter than REPEATABLE READ. Most databases set the isolation level only to READ COMMITTED, which is a compromise between reliability and performance. The above only mentions locking the matching rows to stop other transactions from modifying them; it says nothing about what happens if another transaction adds new rows that match the criteria. Some databases define two additional levels for this scenario: read stability and cursor stability. The former does not restrict new rows that match the criteria, while the latter blocks the addition of such rows.
- Phantom read: the same SELECT statement executed twice returns different results, with the second read containing an extra row of data; the two executions are not required to be in the same transaction. In a situation like that, a phantom read may be exactly what we need. But sometimes it is not: if you have opened a cursor, you may not want new records to be added to the data set the cursor covers while you are working with it. The cursor stability isolation level can prevent phantom reads.
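To make the difference concrete, here is a sketch only (SQL Server syntax, reusing the student table from the earlier example); the behaviour described assumes classic lock-based isolation:

-- Session A: compare the two SELECT results
SET TRANSACTION ISOLATION LEVEL READ COMMITTED   -- change to REPEATABLE READ to hold the read lock until commit
BEGIN TRAN
SELECT Name FROM student WHERE id = 2006010      -- first read
-- Session B runs in the meantime:
--   UPDATE student SET Name = 'Tank' WHERE id = 2006010
SELECT Name FROM student WHERE id = 2006010      -- under READ COMMITTED this may differ (non-repeatable read);
                                                 -- under REPEATABLE READ session B is blocked until A ends
COMMIT TRAN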
1. Atomicity
The atomicity property identifies whether a transaction completes in full: either all of a transaction's updates are applied to the system, or, if for some reason the transaction cannot complete all of its work, the system is returned to the state it was in before the transaction began.
Consider the bank transfer example again. If an error occurs during the transfer, the entire transaction is rolled back; the transaction is written to disk and its changes are made permanent only if every part of the transaction executes successfully. To provide the ability to roll back, or undo, uncommitted changes, many data sources use a logging mechanism. SQL Server, for example, uses a write-ahead transaction log: changes are written to the transaction log before they are applied to (or committed to) the actual data pages. Other data sources are not relational database management systems (RDBMS) and manage uncommitted transactions quite differently; as long as the data source can undo all uncommitted changes when a transaction is rolled back, this technique can be used to manage transactions.
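A minimal sketch of this behaviour, assuming a hypothetical account table and T-SQL's TRY/CATCH; it illustrates the principle, not the internal logging mechanism itself:

BEGIN TRY
    BEGIN TRAN
        UPDATE account SET balance = balance - 100 WHERE id = 1   -- debit one account
        UPDATE account SET balance = balance + 100 WHERE id = 2   -- credit the other
    COMMIT TRAN      -- both changes become permanent together
END TRY
BEGIN CATCH
    ROLLBACK TRAN    -- any error undoes both changes, returning to the pre-transaction state
END CATCH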
2. Consistency
Transactions ensure system integrity by guaranteeing that every transaction leaves the system in a valid state. If a transaction completes successfully, all of its changes are applied correctly and the system is in a valid state. If an error occurs in the transaction, all of its changes are rolled back automatically and the system returns to its original state. Because the system is in a consistent state when the transaction begins, it is still in a consistent state afterwards. Looking back at the bank transfer example, the accounts are in a valid state before the money is moved between them. If the transaction completes successfully and commits, the accounts are in a new, valid state. If the transaction fails, the accounts return to their original valid state after it terminates.
Remember that transactions are not responsible for enforcing data integrity; they are only responsible for ensuring that the data returns to a consistent state after the transaction commits or terminates. The task of understanding the data integrity rules and writing code to enforce them usually falls on the developer, who designs them according to the business requirements. When many users use and modify the same data at the same time, transactions must maintain the integrity and consistency of that data. This leads us to the next property in ACID: isolation.
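For example, the integrity rule itself can be written as a constraint, while the transaction only guarantees a valid end state. The sketch below again uses the hypothetical account table, plus SQL Server's SET XACT_ABORT so that a constraint violation rolls back the whole transaction; it is one possible illustration, not a required pattern:

-- The business rule lives in the schema, written by the developer
ALTER TABLE account ADD CONSTRAINT chk_balance CHECK (balance >= 0)

SET XACT_ABORT ON    -- any run-time error aborts and rolls back the whole transaction
BEGIN TRAN
UPDATE account SET balance = balance - 1000 WHERE id = 1   -- would drive the balance negative
UPDATE account SET balance = balance + 1000 WHERE id = 2
COMMIT TRAN
-- if the first UPDATE violates chk_balance, both updates are rolled back and the
-- accounts remain in their original, consistent state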
3. Isolation
Transactions execute in an isolated state, so that each one appears to be the only operation the system is performing at a given time. If two transactions run at the same time and perform the same function, the isolation property ensures that each of them believes it is the only one using the system. This property is sometimes called serializability: to prevent transactional operations from interfering with one another, requests must be serialized, or sequenced, so that only one request operates on the same data at a time. Executing transactions in isolation is important because, while a transaction is in progress, the state of the system may be inconsistent; isolation ensures that the system is back in a consistent state before the transaction ends.

Within a single transaction the state of the system may change, and if transactions did not run in isolation, one could read data from a system in an inconsistent state. Transaction isolation prevents this from happening. In the banking example, it means that other processes and transactions do not see any of our transaction's changes until our transaction completes, which matters in case the transaction is terminated. If another process made decisions based on the account balance and could see our changes before our transaction completed, its decision might be based on incorrect data, because our transaction may still terminate. This is why a transaction's changes are not visible to the rest of the system until the transaction is complete. Isolation not only guarantees that multiple transactions cannot modify the same data at the same time, it also ensures that changes made by one transaction are not visible to another transaction until they are committed or terminated, so that concurrent transactions do not affect one another. In practice this means that all data a transaction needs to modify or read is locked within the transaction and released only when the transaction completes. Most databases, such as SQL Server and other RDBMSs, implement isolation with locks: each data item or data set involved in a transaction is locked to prevent concurrent access.
4. Durability
Durability means that once a transaction executes successfully, any changes it made to the system are permanent. There must be checkpoints of some kind to prevent information loss if the system fails. Even if the hardware itself fails, the state of the system can be rebuilt from the log of completed transactions. The concept of durability lets the developer assume that, no matter what happens afterwards, a change made to the system is a permanent part of it. In the banking example, the transfer of funds is permanent and stays in the system. This may sound simple, but it depends on writing the data to disk, and in particular the data must not be considered written until the transaction has fully completed and been committed. All of these transaction properties, however they are related internally, together guarantee that the data involved in a transaction is managed correctly from the start of the transaction to its completion, whether or not the transaction succeeds; when the transaction system creates a transaction, it guarantees that the transaction has these properties. Component developers can therefore assume that the properties of a transaction are not something they have to manage themselves.
Second, why transactions need concurrency control
If transactions were run without concurrency control, what anomalies could occur in the database?
1. Lost updates (Lost Update)
Two transactions update the same row of data at the same time, but the second transaction fails and exits midway, causing both modifications to the data to be lost.
2. Dirty reads (Dirty Reads)
A transaction starts reading a row of data while another transaction has already updated that data but has not yet committed. This is quite dangerous, because the other transaction's operations may all be rolled back.
3. Non-repeatable reads (Non-repeatable Reads)
A transaction reads the same row of data twice but gets different results. The same query is issued more than once within the same transaction, and each time it returns a different result set because of modifications or deletions committed by other transactions; this is a non-repeatable read.
4. Second-class lost updates (Second lost updates problem)
A special case of non-repeatable reads. Two concurrent transactions read the same row of data at the same time; one of them modifies it and commits, and then the other also modifies it and commits. This causes the first write operation to be lost (see the sketch after this list).
5. Phantom reads (Phantom Reads)
A transaction queries the data twice during its execution, and the result of the second query includes data that did not appear in the first query (the two queries do not have to use the same SQL statement). This happens because another transaction inserted data between the two queries.
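Here is the two-session sketch of the second-class lost update referred to above; the account table and values are hypothetical, and both sessions run at READ COMMITTED:

-- Session A:
BEGIN TRAN
SELECT balance FROM account WHERE id = 1          -- A reads 100
-- Session B (runs in the meantime):
--   BEGIN TRAN
--   SELECT balance FROM account WHERE id = 1     -- B also reads 100
--   UPDATE account SET balance = 70 WHERE id = 1 -- B withdraws 30
--   COMMIT TRAN
-- Session A continues, computing from its stale read of 100:
UPDATE account SET balance = 50 WHERE id = 1      -- A withdraws 50, overwriting B's committed update
COMMIT TRAN                                       -- final balance is 50; B's withdrawal is lost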
Third, the isolation levels of the database
To balance concurrency efficiency and anomaly control, the SQL standard defines four transaction isolation levels (Oracle and SQL Server implement the standard levels somewhat differently).
1. Read uncommitted (READ UNCOMMITTED)
The literal meaning is "read uncommitted": even if an UPDATE statement has not been committed, other transactions can read the change. This is not safe. It allows a task to read data changes in the database that have not been committed, which is also known as a dirty read.
2. Read committed (READ COMMITTED)
The literal meaning is "read committed": dirty reads are prevented, and a change can be read by other transactions only after the statement has been committed with COMMIT. Only committed data can be read. Most databases, such as Oracle, use this level by default.
3. Repeatable read (REPEATABLE READ)
The literal meaning is "repeatable read": within the same transaction, executing the same query repeatedly returns the same result. Queries inside a transaction see the data as it was when the transaction began. This is InnoDB's default level. In the SQL standard, this isolation level eliminates non-repeatable reads, but phantom reads can still occur.
4. Serial read (SERIALIZABLE)
The literal meaning is "serialized": a transaction does not allow other transactions to execute concurrently. Reads are fully serialized; every read must acquire a table-level shared lock, and reads and writes block each other.
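As a rough illustration of the difference between the last two levels (hypothetical test table; the behaviour shown is the classic lock-based one used by SQL Server, and MVCC engines such as InnoDB behave somewhat differently):

-- Session A
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ   -- change to SERIALIZABLE to block the phantom
BEGIN TRAN
SELECT COUNT(*) FROM test                         -- first count
-- Session B runs in the meantime:
--   INSERT INTO test VALUES ('yyy')
SELECT COUNT(*) FROM test                         -- under REPEATABLE READ the new row can appear (phantom read);
                                                  -- under SERIALIZABLE session B's insert waits until A ends
COMMIT TRAN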
Fourth, control of concurrency anomalies at each isolation level
The table below shows whether each anomaly can occur at each isolation level (Y = the anomaly can occur, N = it is prevented).
| Isolation level | LU lost update | DR dirty read | NRR non-repeatable read | SLU second-class lost update | PR phantom read |
|---|---|---|---|---|---|
| Read uncommitted (RU) | Y | Y | Y | Y | Y |
| Read committed (RC) | N | N | Y | Y | Y |
| Repeatable read (RR) | N | N | N | N | Y |
| Serial read (S) | N | N | N | N | N |
By the way, here is a small example.
MS SQL Server:
-- Transaction 1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
INSERT INTO test VALUES ('xxx')
-- Transaction 2
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN
SELECT * FROM test
-- Transaction 3
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRAN
SELECT * FROM test
After running transaction 1 in Query Analyzer, run transactions 2 and 3 separately. The result is that transaction 2 waits, while transaction 3 executes immediately.
ORACLE:
-- Transaction 1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO test VALUES ('xxx');
SELECT * FROM test;
-- Transaction 2
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- Oracle's default level
SELECT * FROM test;
After running transaction 1, run transaction 2. The result is that transaction 2 reads only the original data and does not see transaction 1's insert.
Fifth, solutions to concurrency consistency problems
1. Locking
Locking is a very important technique for implementing concurrency control. Before a transaction T operates on a data object such as a table or a record, it first asks the system to lock it. After locking, transaction T has a certain degree of control over the data object, and other transactions cannot update it until transaction T releases its lock. There are two basic types of locks: exclusive locks (X locks for short) and shared locks (S locks for short).
An exclusive lock is also called a write lock. If transaction T puts an X lock on data object A, only T may read and modify A, and no other transaction may place any type of lock on A until T releases its lock. This guarantees that no other transaction can read or modify A before T releases its lock on A.
A shared lock is also called a read lock. If transaction T puts an S lock on data object A, other transactions may only place S locks on A, not X locks, until T releases its S lock. This guarantees that other transactions can read A but cannot modify it before T releases its S lock on A.
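As a sketch of how the two lock types can be requested explicitly in practice (MySQL/InnoDB syntax here; Oracle also supports SELECT ... FOR UPDATE, while SQL Server uses lock hints instead), reusing the student table from earlier:

-- Transaction T takes an exclusive (X) lock on the row: no other transaction can S-lock or X-lock it
BEGIN;
SELECT * FROM student WHERE id = 2006010 FOR UPDATE;
UPDATE student SET Name = 'Tank' WHERE id = 2006010;
COMMIT;   -- the X lock is released here

-- Another transaction takes a shared (S) lock: concurrent readers may also S-lock the row,
-- but nobody can X-lock (modify) it until this transaction ends
BEGIN;
SELECT * FROM student WHERE id = 2006010 LOCK IN SHARE MODE;
COMMIT;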
2. Locking protocols
When using the two basic lock types, X locks and S locks, certain rules must also be laid down for locking data objects: when to request an X lock or an S lock, how long to hold it, when to release it, and so on. These rules are called locking protocols (Locking Protocol). Different rules for how locks are applied give rise to different locking protocols. The three-level locking protocol is described below; the three levels solve, to different degrees, the inconsistency problems of lost modifications, non-repeatable reads, and reading "dirty" data, and provide a certain guarantee that concurrent operations are scheduled correctly. Only the definitions of the three levels are given here, without further discussion.
The level 1 locking protocol: transaction T must acquire an X lock on data R before modifying it, and hold the lock until the end of the transaction. The end of a transaction is either a normal end (COMMIT) or an abnormal end (ROLLBACK). The level 1 protocol prevents lost modifications and guarantees that transaction T is recoverable. Under the level 1 protocol, data that is only read and not modified does not need to be locked, so the protocol cannot guarantee repeatable reads or prevent reading "dirty" data.
The level 2 locking protocol: the level 1 protocol, plus the rule that transaction T must acquire an S lock on data R before reading it and may release the S lock after reading. The level 2 protocol prevents lost modifications and additionally prevents reading "dirty" data.
The level 3 locking protocol: the level 1 protocol, plus the rule that transaction T must lock data R with an S lock before reading it and hold the lock until the end of the transaction. In addition to preventing lost modifications and the reading of "dirty" data, the level 3 protocol also prevents non-repeatable reads.
Sixth, the general procedure for handling concurrency problems:
1. Open a transaction.
2. Request write access, i.e. lock the object (table or record).
3. If that fails, end the transaction and retry later.
4. If it succeeds, i.e. the object is locked, other users are prevented from opening it in the same way.
5. Perform the editing operation.
6. Write the edited result.
7. If the write succeeds, commit the transaction and complete the operation.
8. If the write fails, roll back the transaction and cancel the changes.
9. Either of steps 7 and 8 releases the locked object, returning it to its unlocked state before the operation (a T-SQL sketch of these steps follows the list).
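Here is the T-SQL sketch of these steps, again assuming the student table from earlier; the lock hints and lock timeout are one possible way to realise steps 2 and 3, not the only one:

SET LOCK_TIMEOUT 1000                                       -- step 3: fail fast instead of waiting for the lock
BEGIN TRY
    BEGIN TRAN                                              -- step 1: open the transaction
    SELECT * FROM student WITH (XLOCK, ROWLOCK)             -- step 2: lock the record for writing
        WHERE id = 2006010
    UPDATE student SET Name = 'Tank' WHERE id = 2006010     -- steps 5-6: edit and write the result
    COMMIT TRAN                                             -- step 7: commit; the lock is released
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN                        -- steps 3/8: on failure roll back and retry later
END CATCH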
The relationship between locks and isolation levels
In actual development we rarely manipulate the various locks directly; far more often we use the four isolation levels the database provides: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE. So what is the relationship between isolation levels and locks? Put simply, an isolation level is a packaged, holistic solution built on locks; my understanding is that isolation encapsulates locking.
Going through the isolation levels from strictest to loosest, the lower the level, the more problems it allows, such as dirty reads and lost updates; the higher the level, the more locks must be managed and the less work can proceed in parallel, which hurts performance. Therefore, when designing a system, we only need to choose the isolation level appropriate to the current business requirements. An isolation level is a set of locking schemes designed to balance performance and functionality.
With an exclusive lock, the transaction that set the lock can read and modify the resource and holds it alone; before that transaction commits, no other transaction can acquire a shared or exclusive lock on the same object.