Spring provides flexible and convenient transaction management capabilities, but these are built on the transaction mechanisms of the underlying database itself. To gain real insight into Spring's transaction management and configuration, it is necessary to first understand the basics of database transactions.
What is a database transaction
The saying "one prospers, all prosper; one falls, all fall" captures the idea of a transaction well: a complex task may take many steps, but those steps form a whole that either succeeds in its entirety or fails in its entirety. Applied to a database, this means a group of SQL statements that either all execute successfully or all fail.
Database transactions are strictly defined and must satisfy four properties: atomicity, consistency, isolation, and durability, collectively referred to as ACID. Each property is described below.
- Atomicity: the multiple database operations that make up a transaction form an indivisible atomic unit. The transaction is committed only if all of its operations succeed; if any operation fails, every operation already performed must be undone so that the database returns to its original state.
- Consistency: after a transaction completes, the database must be in a state consistent with its business rules, that is, the data must not be corrupted. For example, if 100 yuan is transferred from account A to account B, the total deposits of A and B must be unchanged whether or not the operation succeeds.
- Isolation: under concurrent data access, different transactions have their own data space and their operations do not interfere with one another. Strictly speaking, complete non-interference is not required: the database defines several transaction isolation levels, each corresponding to a different degree of interference. The higher the isolation level, the better the data consistency, but the weaker the concurrency.
- Durability: once a transaction commits successfully, all of its data operations must be persisted to the database. Even if the database crashes immediately after the commit, it must be able to recover the data through some mechanism when it restarts.
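The all-or-nothing behavior of atomicity can be sketched outside any database with a toy in-memory model (the account names and amounts here are invented for illustration): the state before each transaction is remembered, so a failure part-way through can undo what was already applied.

```java
import java.util.HashMap;
import java.util.Map;

// A toy illustration of atomicity: apply a series of balance changes as one
// unit, and if any step fails, undo every step already performed.
public class AtomicTransfer {
    static Map<String, Integer> accounts = new HashMap<>();

    // Transfers 'amount' from one account to another, all-or-nothing.
    static boolean transfer(String from, String to, int amount) {
        int oldFrom = accounts.get(from);
        int oldTo = accounts.get(to);
        try {
            accounts.put(from, oldFrom - amount); // step 1: debit
            if (oldFrom - amount < 0) {
                throw new IllegalStateException("insufficient funds");
            }
            accounts.put(to, oldTo + amount);     // step 2: credit
            return true;  // "commit": both steps stand
        } catch (IllegalStateException e) {
            // "rollback": undo every step already applied
            accounts.put(from, oldFrom);
            accounts.put(to, oldTo);
            return false;
        }
    }

    public static void main(String[] args) {
        accounts.put("A", 100);
        accounts.put("B", 0);
        transfer("A", "B", 60);               // succeeds: A=40, B=60
        boolean ok = transfer("A", "B", 50);  // fails mid-way, fully undone
        System.out.println(ok + ", A=" + accounts.get("A") + ", B=" + accounts.get("B"));
        // prints: false, A=40, B=60
    }
}
```

Note that the failed second transfer leaves no trace: the debit it had already applied is revoked, exactly as the atomicity rule above requires.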
Among these properties, data consistency is the ultimate goal; the others are measures, requirements, or means of achieving it.
Database management systems generally use a redo log to guarantee atomicity, consistency, and durability. The redo log records every change made to the database. If the database exits with an error after executing only part of a transaction's operations, it can use the log to undo the operations already performed. Conversely, for transactions that have already committed, even if the database crashes, any data not yet persisted can be re-applied from the log when the database restarts.
Just as Java programs use object locks for thread synchronization, database management systems use a locking mechanism to guarantee transaction isolation. When multiple transactions attempt to operate on the same data, only the transaction holding the lock can manipulate that data; subsequent transactions get their chance only after the preceding transaction completes. Oracle additionally uses a multi-version mechanism: the rollback segment keeps a version of the data for each change, so that changing the data does not block reading it.
Data Concurrency Problems
A database may be accessed concurrently by multiple clients, so the same data may be accessed by several transactions at the same time. If no isolation measures are taken, this can cause various concurrency problems and destroy the integrity of the data. These problems fall into five classes: three kinds of data-read problems (dirty reads, non-repeatable reads, and phantom reads) and two kinds of data-update problems (first-category lost updates and second-category lost updates). The scenarios that cause each problem are explained by example below.
Dirty Read
Transaction A reads change data that transaction B has not yet committed and operates on the basis of that data. If transaction B is then rolled back, the data read by transaction A is not valid at all. Consider the dirty read that arises when a withdrawal transaction and a transfer transaction run concurrently:
In this scenario, B wants to withdraw 500 yuan and then cancels the action, while A transfers 100 yuan into the same account. Because transaction A reads data that transaction B has not yet committed, the account ends up losing 500 yuan. In an Oracle database, dirty reads do not occur.
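The essence of the scenario can be sketched as a toy model (no real database involved; the figures are illustrative): a reader that honors uncommitted changes ends up holding a value that is later rolled back and never officially existed.

```java
// A toy model of a dirty read: transaction B changes a shared balance without
// committing, transaction A reads that in-flight value, and then B rolls back.
public class DirtyReadDemo {
    static int committedBalance = 1000;        // the last committed value
    static Integer uncommittedBalance = null;  // B's in-flight change, if any

    // Under "read uncommitted" behavior, a reader sees in-flight changes.
    static int readUncommitted() {
        return uncommittedBalance != null ? uncommittedBalance : committedBalance;
    }

    public static void main(String[] args) {
        uncommittedBalance = committedBalance - 500; // B withdraws 500, not yet committed
        int seenByA = readUncommitted();             // A reads the dirty value: 500
        uncommittedBalance = null;                   // B rolls back; 500 never officially existed
        System.out.println("A saw " + seenByA + ", committed balance is " + committedBalance);
        // prints: A saw 500, committed balance is 1000
    }
}
```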
An analogy: a man with a stutter stands at the counter of a drink stall. The owner greets him warmly: "Have a bottle?" The man hurries to answer: "I... drink... drink..." The owner quickly opens a can and hands it over, at which point the man finally forces out his sentence: "I... drink... drink... I can't afford to drink it!" The owner acted on words that had not yet been "committed", just like a dirty read.
Non-Repeatable Read
A non-repeatable read occurs when transaction A reads change data that transaction B has committed. Suppose that while transaction A is performing a withdrawal, transaction B transfers 100 yuan into the account; the two balances that A reads are then inconsistent:
Within the same transaction, the account balance read at time point T4 differs from the balance read at time point T7.
Phantom Read
A phantom read occurs when transaction A reads new data that transaction B has inserted and committed. Phantom reads typically occur in transactions that compute statistics. For example, suppose a banking system sums the total amount across deposit accounts twice within the same transaction, and between the two passes a new deposit account is created with 100 yuan in it; the two totals will then disagree:
Because the newly inserted data happens to satisfy the transaction's query criteria, it enters the transaction's field of view, and the two statistics become inconsistent.
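The two-pass statistics scenario can be sketched as a toy model (the account balances are invented): no existing row changes at all, yet the totals differ because a new row slips in between the passes.

```java
import java.util.ArrayList;
import java.util.List;

// A toy phantom read: a transaction totals the deposit accounts twice; between
// the two passes another transaction inserts a new account, so the totals
// disagree even though no existing row was modified.
public class PhantomReadDemo {
    static int total(List<Integer> accounts) {
        int sum = 0;
        for (int a : accounts) sum += a;
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> accounts = new ArrayList<>(List.of(500, 300));
        int first = total(accounts);   // first pass: 800
        accounts.add(100);             // another transaction inserts a new 100-yuan account
        int second = total(accounts);  // second pass: 900, a "phantom" row appeared
        System.out.println(first + " vs " + second); // prints: 800 vs 900
    }
}
```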
Phantom reads and non-repeatable reads are easily confused. The former refers to reading new data committed by other transactions, while the latter refers to reading change data (updates or deletes) committed by other transactions, and the countermeasures against them differ. To prevent reading changed data, it is enough to place row-level locks on the data being operated on, blocking changes to it. To prevent reading newly inserted data, a table-level lock is often required, locking the entire table against inserts (Oracle instead achieves this with its multi-version mechanism).
First-Category Lost Update
When transaction A is rolled back, it overwrites the updated data that transaction B has already committed. This error can cause serious problems, as the following withdrawal and transfer against one account shows:
When transaction A is rolled back, it "carelessly" erases the amount that transaction B has already transferred into the account.
Second-Category Lost Update
Transaction A overwrites data that transaction B has already committed, so that transaction B's operation is lost:
In the example above, the transfer transaction's commit overwrites the withdrawal transaction's update to the deposit balance, so the bank loses 100 yuan; conversely, if the withdrawal transaction committed last, the customer's account would lose 100 yuan.
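This overwrite can be sketched deterministically (the amounts are illustrative): both transactions read the same starting balance, and the later write silently discards the earlier one.

```java
// A deterministic sketch of a second-category lost update: two transactions
// read the same starting balance, then each writes back its own result, so
// the later commit silently overwrites the earlier one.
public class LostUpdateDemo {
    static int balance;

    static int run() {
        balance = 1000;
        int readByWithdrawal = balance; // withdrawal transaction reads 1000
        int readByTransfer = balance;   // transfer transaction also reads 1000

        balance = readByWithdrawal - 100; // withdrawal commits: balance = 900
        balance = readByTransfer + 100;   // transfer commits: balance = 1100, the withdrawal is lost

        // A serial execution would have yielded 1000 - 100 + 100 = 1000.
        return balance;
    }

    public static void main(String[] args) {
        System.out.println("final balance = " + run()); // prints: final balance = 1100
    }
}
```

The bank is out 100 yuan, matching the transfer scenario described above; row locks or an appropriate isolation level force the second transaction to re-read before writing.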
Database Locking Mechanism
Data concurrency can cause many problems. In some situations certain problems are acceptable, while in others they can be fatal. Databases solve concurrent-access problems through locking; although databases differ in implementation details, the principle is basically the same.
By the object being locked, locks divide into table locks and row locks: the former locks an entire table, the latter locks specific rows in a table. By the relationship between concurrent transactions, locks divide into shared locks and exclusive locks. A shared lock blocks exclusive locks but allows other shared locks; an exclusive lock blocks both other exclusive locks and other shared locks. To change data, the database must place a row exclusive lock on the rows being changed, and the INSERT, UPDATE, DELETE, and SELECT FOR UPDATE statements implicitly acquire the necessary row locks. Let us look at the five lock modes commonly used in Oracle databases.
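The basic shared/exclusive compatibility rule just stated can be encoded in a few lines (this models only the two basic lock kinds, not Oracle's five table-lock modes below):

```java
// Encodes the basic compatibility rule: a shared lock (S) allows other shared
// locks but blocks exclusive locks (X); an exclusive lock blocks everything.
public class LockCompatibility {
    static boolean compatible(char held, char requested) {
        return held == 'S' && requested == 'S';
    }

    public static void main(String[] args) {
        System.out.println("S then S: " + compatible('S', 'S')); // prints: S then S: true
        System.out.println("S then X: " + compatible('S', 'X')); // prints: S then X: false
        System.out.println("X then S: " + compatible('X', 'S')); // prints: X then S: false
    }
}
```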
- Row share lock: generally acquired implicitly through a SELECT FOR UPDATE statement; in Oracle, a user can also acquire it explicitly with a LOCK TABLE ... IN ROW SHARE MODE statement. A row share lock does not prevent changes to data rows, but it prevents other sessions from acquiring an exclusive table lock. Multiple concurrent row share and row exclusive locks are allowed, as are table share and share row exclusive locks.
- Row exclusive lock: acquired implicitly through an INSERT, UPDATE, or DELETE statement, or explicitly through a LOCK TABLE ... IN ROW EXCLUSIVE MODE statement. This lock prevents other sessions from acquiring a table share lock, a share row exclusive lock, or an exclusive lock.
- Table share lock: acquired explicitly through a LOCK TABLE ... IN SHARE MODE statement. This lock prevents other sessions from acquiring row exclusive locks (via INSERT, UPDATE, or DELETE), share row exclusive locks, and table exclusive locks, but allows multiple row share and table share locks on the table. It gives the session consistent, transaction-level access to the table, because other sessions cannot change the locked table until the holder commits or rolls back the transaction and releases the lock.
- Share row exclusive lock: acquired explicitly through a LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE statement. This lock prevents other sessions from acquiring table share, row exclusive, share row exclusive, or table exclusive locks, but allows row share locks. It resembles a table share lock, except that only one share row exclusive lock can be placed on a table at a time. If session A holds this lock, session B can still execute SELECT FOR UPDATE, but if session B attempts to update the selected rows, it must wait.
- Table exclusive lock: acquired explicitly through a LOCK TABLE ... IN EXCLUSIVE MODE statement. This lock prevents other sessions from placing any other lock on the table.
Transaction Isolation Levels
Although the database lets users lock data during DML operations explicitly, managing locks directly is cumbersome, so databases provide an automatic locking mechanism. The user only needs to specify a transaction isolation level for a session; the database then analyzes the SQL statements in the transaction and automatically places the appropriate locks on the data resources the transaction operates on. In addition, the database maintains these locks itself: when the number of locks on a resource grows too large, it performs automatic lock escalation to improve system performance, completely transparently to the user.
The ANSI/ISO SQL 92 standard defines four transaction isolation levels. In the same data environment, using the same input and performing the same work, different isolation levels can produce different results. The ability of each isolation level to resolve data concurrency problems is shown in Table 9-1.
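The standard mapping summarized in Table 9-1 can also be encoded directly (this reflects the SQL 92 definitions; a concrete database, Oracle included, may do better than the standard requires at a given level):

```java
import java.util.EnumSet;

// The four SQL 92 isolation levels and the read problems each level still
// permits, per the standard's definitions.
public class IsolationLevels {
    enum Problem { DIRTY_READ, NON_REPEATABLE_READ, PHANTOM_READ }

    static EnumSet<Problem> allowedProblems(String level) {
        switch (level) {
            case "READ_UNCOMMITTED": return EnumSet.allOf(Problem.class);
            case "READ_COMMITTED":   return EnumSet.of(Problem.NON_REPEATABLE_READ, Problem.PHANTOM_READ);
            case "REPEATABLE_READ":  return EnumSet.of(Problem.PHANTOM_READ);
            case "SERIALIZABLE":     return EnumSet.noneOf(Problem.class);
            default: throw new IllegalArgumentException(level);
        }
    }

    public static void main(String[] args) {
        String[] levels = {"READ_UNCOMMITTED", "READ_COMMITTED", "REPEATABLE_READ", "SERIALIZABLE"};
        for (String level : levels) {
            System.out.println(level + " still allows: " + allowedProblems(level));
        }
    }
}
```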
A transaction's isolation level is inversely related to the database's concurrency. In general, a database running at the READ UNCOMMITTED isolation level has the highest concurrency and throughput, while one running at SERIALIZABLE has the lowest.
SQL 92 defines READ UNCOMMITTED primarily to provide non-blocking reads. Although Oracle also supports READ UNCOMMITTED, it does not permit dirty reads. Oracle's READ COMMITTED isolation level already satisfies the SQL 92 REPEATABLE READ level, because Oracle's multi-version mechanism completely solves the problem of dirty reads while providing non-blocking reads and guaranteeing read consistency.
SQL 92 recommends REPEATABLE READ to guarantee read consistency, but users can choose an appropriate isolation level according to the needs of their application.
JDBC Support for transactions
Not every database supports transactions, and even a database that does may not support every transaction isolation level. A user can obtain a DatabaseMetaData object through the Connection#getMetaData() method and use its supportsTransactions() and supportsTransactionIsolationLevel(int level) methods to check the transaction support of the underlying database.
By default a Connection auto-commits, that is, each executed SQL statement is its own transaction. To execute several SQL statements as one transaction, you must first call Connection#setAutoCommit(false) to disable auto-commit, and you can set the transaction's isolation level through Connection#setTransactionIsolation(); Connection defines constants for the four SQL 92 isolation levels. A transaction is committed with Connection#commit() and rolled back with Connection#rollback(). The following is typical JDBC transactional data-access code:
Code listing 9-1 JDBC transaction code
Connection conn = null;
try {
    // ① obtain a connection (the URL, user, and password here are placeholders)
    conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/sampledb", "user", "password");
    conn.setAutoCommit(false); // ② disable auto-commit
    conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); // ③ set the transaction isolation level
    Statement stmt = conn.createStatement();
    int rows = stmt.executeUpdate("INSERT INTO t_topic VALUES (1, 'Tom')");
    rows = stmt.executeUpdate("UPDATE t_user SET topic_nums = topic_nums + 1 " +
            "WHERE user_id = 1");
    conn.commit(); // ④ commit the transaction
} catch (Exception e) {
    ...
    conn.rollback(); // ⑤ roll back the transaction
} finally {
    ...
}
In JDBC 2.0, a transaction can end in only two ways: commit or rollback. Some applications, however, need finer control over a transaction than simply committing or rolling back everything. JDBC 3.0 (JDK 1.4 and later) introduces savepoints, which let the user split a transaction into multiple stages and roll back to a specific savepoint within the transaction, rather than only to the point where the transaction began as in JDBC 2.0, as shown in Figure 9-1.
The following code uses a savepoint to roll back to a specified point when a particular problem occurs, instead of rolling back the entire transaction, as shown in Listing 9-2:
Code listing 9-2 Transaction code using a savepoint
...
Statement stmt = conn.createStatement();
int rows = stmt.executeUpdate("INSERT INTO t_topic VALUES (1, 'Tom')");
Savepoint svpt = conn.setSavepoint("savePoint1"); // ① set a savepoint
rows = stmt.executeUpdate("UPDATE t_user SET topic_nums = topic_nums + 1 " +
        "WHERE user_id = 1");
...
// ② roll back to the savepoint set at ①: the SQL executed before ① is still
// committed when the whole transaction commits, but the SQL between ① and ②
// is undone.
conn.rollback(svpt);
...
conn.commit(); // ③ commit the transaction
Not all databases support savepoints; users can check with the DatabaseMetaData#supportsSavepoints() method.
These articles are excerpted from my book "Spring 3.x Enterprise Application Development Practice" and will be serialized here in installments. You are welcome to join the discussion.