A common problem in current J2EE projects is controlling concurrent access to transactions. Although persistence layer frameworks already do much of this work for us, understanding the underlying principles is still very useful for our development.
This article summarizes several problems encountered in current J2EE persistence layer design, based on Hibernate and the JPA standard:
Transaction concurrency access control policy
Transaction concurrency control can be divided into two cases: concurrency within the same system transaction, and concurrency across system transactions. Within a single system transaction we can use optimistic and pessimistic locks; across system transactions we need optimistic offline locks and pessimistic offline locks. Before discussing these four concurrency control strategies, we need to clarify the database transaction isolation levels. The ANSI standard specifies four of them:
Read Uncommitted
This is the lowest transaction isolation level. Read transactions block neither read nor write transactions; write transactions do not block read transactions, but do block other write transactions. As a result, a read transaction can see data that a write transaction has not yet committed, which leads to dirty reads.
Read Committed
At this isolation level, write transactions block both read and write transactions, while read transactions block nothing. Because a write transaction blocks readers, a read transaction can no longer see dirty data; but because read transactions do not block other transactions, the problem of non-repeatable reads remains.
Repeatable Read
At this isolation level, read transactions block write transactions but not other read transactions, while write transactions block both read and write transactions. Because readers block writers, non-repeatable reads can no longer occur, but phantom reads are still possible.
Serializable
This is the strictest isolation level. At this level none of the above problems (dirty reads, non-repeatable reads, phantom reads) can occur, but it severely hurts system performance. We should therefore avoid it and instead adopt a lower isolation level, using concurrency control strategies to manage concurrent access to transactions.
In fact, we could set the transaction isolation level to Serializable and let the database do all the concurrency control for us, with no application-level strategy at all; but that would seriously hurt the scalability and performance of our system. In practice we generally run at the Read Committed or a lower isolation level and use various concurrency control strategies to control concurrent transactions. The common strategies are summarized below:
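For reference, the four ANSI isolation levels map directly onto constants of `java.sql.Connection`, and a JDBC connection's level is chosen with `setTransactionIsolation`. A minimal sketch (it only prints the constant values, so no database is needed):

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // The four ANSI isolation levels as JDBC constants; on a live
        // connection you would call conn.setTransactionIsolation(level).
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```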
1. Optimistic lock
Optimistic locking is a common strategy within a single database transaction, because it provides concurrency control while keeping system performance high. An optimistic lock, as its name implies, takes an optimistic attitude: we assume that concurrent updates in the system will not be very frequent, and that even if a conflict happens it is acceptable to simply retry. The basic idea is that every time a transaction commits an update, we check whether the record has been modified by another transaction since we last read it; if it has, the update fails.
Finally, we need to clarify one point: because an optimistic lock does not actually lock any record, if the database transaction isolation level is set to Read Committed or lower, non-repeatable reads cannot be avoided (read transactions do not block other transactions at these levels). So when optimistic locking is adopted, the system must be able to tolerate non-repeatable reads.
Now that we understand the concept of optimistic locking, how do we use this strategy in our systems? Generally, one of the following three approaches is used:
Version field: add a version control field to the entity and increment it by 1 on every transaction update.
Timestamp: with this approach, each time an update is committed, the current system time is compared with the time recorded when the object was loaded; if they are inconsistent, an optimistic locking failure is reported, and the transaction is rolled back or retried. Timestamps have some drawbacks: in a cluster environment the clocks of the nodes may not be synchronized, and if the interval between concurrent transactions is smaller than the smallest clock unit of the platform, one transaction will silently overwrite the result of another. It is therefore better to use a version field.
Detection based on all attributes: with this approach, after a record is read, every field must be compared at update time to decide whether it has been modified. This is harder to implement because each attribute needs to be compared. If Hibernate is used, however, it can perform dirty checking in its first-level cache, determine which fields were modified, and dynamically generate the update SQL.
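The "compare all attributes" idea can be sketched in plain Java: keep a snapshot of the fields as they were at load time and compare them at commit. This is a hypothetical simplification (the `Account` fields and `isDirty` helper are invented for illustration), not Hibernate's actual dirty-checking implementation:

```java
public class DirtyCheckDemo {
    static class Account {
        long balance;
        String owner;
        Account(long balance, String owner) { this.balance = balance; this.owner = owner; }
    }

    // Compare every attribute against the load-time snapshot; any difference
    // means the field was modified and must appear in the generated update.
    static boolean isDirty(Account snapshot, Account current) {
        return snapshot.balance != current.balance
                || !snapshot.owner.equals(current.owner);
    }

    public static void main(String[] args) {
        Account snapshot = new Account(100L, "alice"); // state as loaded
        Account current = new Account(100L, "alice");  // state in the session
        System.out.println(isDirty(snapshot, current)); // false: nothing changed
        current.balance = 150L;
        System.out.println(isDirty(snapshot, current)); // true: balance modified
    }
}
```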
Next we will summarize how to use optimistic locks in JDBC and Hibernate:
Using optimistic locks in JDBC: if we implement the persistence layer with plain JDBC, any of the three optimistic locking approaches can be used: add a version field or a date field to the entity, or compare all attributes. The following demonstrates the version field:
Suppose the system has an Account entity class and we add a version field to Account. The JDBC SQL statements would be written as follows:
- select a.version, .... from Account as a where (where condition ..)
- update Account set version = version + 1, ..... (other fields) where version = ? ... (other conditions)
In this way, we can judge by the number of rows affected by the update: if it is 0, the record has been changed by another transaction since it was loaded, so we throw a custom optimistic locking exception (or use the exception hierarchy encapsulated by Spring). A concrete example:
- .......
- int rowsUpdated = statement.executeUpdate(sql);
- if (rowsUpdated == 0) {
-     throw new OptimisticLockingFailureException();
- }
- ........
When using the JDBC API directly, we must update and check the version field in every update statement, so if we are not careful the version field may be left un-updated somewhere. ORM frameworks, by contrast, do all of this for us: all we need to do is add a version or date field to each entity.
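The version-check-and-increment logic above can be simulated without a database, using an in-memory map as a hypothetical "table" (the `updateBalance` method and its row layout are invented for illustration; a real implementation would issue the `update ... where version = ?` statement shown earlier):

```java
import java.util.HashMap;
import java.util.Map;

public class OptimisticLockDemo {
    // Hypothetical in-memory "table": id -> {balance, version}.
    static Map<Long, long[]> accounts = new HashMap<>();

    // Mimics "update Account set balance = ?, version = version + 1
    //         where id = ? and version = ?":
    // the update succeeds only if the caller still holds the current version.
    static boolean updateBalance(long id, long newBalance, long expectedVersion) {
        long[] row = accounts.get(id);
        if (row == null || row[1] != expectedVersion) {
            return false; // zero rows updated: another transaction got there first
        }
        row[0] = newBalance;
        row[1] = expectedVersion + 1;
        return true;
    }

    public static void main(String[] args) {
        accounts.put(1L, new long[]{100L, 0L});
        // Two "transactions" both read version 0; only the first commit wins.
        System.out.println(updateBalance(1L, 150L, 0L)); // true
        System.out.println(updateBalance(1L, 120L, 0L)); // false: stale version
    }
}
```

The losing caller would then throw an optimistic locking exception and retry, exactly as in the JDBC fragment above.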
Using optimistic locks in Hibernate: if we use Hibernate as the persistence layer framework, implementing optimistic locking becomes very easy, because the framework generates the corresponding SQL for us. This both reduces the developer's burden and prevents mistakes. The following summarizes the version field approach:
Similarly, suppose the system has an Account entity class and we add a version field to Account:
- public class Account {
-     Long id;
-     .......
-     @Version // can also be configured in an XML mapping file
-     int version;
-     .......
- }
In this way, every time we commit a transaction, Hibernate generates the corresponding SQL to increment the version field by 1 and performs the corresponding version check. If a concurrent modification is detected, it throws a StaleObjectStateException.
2. Pessimistic lock
A pessimistic lock, as its name implies, takes a pessimistic attitude towards transaction concurrency: we assume that concurrent updates in the system will be frequent and that the overhead of retrying failed transactions is high, so we implement concurrency control with a real database lock. The basic idea is that every time a transaction reads a record, it locks that record, so that any other transaction wanting to update it must wait until the first transaction commits or rolls back and releases the lock.
Once again we need to clarify one point. If the database isolation level is Read Committed or lower and we prevent non-repeatable reads with a pessimistic lock, phantom reads still cannot be avoided: only the Serializable isolation level prevents them, and since we generally run at Read Committed or lower and use optimistic or pessimistic locks for concurrency control, phantom reads remain possible. If you must avoid phantom reads, the only option is the database's Serializable isolation level (fortunately, phantom reads are usually not a serious problem).
Here we will summarize JDBC and Hibernate respectively:
Using pessimistic locks in JDBC: to use a pessimistic lock in JDBC you need the select ... for update statement. If the system has an Account class, it can be used as follows:
- select * from Account where ...(where condition).. for update
When for update is used, every record read or loaded by the statement is locked; any other transaction that wants to update or load one of these records blocks because it cannot obtain the lock. This avoids non-repeatable reads and dirty reads. Other transactions can still insert and delete records, however, so two reads within the same transaction may still return different result sets; but that is a consequence of the database isolation level, not a defect of the pessimistic lock.
Note that every transaction that may conflict must access the database with select ... for update. If some transactions do not use it, errors can easily occur; this is a drawback of implementing pessimistic control directly with JDBC.
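The row-level blocking behavior of select ... for update can be mimicked in plain Java with one lock per record (a hypothetical sketch, not what the database actually does internally; the `lockRow` helper and per-id lock map are invented for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class PessimisticLockDemo {
    // Hypothetical per-record locks, standing in for the row locks taken by
    // "select ... for update": a second transaction on the same row blocks.
    static ConcurrentHashMap<Long, ReentrantLock> rowLocks = new ConcurrentHashMap<>();

    static ReentrantLock lockRow(long id) {
        ReentrantLock lock = rowLocks.computeIfAbsent(id, k -> new ReentrantLock());
        lock.lock(); // blocks until the holding "transaction" commits or rolls back
        return lock;
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock held = lockRow(1L); // transaction A locks the row
        Thread b = new Thread(() -> {
            ReentrantLock l = lockRow(1L); // transaction B waits here
            l.unlock();
        });
        b.start();
        Thread.sleep(100);
        System.out.println(b.isAlive()); // true: B is blocked on the row lock
        held.unlock(); // A commits, releasing the lock
        b.join();
        System.out.println(b.isAlive()); // false: B acquired the lock and finished
    }
}
```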
Using pessimistic locks in Hibernate: compared with JDBC, using pessimistic locks in Hibernate is much easier, because Hibernate provides an API for us to call, so we do not have to write the SQL directly. Here is a summary of Hibernate's pessimistic locking:
First, we need to clarify the two pessimistic lock modes that Hibernate supports: LockMode.UPGRADE and LockMode.UPGRADE_NO_WAIT. (PS: in JPA the corresponding lock modes are defined on LockModeType, whose names differ from Hibernate's.)
If our system has an Account class, the specific operations can be as follows:
- .......
- session.lock(account, LockMode.UPGRADE);
- ......
Alternatively, you can use the following method to load objects:
- session.get(Account.class, identity, LockMode.UPGRADE);
In this way, when the object is loaded, Hibernate generates a select ... for update statement, locking the corresponding record and preventing concurrent updates from other transactions.
The two strategies above apply within a single system transaction. To implement concurrency control across multiple system transactions, we need the other two strategies: the optimistic offline lock and the pessimistic offline lock.