Hibernate data changes are not reflected in time

Source: Internet
Author: User
Tags: sessions

The problem mainly occurs when adding or modifying data: the data list does not show the associated-object information for the record that was just inserted (the main data of the newly inserted or modified record is displayed; only the associated data is missing). After a refresh it may show up, and after another refresh it may disappear again; it appears or not at random. I don't understand why.

When you modify the database manually, the data in the Hibernate cache may be out of date. To keep Hibernate consistent with the database, the usual practice is to clear the cache after a manual database change and before querying the data through Hibernate. In other words, call Session.clear() before executing the query.
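For example (a minimal sketch, assuming the same User entity and sessionFactory used in the examples further below):

Session session = sessionFactory.openSession();
// ... the database is modified by hand or through plain JDBC here ...
session.clear(); // empty the first-level cache so stale copies are discarded
List users = session.createQuery("from User u where u.age > 0").list(); // the query now reads fresh rows from the database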

Reference: Hibernate cache Management

1. Caching Overview

A cache is a collection of data held in memory by a Java application. It keeps a copy of data from a persistent storage source, such as a file or a database on disk, and reading and writing the cache is faster than reading and writing the disk. At run time the application reads and writes the data in the cache directly, and only synchronizes the cache back to the storage source at certain points. If the amount of cached data is very large, the hard disk can also be used as the cache's physical medium.

Caching improves application performance by reducing how often the application reads from and writes to the persistent storage source directly.

Implementing a cache requires not only hardware (memory) as the physical medium, but also software that manages policies such as concurrent access and cache expiration.

2. Cache Scope Classification

The scope of a cache determines the cache's lifecycle and who can access it. There are three categories in total:

1) Transaction Scope


A transaction-scoped cache can only be accessed by the current transaction. Each transaction has its own cache, and the data in the cache usually takes the form of interrelated objects. The cache's lifecycle depends on the transaction's lifecycle: the cache ends only when the transaction ends. Transaction-scoped caches use memory as the storage medium. Hibernate's first-level cache is a transaction-scoped cache.


2) Application scope (also called process scope)


An application-scoped cache can be shared by all transactions within the application. Its lifecycle depends on the application's lifecycle: the cache ends only when the application ends. Application-scoped caches can use either memory or the hard disk as the storage medium. Hibernate's second-level cache is an application-scoped cache.


3) Cluster scope


In a clustered environment, the cache is shared by processes on one or more machines. The data in the cache is replicated to each process node in the cluster, and consistency between processes is maintained through remote communication. The cached data usually takes the loose (disassembled) form of the objects.


For most applications, consider carefully whether cluster-scoped caching is really needed, because accessing it is not necessarily much faster than accessing the database directly.


3. Concurrent access policies for caching

When multiple concurrent transactions access the same data in the persistence layer's cache simultaneously, concurrency problems arise, and appropriate transaction isolation measures must be applied.

Concurrency problems can occur in process-scoped and cluster-scoped caches, so four concurrent access policies can be configured, each corresponding to a transaction isolation level. The transactional policy corresponds to the highest isolation level, and the read-only policy to the lowest. The higher the transaction isolation level, the lower the concurrency performance.


1) Transactional: applicable only in a managed environment. It provides the repeatable-read transaction isolation level. This policy can be used for data that is frequently read but rarely modified, because it prevents concurrency problems such as dirty reads and non-repeatable reads.


2) Read-write: provides the read-committed transaction isolation level. Applicable only in a non-clustered environment. This policy can be used for data that is frequently read but rarely modified, because it prevents concurrency problems such as dirty reads.


3) Non-strict read-write: does not guarantee consistency between the cache and the database. If two transactions may access the same cached data at the same time, configure a very short expiration time for that data to minimize dirty reads. This policy can be used for data that is rarely modified and for which occasional dirty reads are tolerable.


4) Read-only: this policy can be used for data that is never modified, such as reference data. (A mapping sketch for all four policies follows below.)
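As a rough sketch, the policy is chosen per class or collection in its mapping file through the usage attribute of the <cache> element (shown in full in the configuration examples further below); the values correspond to the four policies above:

<cache usage="transactional"/>          <!-- 1) transactional -->
<cache usage="read-write"/>             <!-- 2) read-write -->
<cache usage="nonstrict-read-write"/>   <!-- 3) non-strict read-write -->
<cache usage="read-only"/>              <!-- 4) read-only -->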


4. Caching in Hibernate

Hibernate provides two levels of caching. The first level is the Session-level cache, which is a transaction-scoped cache. The second level is the SessionFactory-level cache, which is a process-scoped or cluster-scoped cache. This level of cache is configurable and pluggable, and can be loaded and unloaded dynamically. Hibernate also provides a query cache for query results, which depends on the second-level cache.


First-level cache management

The first-level cache is provided by the Session, so it exists only within the Session's lifecycle. When the program calls save(), update(), saveOrUpdate(), or invokes query methods such as list(), filter(), or iterate(), Hibernate adds the corresponding object to the first-level cache if it is not already there.
When the Session is closed, the first-level cache it manages is cleared immediately.
The first-level cache is built into the Session; it cannot be unloaded and cannot be configured.

The first-level cache is implemented as a key-value map: when an entity object is cached, its primary key (ID) is the map key and the entity object itself is the corresponding value. So the first-level cache stores entity objects, and they are accessed by primary key ID.
Although Hibernate maintains the first-level cache automatically and provides no configuration options for it, you can intervene in its management manually through methods the Session provides. The Session offers the following two methods:
evict(): removes a single object from the Session's first-level cache.
The evict() method applies to the following two scenarios:
1) when changes to the object do not need to be synchronized to the database;
2) in batch updates or deletes, to release the memory occupied by each object after it has been updated or deleted (see the sketch below).
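A minimal sketch of the second scenario, reusing the batch loop shown further below (the User entity is assumed):

// inside the batch loop, after each object has been processed:
session.flush();      // push the pending UPDATE for this object to the database
session.evict(user);  // then remove only this object from the first-level cache to free memory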

clear(): removes all objects from the first-level cache.

A one-off update of a large amount of data consumes a great deal of memory to cache the objects being updated. The clear() method should therefore be called periodically to empty the first-level cache and keep its size under control, avoiding an out-of-memory error.
Hibernate's cache-handling approach for a large batch update:
(assume our users table has 5,000 records with age greater than 0)
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
Iterator users = session.createQuery("from User u where u.age > 0").iterate(); // HQL statement, not explained here
while (users.hasNext()) {
    User user = (User) users.next();
    user.setAge(user.getAge() + 1);
    // immediately write the objects updated in this batch to the database and free the memory
    session.flush();
    session.clear();
}
tx.commit();
session.close();
When handling a large amount of data with Hibernate this way, you must execute the UPDATE statement 5,000 times to update the 5,000 User objects, which hurts performance. When we run into a performance-versus-space trade-off in a project, we favor performance, which means sacrificing space.

So it is best to bypass the Hibernate API and do this directly through the JDBC API.

Let's change the code above:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
Connection conn = session.connection();
PreparedStatement pstmt = conn.prepareStatement("update users set age = age + 1 " + "where age > 0");
pstmt.executeUpdate();
tx.commit();
Although this goes through the JDBC API, the transaction boundary is in essence still declared through Hibernate's Transaction interface.

In fact, the best solution is to create a stored procedure and have the underlying database run it. The performance is good and it is fast.

I'll simply take the Oracle database as an example. Create a stored procedure named userUpdate, then call it from the program.
userUpdate stored procedure code:
create or replace procedure userUpdate(u_age in number) as
begin
    update users set age = age + 1 where age > u_age;
end;
The following shows how to invoke the stored procedure from the program:
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
Connection conn = session.connection();
String str = "{call userUpdate(?)}";
CallableStatement cstmt = conn.prepareCall(str);
cstmt.setInt(1, 0);
cstmt.executeUpdate();
tx.commit(); // note: stored procedures were not supported in the open-source MySQL before version 5.0
The benefit of using the JDBC API is that it does not have to load a large amount of data into memory before updating it, so it does not consume a lot of memory.
(With a small program you will not notice any difference; the difference between the Hibernate API and the JDBC API naturally shows up once the number of records reaches a certain volume.)
With JDBC the batch update is done with a single statement, unlike Hibernate, which updates the records one by one.

The first level is the Session's cache. Because a Session object's lifecycle usually corresponds to a database transaction or an application transaction, its cache is a transaction-scoped cache. The first-level cache is mandatory; it cannot be turned off and in fact cannot be removed. In the first-level cache, each instance of a persistent class has a unique OID.
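A small sketch of this behaviour, assuming the User entity from the earlier examples and an existing record with id 1:

Session session = sessionFactory.openSession();
User u1 = (User) session.get(User.class, new Long(1)); // first call: queries the database and caches the object
User u2 = (User) session.get(User.class, new Long(1)); // second call: served from the first-level cache, no SQL is issued
System.out.println(u1 == u2); // true: one Session holds exactly one instance per OID
session.close();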



Second-level cache management

The second-level cache is a pluggable cache plug-in managed by the SessionFactory. Because the SessionFactory object's lifecycle corresponds to the whole application process, the second-level cache is a process-scoped or cluster-scoped cache. Objects are stored in this cache in a loose (disassembled) form. Objects in the second-level cache may be accessed concurrently, so an appropriate concurrent access policy must be chosen, which provides the transaction isolation level for the cached data. A cache adapter is used to integrate the concrete cache implementation with Hibernate. The second-level cache is optional and can be configured at the granularity of each class or collection.


The general process of Hibernate's second-level caching strategy is as follows:


1) When querying by condition, Hibernate always issues a SELECT * FROM table_name WHERE ... statement (selecting all fields) against the database and fetches all the data objects at once.


2) All the fetched data objects are put into the second-level cache, keyed by ID.


3) When Hibernate accesses a data object by ID, it first checks the Session-level cache; if a second-level cache is configured, it checks the second-level cache next; only if both miss does it query the database, and the result is then put into the cache keyed by ID (see the sketch after this list).


4) When data is deleted, updated, or inserted, the cache is updated at the same time.
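A minimal sketch of step 3), assuming the User class has a <cache> element configured as in the configuration examples further below:

Session s1 = sessionFactory.openSession();
User u1 = (User) s1.get(User.class, new Long(1)); // misses both caches, queries the database, and puts the object into the second-level cache by ID
s1.close();

Session s2 = sessionFactory.openSession();
User u2 = (User) s2.get(User.class, new Long(1)); // new Session, empty first-level cache, but the object is found in the second-level cache, so no SQL is issued
s2.close();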


Hibernate's second-level caching strategy is a caching strategy for lookups by ID; it has no effect on conditional queries. For conditional queries, Hibernate provides the query cache.


The procedure for Hibernate's query caching policy is as follows:


1) Hibernate first forms a query key from the request information of the conditional query: the SQL, the SQL parameters, the record range (starting position and maximum number of records), and so on.


2) Hibernate looks up the corresponding result list in the query cache using this query key. If it exists, the result list is returned; if not, the database is queried, the result list is obtained, and the entire result list is put into the query cache under the query key (see the sketch after this list).


3) The SQL in a query key involves certain table names. If any data in those tables is modified, deleted, or inserted, the related query keys are evicted from the cache.
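A minimal sketch of how a conditional query takes part in the query cache (the HQL and parameter are illustrative; the DAO method in the configuration section below shows the same call in context):

Query query = session.createQuery("from User u where u.age > :age");
query.setInteger("age", 0);
query.setCacheable(true);  // the query key (HQL + parameters + record range) will be looked up in the query cache
List users = query.list(); // the first run queries the database and stores the result list under the query key; identical later runs read it from the cache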


The following four kinds of data are suitable for storing in the second-level cache:

1) data that is rarely modified

2) data that is not critical, for which occasional concurrency problems are tolerable

3) data that will not be accessed concurrently

4) reference data, meaning constant data supplied for reference: its number of instances is limited, its instances are referenced by instances of many other classes, and its instances are rarely or never modified

Data that is frequently modified, financial data (for which concurrency problems are absolutely not allowed), and data shared with other applications should not be placed in the second-level cache.


Common Cache Plug-ins

Hibernate's second-level cache is pluggable; the following are several commonly used cache plug-ins:

1) EhCache: can serve as a process-scoped cache; the physical medium for the cached data can be memory or the hard disk; it supports the Hibernate query cache.

2) OSCache: can serve as a process-scoped cache; the physical medium for the cached data can be memory or the hard disk; it provides rich cache-expiration policies; it supports the Hibernate query cache.

3) SwarmCache: can serve as a cluster-scoped cache, but does not support the Hibernate query cache.

4) TreeCache: can serve as a cluster-scoped cache, supports the transactional concurrent access policy, and supports the Hibernate query cache.


Second-level cache example

Configuration one:

Add the following to the hibernate.cfg.xml file:

<!-- turn on the second-level cache -->
<property name="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</property>
<!-- enable the query cache -->
<property name="hibernate.cache.use_query_cache">true</property>

Configuration two:

Create a new ehcache.xml file under the project's src folder with the following contents:

<?xml version= "1.0" encoding= "UTF-8"?>
<ehcache>
<diskstore path= "Java.io.tmpdir"/>
<defaultcache maxelementsinmemory= "10000" eternal= "false" overflowtodisk= "true" timetoidleseconds= "300" timetoliveseconds= "180" diskpersistent= "false" diskexpirythreadintervalseconds= "/>"

</ehcache>

Configuration three:

To cache objects of a class, add the <cache usage="read-only"/> element to its hbm mapping file, for example:

<?xml version= "1.0"?>
<! DOCTYPE hibernate-mapping Public "-//hibernate/hibernate mapping DTD 3.0//en"
"http://hibernate.sourceforge.net/ Hibernate-mapping-3.0.dtd ">
<!-- 
    mapping file autogenerated by myeclipse-hibernate Tools
-->
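The generated fragment above stops at the header comment; a minimal sketch of how the mapping body might continue, with a hypothetical User class and the <cache> element placed at the top of the <class> element:

<hibernate-mapping>
    <class name="com.example.User" table="users">
        <!-- hypothetical mapping; the cache element comes before the id element -->
        <cache usage="read-only"/>
        <id name="id" column="id">
            <generator class="native"/>
        </id>
        <property name="age" column="age"/>
    </class>
</hibernate-mapping>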

Configuration four:

To use the query cache, the Query must have cacheable set to true: query.setCacheable(true);

For example, the method used for HQL queries in the DAO parent class is modified to:

/**
 * Executes an HQL query.
 *
 * @param hql
 * @return
 */
public List executeQuery(String hql) {
    List list = new ArrayList();
    Session session = HibernateSessionFactory.currentSession();
    Transaction tx = null;
    Query query = session.createQuery(hql);
    query.setCacheable(true);
    try {
        tx = session.beginTransaction();
        list = query.list();
        tx.commit();
    } catch (Exception ex) {
        ex.printStackTrace();
        HibernateSessionFactory.rollbackTransaction(tx);
    } finally {
        HibernateSessionFactory.closeSession();
    }
    return list;
}

Addendum: when the object to be cached has cascading relationships, if every object in a cascading relationship with it carries the <cache usage="read-only"/> element, then after the first get, every object in the object graph it holds is saved into Hibernate's second-level cache, and on the second get all the cascaded objects are found directly in the second-level cache. If one of the cascaded objects lacks the <cache usage="read-only"/> element, it is not saved into the second-level cache, and SQL is still executed on every get to look that cascaded object up in the database.
