A cache is often the single most important means of improving system performance; it acts as a reservoir and buffer for data. Caching is especially important for systems that rely heavily on data-read operations. Under heavy concurrency, if every request has to query the database directly, the performance overhead is obvious: frequent network round trips and disk reads and writes on the database will greatly reduce the performance of the system. If, instead, a mirror of the database data is kept in local memory, subsequent accesses can be served directly from memory, which clearly brings a significant performance gain. The difficulty of introducing a caching mechanism is ensuring that the data held in memory stays valid; otherwise dirty data can have unpredictable and serious consequences for the system. While a well-designed application can deliver acceptable performance without caching, there is no doubt that applications with heavier read requirements can achieve much higher performance with a cache. For an application, the cache keeps the current state of the database's data in memory or on disk; it is a locally stored backup of that data. The cache sits between the database and the application, refreshes its data from the database, and serves data to the program.
Hibernate implements a solid caching mechanism, and with the help of Hibernate's internal caches the data-read performance of a system can be improved quickly. The cache in Hibernate is divided into two tiers: the first-level cache and the second-level cache.
First-level cache:
The Session implements the first-level cache, which is a transaction-scoped data buffer: once the transaction ends, the cache is invalidated as well. The life cycle of a Session corresponds to a database transaction or an application transaction.
The Session cache guarantees that when the same object is requested twice within one Session, the object obtained is the same Java instance, which can avoid unnecessary data conflicts. It also provides some other important guarantees:
1: No stack overflow occurs when an object graph contains circular (self-referencing) associations.
2: When the database transaction ends there can be no conflicting updates to the same database row, because each row is represented by at most one object in the Session.
3: A transaction may contain many units of work, and changes made in one unit of work are immediately visible to all the others.
We do not have to enable the Session cache: it is always on and cannot be turned off. When data changes are saved with save(), update() or saveOrUpdate(), or objects are fetched with load(), get(), find() or list(), those objects are added to the Session cache.
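As a minimal sketch (assuming a Student entity with a Long identifier and the same HibernateUtil helper used in the examples below; org.hibernate imports are omitted, as elsewhere in this article), loading the same identifier twice within one Session issues only one query and returns the same Java instance:

Session session = HibernateUtil.currentSession();
Transaction tx = session.beginTransaction();

Student first = (Student) session.get(Student.class, new Long(1));    // one SELECT against the database
Student second = (Student) session.get(Student.class, new Long(1));   // no SQL: served from the Session cache

System.out.println(first == second);   // prints true: the same Java instance within one Session

tx.commit();
session.close();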
If a large number of data objects need to be synchronized, the cache has to be managed effectively; the Session's evict() method can remove objects from the first-level cache (a sketch of evict() itself follows the batch example below). Consider the following code:

Session session = HibernateUtil.currentSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++)
{
    Student stu = new Student();
    session.save(stu);
}
tx.commit();
session.close();

When 50,000 or more objects are saved this way, the program may throw an OutOfMemoryError, because Hibernate keeps every newly added object in the first-level cache until memory runs out. To solve this problem, set the number of JDBC batched statements to a reasonable value (typically 10 to 20) by adding the following property to Hibernate's configuration file:
<property name="hibernate.jdbc.batch_size">20</property>
Then flush and clear the Session cache at suitable points in the program:

Session session = HibernateUtil.currentSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++)
{
    Student stu = new Student();
    session.save(stu);
    if (i % 20 == 0)   // after every 20 objects are saved, do the following
    {
        session.flush();   // push the pending inserts to the database
        session.clear();   // clear the Session cache and free memory
    }
}
tx.commit();
session.close();
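As promised above, here is a minimal sketch of evict() itself (same assumed HibernateUtil helper and Long identifier as before). Unlike clear(), which empties the whole first-level cache, evict() detaches just one object, so it can be garbage-collected while the Session stays open:

Session session = HibernateUtil.currentSession();
Transaction tx = session.beginTransaction();

Student stu = (Student) session.load(Student.class, new Long(1));   // the object enters the Session cache
// ... work with stu ...
session.evict(stu);    // remove only this instance from the first-level cache
// session.clear();    // alternatively, remove everything at once, as in the loop above

tx.commit();
session.close();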
Second-level cache:
The second-level cache is scoped to the SessionFactory, and all Sessions share the same second-level cache, which stores the data of persistent instances in a disassembled form. The internal implementation of the second-level cache is not what matters; what matters is choosing the right caching strategy and the right cache provider. Different data requires different cache policies; factors that influence the choice include the read/write ratio of the data and whether the table can be accessed by other applications. For data with a high read/write ratio, enabling the cache and letting that data into the second-level cache benefits performance, while for data objects that can be accessed by other applications it is best to switch off the second-level cache for those objects.
Setting up the Hibernate second-level cache takes two steps: first decide which concurrency strategy to use, then configure the cache expiration settings and the cache provider.
Hibernate has four built-in cache concurrency strategies, each corresponding to a transaction isolation level, as follows:
1: Transactional (transactional): available only in managed environments. It guarantees repeatable-read transaction isolation and can be used for data with a high read/write ratio that is rarely updated.
2: Read/write (read-write): uses a timestamp mechanism to maintain read-committed transaction isolation. This strategy can likewise be used for data with a high read/write ratio that is rarely updated.
3: Non-strict read/write (nonstrict-read-write): consistency between the cache and the database is not guaranteed. When using this strategy, set a reasonable cache expiration time, or you may read dirty data from the cache. It is suitable for data that rarely changes and where occasional staleness has little impact.
4: Read-only (read-only): use this strategy when the data is guaranteed never to change.
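Whichever strategy is chosen, it is declared per class (or per collection) through the usage attribute of the <cache> element in the mapping file. A purely illustrative sketch for a hypothetical, never-changing lookup entity might look like this (the Student example later in this article uses read-write):

<class name="Country" table="t_country">
    <!-- hypothetical read-only entity: its rows never change at runtime -->
    <cache usage="read-only"/>
    <id name="id" column="id">
        <generator class="native"/>
    </id>
    <property name="name" column="name"/>
</class>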
Once the cache strategy has been chosen, an efficient cache provider is needed; Hibernate calls it as a plugin. Hibernate allows the following cache plugins to be used:
EhCache: can be used as a simple process-scoped cache within a single JVM; it can keep cached data in memory or on disk and supports Hibernate's optional query cache.
OpenSymphony OSCache: similar to EhCache and provides a rich set of cache expiration policies.
SwarmCache: can be used as a cluster-scoped cache, but does not support the query cache.
JBossCache: can be used as a cluster-scoped cache, but does not support the query cache.
Using EhCache in Hibernate
EhCache is a pure Java cache that can be plugged into Hibernate. To use EhCache with Hibernate, add the following setting to Hibernate's configuration file:
<property name="hibernate.cache.provider_class">
    org.hibernate.cache.EhCacheProvider
</property>

The EhCacheProvider class is located in hibernate3.jar, not in ehcache-1.1.jar. EhCache has its own configuration file, named ehcache.xml. The etc directory of the Hibernate 3.x distribution contains a sample ehcache.xml; simply copy it into the application's src directory (after compilation, ehcache.xml ends up in the WEB-INF/classes directory) and adjust its values to suit your program. After configuration, the contents of ehcache.xml look like this:
<ehcache>
    <!-- directory in which the overflow .data files are stored -->
    <diskStore path="C:\\cache"/>
    <!-- default cache region:
         maxElementsInMemory: maximum number of objects allowed in memory
         eternal:             whether cached objects never expire
         timeToIdleSeconds:   idle time (in seconds) before an object expires
         timeToLiveSeconds:   total lifetime (in seconds) of an object before it expires
         overflowToDisk:      spill objects to disk once the memory limit is reached
         (the time values below are examples; tune them for your application) -->
    <defaultCache
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        overflowToDisk="true"/>
    <!-- user-defined cache region for the Student class -->
    <cache name="Student"
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        overflowToDisk="true"/>
</ehcache>

In addition, the cache has to be configured in the mapping files of the persistent classes. For example, Group (class) and Student (student) have a one-to-many relationship, and their corresponding tables are t_group and t_student. Suppose the Student class's data is to be kept in the second-level cache; this requires configuring the second-level cache in both mapping files.
In Group.hbm.xml, add the following inside its <set>...</set> element:

<cache usage="read-write"/>   <!-- cache the data in this collection -->

Although <cache usage="read-write"/> is set inside the <set> element, Hibernate only adds the primary key IDs of the Students that belong to a Group to the cache. To place the full Student state into the second-level cache as well, a <cache> sub-element must also be added under the <class> tag in Student.hbm.xml, as follows:

<class name="Student" table="t_student">
    <cache usage="read-write"/>   <!-- the cache element must come immediately after the opening class tag -->
</class>
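With the provider, ehcache.xml and the two mapping files in place, the effect of the second-level cache can be observed across Sessions. A rough sketch (assuming the configuration above, a Long identifier, and hibernate.generate_statistics set to true so that the hit count is collected; org.hibernate imports omitted as elsewhere):

SessionFactory factory = new Configuration().configure().buildSessionFactory();

// First Session: the Student row is read from the database and placed in the second-level cache.
Session s1 = factory.openSession();
Transaction t1 = s1.beginTransaction();
Student a = (Student) s1.get(Student.class, new Long(1));
t1.commit();
s1.close();

// Second Session: the same row is served from the SessionFactory-wide cache instead of the database.
Session s2 = factory.openSession();
Transaction t2 = s2.beginTransaction();
Student b = (Student) s2.get(Student.class, new Long(1));
t2.commit();
s2.close();

// With statistics enabled, the cache hit can be verified:
System.out.println(factory.getStatistics().getSecondLevelCacheHitCount());   // expected: 1

factory.close();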