In addition to the automatic first-level cache at the Session level, Hibernate supports a second-level cache. A second-level cache provider must implement the org.hibernate.cache.CacheProvider interface; Hibernate ships with several ready-made implementations that developers can configure directly. To enable the second-level cache, set hibernate.cache.use_second_level_cache to true.
Optional values:
org.hibernate.cache.HashtableCacheProvider
org.hibernate.cache.EhCacheProvider
org.hibernate.cache.OSCacheProvider
org.hibernate.cache.SwarmCacheProvider
org.hibernate.cache.TreeCacheProvider
...
For example:
(1) hibernate.cfg.xml:
<property name="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</property>
<property name="hibernate.cache.use_second_level_cache">true</property>
(2) Spring configuration:
<prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
<prop key="hibernate.cache.use_second_level_cache">true</prop>
Hibernate does not cache entity objects by default, so we must specify which classes to cache. In the mapping file of the entity class (inside the corresponding <class> tag), add the following configuration:
<cache usage="read-only"/>
Alternatively, add the following configuration to hibernate.cfg.xml:
<class-cache class="com.xxx.hibernate.XXClass" usage="read-only"/>
usage="read-only" selects the "read only" cache concurrency strategy.
Note that the <cache> tag may only appear inside the <class> tag, and must come before the <id> tag.
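For illustration, here is a minimal mapping file with the <cache> tag in the required position (the class, table, and property names are hypothetical):

```xml
<hibernate-mapping>
    <class name="com.xxx.hibernate.XXClass" table="xx_class">
        <!-- must sit inside <class> and before <id> -->
        <cache usage="read-only"/>
        <id name="id">
            <generator class="native"/>
        </id>
        <property name="name"/>
    </class>
</hibernate-mapping>
```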
The Hibernate second-level cache is a SessionFactory-level cache shared by multiple Sessions. It is implemented with a third-party cache component; newer Hibernate releases use Ehcache as the default second-level cache implementation.
Cache synchronization policy: the cache synchronization policy determines the access rules for data objects in the cache, and an appropriate policy must be specified for each cached entity class. Hibernate offers four cache synchronization strategies:
1. read-only: read only. Suitable for data that never changes.
2. nonstrict-read-write: if the application does not require strict data synchronization under concurrent access and updates are infrequent, this strategy can deliver better performance.
3. read-write: strict read/write caching. It implements the "read committed" transaction isolation level based on a timestamp decision mechanism. Suitable for data that must stay synchronized, but not for distributed caches; it is the most commonly used strategy in real-world applications.
4. transactional: transactional caching, which must run in a JTA transaction environment. Cache operations are added to the transaction (the cache behaves like an in-memory database), so if the transaction fails, the data in the buffer pool is rolled back to its state before the transaction started. Transactional caching implements the "repeatable read" transaction isolation level and effectively guarantees data validity, making it suitable for caching critical data. Among the cache providers bundled with Hibernate, only JBossCache supports transactional caching.
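To make the timestamp idea behind the read-write strategy concrete, here is a toy sketch (not Hibernate's actual code, and not its real API): a transaction may only use a cached entry that was committed before the transaction started; otherwise it must re-read from the database, which is what yields "read committed" visibility.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the timestamp check behind read-write caching.
// All names here are invented for the sketch.
public class ReadWriteCacheSketch {
    static class Entry {
        final Object value; final long writtenAt;
        Entry(Object v, long t) { value = v; writtenAt = t; }
    }

    private final Map<String, Entry> region = new HashMap<>();
    private long clock = 0;                  // logical timestamp source

    long nextTimestamp() { return ++clock; }

    // Called after a transaction commits an update.
    void put(String key, Object value) {
        region.put(key, new Entry(value, nextTimestamp()));
    }

    // Returns the cached value only if it was written before the
    // transaction started; otherwise null (caller must hit the DB).
    Object get(String key, long txStart) {
        Entry e = region.get(key);
        return (e != null && e.writtenAt < txStart) ? e.value : null;
    }

    public static void main(String[] args) {
        ReadWriteCacheSketch cache = new ReadWriteCacheSketch();
        cache.put("user#1", "Alice");              // committed at t=1
        long tx = cache.nextTimestamp();           // transaction starts at t=2
        System.out.println(cache.get("user#1", tx)); // prints Alice (visible)
        cache.put("user#1", "Bob");                // committed after tx began
        System.out.println(cache.get("user#1", tx)); // prints null -> re-read from DB
    }
}
```

A transaction therefore never observes a cache entry produced by a commit that happened after it began, matching read-committed semantics.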
Cache configuration properties
hibernate.cache.use_minimal_puts: whether to optimize the second-level cache to minimize read/write operations (a cache optimization for clusters). Optional values: true (default) enables minimized puts; false disables them.
hibernate.cache.use_query_cache: whether to cache query results (used with conditional queries). Optional values: true caches query results; false does not.
hibernate.cache.use_second_level_cache: whether to enable the second-level cache. Optional values: true enables it; false disables it.
hibernate.cache.query_cache_factory: fully qualified name of a custom query cache factory; the class must implement the org.hibernate.cache.QueryCacheFactory interface. Optional values: (1) org.hibernate.cache.StandardQueryCacheFactory (default); (2) a custom implementation class.
hibernate.cache.region_prefix: prefix for second-level cache region names.
hibernate.cache.use_structured_entries: whether to cache objects in a structured format. Optional values: true caches objects in structured form; false does not.
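Put together, the properties above would appear in hibernate.cfg.xml roughly as follows (the region prefix value is an illustrative placeholder):

```xml
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.use_minimal_puts">true</property>
<property name="hibernate.cache.region_prefix">myapp</property>
<property name="hibernate.cache.use_structured_entries">false</property>
```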
Attached: ehcache.xml

<?xml version="1.0" encoding="UTF-8"?>
<ehcache>
    <diskStore path="java.io.tmpdir"/>
    <!-- the timeToIdleSeconds/timeToLiveSeconds values below are
         illustrative; the originals were lost in transcription -->
    <defaultCache
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        overflowToDisk="true"/>
    <cache name="org.hibernate.cache.StandardQueryCache"
        maxElementsInMemory="10000"
        eternal="false"
        timeToIdleSeconds="120"
        timeToLiveSeconds="120"
        overflowToDisk="true"/>
    <cache name="org.hibernate.cache.UpdateTimestampsCache"
        maxElementsInMemory="10000"
        eternal="true"
        overflowToDisk="true"/>
</ehcache>
The maxElementsInMemory attribute specifies the maximum number of objects the cache can hold in memory. The eternal attribute specifies whether cached entries are permanently valid. The timeToIdleSeconds attribute specifies how long an entry may sit unused before it is cleaned up. The timeToLiveSeconds attribute specifies an entry's total lifetime. The diskPersistent attribute specifies whether the cache is persisted to disk; the save path is specified by the <diskStore> tag.
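As a sketch of what maxElementsInMemory and timeToIdleSeconds mean (this is not Ehcache's code; all names are invented), a cache region can be modeled as an access-ordered map that evicts the least-recently-used entry past the size limit and expires entries idle longer than the limit:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of an Ehcache region's size and idle-time semantics.
public class CacheRegionSketch<K, V> {
    private static class Holder<V> {
        V value; long lastAccess;
        Holder(V v, long now) { value = v; lastAccess = now; }
    }

    private final long timeToIdleMillis;
    private final Map<K, Holder<V>> map;

    public CacheRegionSketch(int maxElementsInMemory, long timeToIdleSeconds) {
        this.timeToIdleMillis = timeToIdleSeconds * 1000;
        // An access-ordered LinkedHashMap drops the least-recently-used
        // entry once maxElementsInMemory is exceeded.
        this.map = new LinkedHashMap<K, Holder<V>>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, Holder<V>> eldest) {
                return size() > maxElementsInMemory;
            }
        };
    }

    public void put(K key, V value) {
        map.put(key, new Holder<>(value, System.currentTimeMillis()));
    }

    public V get(K key) {
        Holder<V> h = map.get(key);
        if (h == null) return null;
        long now = System.currentTimeMillis();
        if (now - h.lastAccess > timeToIdleMillis) { // idle too long: expired
            map.remove(key);
            return null;
        }
        h.lastAccess = now;
        return h.value;
    }
}
```

With maxElementsInMemory = 2, inserting a third entry evicts the least recently used one, which is exactly the pressure-relief behavior the overflowToDisk attribute then complements in real Ehcache.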
For testing, log4j.properties:
log4j.logger.org.hibernate=warn
log4j.logger.org.hibernate.cache=debug
Batch processing:
Because Hibernate manages the two caches differently, we can configure the size of the second-level cache, but Hibernate takes a laissez-faire attitude toward the internal (first-level) cache and places no limit on its capacity. Here lies the crux of the problem: during a massive data insert, every generated object ends up in the internal cache (which lives in memory), so system memory is eaten up bit by bit, and it is no surprise if the system is finally squeezed until it crashes.
Let's consider how to handle this problem better. Some projects are required to use Hibernate; others are more flexible and can look for alternative approaches.
Two methods are recommended here:
(1) Optimize Hibernate: the application inserts the data in segments and clears the cache promptly.
(2) Bypass the Hibernate API and do the bulk insert directly through the JDBC API; this method performs best and is fastest.
For method 1, the basic idea is to optimize Hibernate by setting the hibernate.jdbc.batch_size parameter in the configuration file to specify how many SQL statements are submitted per batch, and to have the program insert in segments while clearing the cache promptly (the Session implements asynchronous write-behind, which lets Hibernate batch its writes explicitly). In other words, every time a certain number of rows has been inserted, they are promptly evicted from the internal cache, freeing the memory they consumed.
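The flush-and-clear rhythm of the segmented insert can be sketched as follows. The Session interface here is a hypothetical stand-in for org.hibernate.Session (only the three methods the pattern needs), so the loop can be shown without a database; the batch size should match hibernate.jdbc.batch_size.

```java
import java.util.List;

// Sketch of the segmented-insert pattern from the text.
public class BatchInsertSketch {
    static final int BATCH_SIZE = 50; // keep equal to hibernate.jdbc.batch_size

    // Minimal stand-in for org.hibernate.Session.
    interface Session {
        void save(Object entity);
        void flush();  // push pending SQL to the database
        void clear();  // evict all entities from the first-level cache
    }

    // Returns the number of flush/clear cycles performed.
    static int insertAll(Session session, List<?> entities) {
        int flushes = 0;
        for (int i = 0; i < entities.size(); i++) {
            session.save(entities.get(i));
            if ((i + 1) % BATCH_SIZE == 0) { // every BATCH_SIZE rows...
                session.flush();             // ...write them out
                session.clear();             // ...and free the memory they held
                flushes++;
            }
        }
        session.flush();                     // write any trailing partial batch
        session.clear();
        return flushes + 1;
    }
}
```

Without the periodic clear(), every saved entity would stay pinned in the first-level cache until the Session closes, which is exactly the memory growth described above.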