Main differences between the first-level cache, second-level cache, distributed cache, and page cache (Session)

Source: Internet
Author: User
Tags: memcached, set, time

1. First-level cache: stores data in a local (Session-level) cache, so its hit ratio in practice is relatively low.
Cached content: the interrelated persistent objects loaded by the current Session.
Scope: transaction scope; each transaction has its own separate first-level cache.
Concurrency access policy: none is required, because every transaction has its own first-level cache and no concurrency problems arise.
Data expiration policy: objects in the first-level cache never expire unless the application explicitly clears the cache or evicts a specific object.
Physical media: memory.
Software implementation: built into Hibernate's Session implementation.
Enabling the cache: whenever save, update, delete, load, or query is executed through the Session interface, Hibernate enables the first-level cache. For bulk operations where the first-level cache is not wanted, work directly through the JDBC API.
Managing the cache: the first-level cache lives in memory, and because memory capacity is limited, the number of loaded objects must be restricted through appropriate retrieval strategies and retrieval methods. The Session's evict(Object obj) method removes a specific object from the cache, and clear() removes all persistent objects from the cache.
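A minimal sketch of this first-level cache behaviour follows; the User entity and the id value are illustrative assumptions, not something taken from the article, and a mapped User class is assumed to exist.

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class FirstLevelCacheDemo {
    public static void demo(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();

        User u1 = session.get(User.class, 1L); // hits the database and puts the object into the Session cache
        User u2 = session.get(User.class, 1L); // served from the first-level cache, no second SQL query issued

        session.evict(u1);  // removes this one object from the first-level cache
        session.clear();    // removes every persistent object from the first-level cache

        tx.commit();
        session.close();
    }
}
```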
2. Second-level cache: stores data in a global cache, so its hit ratio in practice is relatively high.
Cached content: the bulk data of objects.
Scope: process scope or cluster scope; the cache is shared by all transactions within the same process or cluster.
Concurrency access policy: because multiple transactions may access the same data in the second-level cache at the same time, an appropriate concurrency access policy must be provided to guarantee a specific transaction isolation level.
Data expiration policy: a data expiration policy must be provided, such as the maximum number of objects held in the memory-based cache, the longest time an object may stay in the cache, and the longest idle time an object is allowed to remain in the cache.
Physical media: memory and hard disk. An object's bulk data is first placed in the memory-based cache; when the number of objects in memory reaches the maxElementsInMemory value of the expiration policy, the remaining objects are written to the disk-based cache.
Software implementation: provided by third parties; Hibernate only supplies the cache adapter used to integrate a specific cache plug-in into Hibernate.
Enabling the cache: the second-level cache is configured at the granularity of a single class or collection. If instances of a class are read often but rarely modified, consider putting them in the second-level cache. Only when a second-level cache has been configured for a class or collection will Hibernate add its instances to the second-level cache at run time.
Managing the cache: the physical media of the second-level cache can be both memory and hard disk, so the second-level cache can hold a large volume of data, and the maxElementsInMemory property of the expiration policy controls the number of objects kept in memory. Managing the second-level cache involves two main aspects: selecting the persistent classes that need the second-level cache and setting an appropriate concurrency access policy for them; and selecting a cache adapter and setting an appropriate data expiration policy.
The Session provides two methods for managing the cache from an application: evict(Object obj) clears the persistent object specified by the parameter from the cache, and clear() clears all persistent objects from the cache.
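As an illustration of enabling the second-level cache at class granularity, here is a minimal sketch using Hibernate's annotation style; the Country entity and the read-only strategy are assumptions for the example, not something specified in the article.

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY) // concurrency access policy for rarely modified reference data
public class Country {
    @Id
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}
```

The cache provider itself is configured separately: Hibernate's hibernate.cache.use_second_level_cache property must be switched on and a region factory (or, in older versions, a cache provider class) for the chosen plug-in supplied; the exact settings depend on the Hibernate and provider versions in use.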
(1) What data is suitable for the second-level cache:
Data that is rarely modified.
Data that is not critical, where occasional concurrent access is acceptable.
Data that will not be accessed concurrently.
Reference data, meaning constant reference data with a limited number of instances, whose instances are referenced by instances of many other classes and are rarely or never modified.
(2) Commonly used cache plug-ins. Hibernate second-level cache plug-ins include the following:
EhCache: can serve as a process-wide cache; the physical media holding the data can be memory or hard disk; it supports the Hibernate query cache.
OSCache: can serve as a process-wide cache; the physical media holding the data can be memory or hard disk; it provides a rich set of cache expiration policies and supports the Hibernate query cache.
SwarmCache: can serve as a cluster-wide cache, but does not support the Hibernate query cache.
JBossCache: can serve as a cluster-wide cache, supports transactional concurrency access policies, and supports the Hibernate query cache.
memcached: a high-performance distributed memory object caching system for dynamic web applications, used to reduce database load. By caching data and objects in memory it reduces the number of database reads and thereby speeds up dynamic, database-driven web sites (a minimal client sketch is shown after section 3.1 below).
Redis: a key-value storage system. It is similar to memcached but supports a larger set of stored value types, including string, list, set, zset (sorted set), and hash.
3. Distributed caching (small memory footprint per node, good at absorbing load pressure, usable at every layer)
3.1 Distributed caching has the following characteristics:
High performance: when a traditional database faces large-scale data access, disk I/O often becomes the performance bottleneck and response latency grows excessive. Distributed caching uses high-speed memory as the storage medium and stores data objects in key/value form, so ideally it achieves DRAM-level read and write performance.
Dynamic extensibility: supports flexible scaling, providing predictable performance and scalability by dynamically adding or removing nodes in response to changes in the data access load, while maximizing resource utilization.
High availability: availability covers both data availability and service availability. High availability is built on a redundancy mechanism with no single point of failure; faults are discovered automatically and failover is transparent, so a server failure causes neither an interruption of the caching service nor data loss. Data partitions are rebalanced automatically during dynamic scaling while the caching service remains continuously available.
Ease of use: provides a single view of the data and of management; the API is simple and independent of the topology; dynamic scaling and failure recovery require no manual configuration; backup nodes are selected automatically; most caching systems provide a graphical management console for unified maintenance.
Distributed code execution: task code is pushed to the data nodes and executed there in parallel, and the client aggregates the returned results, which effectively avoids moving and transmitting the cached data.
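To make the key/value access pattern concrete, here is a minimal sketch of talking to a distributed cache such as memcached, assuming the spymemcached client library; the host, port, key, value, and expiry time are illustrative.

```java
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class MemcachedSketch {
    public static void main(String[] args) throws Exception {
        // Connect to a single memcached node (a real deployment would list several nodes).
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));

        client.set("user:1", 3600, "cached user data"); // key, expiry time in seconds, value
        Object value = client.get("user:1");            // served from memory instead of the database
        System.out.println(value);

        client.shutdown();
    }
}
```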
The latest Java data grid specification, JSR-347, has added support for distributed code execution and a Map/Reduce API; mainstream distributed cache products such as IBM WebSphere eXtreme Scale, VMware GemFire, GigaSpaces XAP, and Red Hat Infinispan also support this new programming model.
3.2 Typical scenarios for distributed caching can be grouped into the following categories:
Page caching: used to cache fragments of web page content, including HTML, CSS, and images; common on social networking sites.
Application object caching: the caching system serves as the second-level cache of an ORM framework to reduce the load on the database and speed up application access.
State caching: the cache holds session state and the state data produced when the application scales horizontally; such data is generally hard to recover and has high availability requirements, and this pattern is widely used in high-availability clusters.
Parallel processing: usually involves a large number of intermediate computation results that need to be shared.
Event handling: distributed caching provides continuous query processing over event streams to meet real-time requirements.
Extreme transaction processing: distributed caching provides a high-throughput, low-latency solution for transactional applications, supports highly concurrent transaction processing, and is applied in railways, financial services, and telecommunications.
3.3 High-performance distributed caching frameworks:
Ehcache: a Java distributed caching framework.
Cacheonix: a high-performance Java distributed caching system.
ASimpleCache: a lightweight Android cache framework.
JBoss Cache: a transaction-based Java caching and storage framework.
Voldemort: a key-value based cache framework.
4. Page caching (OSCache). A page cache works much like a map: entries are stored in key/value form, where the key is a URL and the value is the page (HTML, JSP). When a user visits a page of the site for the first time, it is not yet in the cache, so the page must be requested and rendered; before being returned to the user it is assembled into a static page and stored in the cache, and on subsequent visits it can be served to the user quickly from the cache. This prevents frequent user refreshes from hurting server performance. Advantage: the best performance. Disadvantage: the cached data may be out of sync with the database, so a refresh time is set to re-synchronize with the database. (A conceptual sketch of the URL-to-page map follows.)
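The following is a conceptual sketch only, not the OSCache implementation: it shows the idea of a page cache as a map keyed by URL whose values are rendered pages, with an invalidation hook for re-synchronizing with the database. All names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NaivePageCache {
    private final Map<String, String> pagesByUrl = new ConcurrentHashMap<>();

    public String get(String url) {
        // Render on the first request, then serve the stored copy on later requests.
        return pagesByUrl.computeIfAbsent(url, this::renderPage);
    }

    public void invalidate(String url) {
        pagesByUrl.remove(url); // e.g. called on a timer, so the page is rebuilt from the database
    }

    private String renderPage(String url) {
        return "<html>...rendered content for " + url + "...</html>"; // placeholder rendering
    }
}
```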
OSCache, designed by OpenSymphony, is a groundbreaking JSP custom tag application that provides fast in-memory buffering within existing JSP pages. Its main features are: (1) Caching of arbitrary objects: portions of JSP pages or HTTP requests can be cached without restriction, and any Java object can be cached. (2) A comprehensive API: the OSCache API gives the program full control over all OSCache features. (3) Permanent cache: the cache can be written to the hard disk, which allows expensive-to-create data to stay cached even across application restarts. (4) Cluster support: cluster-wide cache data can be configured with a single parameter, with no code changes required. (5) Expiration of cached records: the expiration of cached objects can be controlled in great detail, including pluggable refresh policies when the default behaviour is not sufficient.
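A minimal sketch of OSCache's programmatic API follows, assuming the GeneralCacheAdministrator class from OSCache 2.x; the key, the refresh period, and the renderPage() helper are illustrative, not part of the article.

```java
import com.opensymphony.oscache.base.NeedsRefreshException;
import com.opensymphony.oscache.general.GeneralCacheAdministrator;

public class OscacheSketch {
    private final GeneralCacheAdministrator admin = new GeneralCacheAdministrator();

    public String getPage(String url) {
        try {
            // Return the cached copy if it is younger than 300 seconds.
            return (String) admin.getFromCache(url, 300);
        } catch (NeedsRefreshException e) {
            try {
                String page = renderPage(url);   // expensive-to-create content
                admin.putInCache(url, page);     // store the fresh copy
                return page;
            } catch (RuntimeException ex) {
                admin.cancelUpdate(url);         // release the update lock if rebuilding fails
                throw ex;
            }
        }
    }

    private String renderPage(String url) {
        return "<html>...rendered content for " + url + "...</html>"; // placeholder rendering
    }
}
```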
Some of this content was gathered from online learning materials and is shared here so that everyone can study it together.
