Oracle Coherence Chinese tutorial 15: Serializing Paged Cache

Tags: failover, serialization

Serialization Paged Cache

This chapter provides information about caching large amounts of binary data off-heap using a serialization paged cache.

This chapter contains the following sections:
Understanding Serialization Paged Cache
Configuring Serialization Paged Cache
Optimizing a Partitioned Cache Service
Configuring High Availability
Configuring Load Balancing and Failover
Supporting Huge Caches

15.1 Understanding Serialization Paged Cache

Coherence provides explicit support for efficient caching of huge amounts of automatically-expiring data using potentially high-latency storage mechanisms, such as disk files. The benefits include support for much larger data sets than can be managed in memory, while retaining an efficient expiry mechanism for timing out the management (and automatic freeing of the resources) of that data. Optimal usage scenarios include the ability to store many large objects, XML documents, or content that is rarely accessed, or for which access can tolerate a higher latency if the cached data has been paged to disk. See Chapter 6, "Implementing Storage and Backing Maps."

A serialization paged cache is defined as follows:

Serialization means that objects stored in the cache are serialized and stored in a binary store; see the existing features Serialization Map and Serialization Cache.

Paging means that the objects stored in the cache are segmented into pages for efficiency of management.

Cache means that a limit can be placed on the size of the cache; in this case, the limit is the maximum number of concurrent pages that the cache manages before expiring pages, starting with the oldest page.

The result is a feature that organizes data in the cache based on the time that the data was placed in the cache, and that can efficiently expire that data from the cache, an entire page at a time, typically without having to reload any data from disk.

15.2 Configuring Serialization Paged Cache

The primary configuration of the serialization paged cache consists of two parameters: the number of pages that the cache manages, and the length of time for which a page is active. For example, to cache one day's worth of data, the cache could be configured as 24 pages of one hour each, or as 96 pages of 15 minutes each, and so on.

The data in each page of the cache is managed by a single binary store. The cache requires a binary store manager, which provides the means to create and destroy these binary stores. Coherence provides binary store managers for all of the built-in binary store implementations, including Berkeley DB (abbreviated "BDB") and the various NIO implementations.

Serialization paged caches are configured in the cache configuration file using the <external-scheme> and <paged-external-scheme> elements. See "external-scheme" and "paged-external-scheme".
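As a minimal sketch of the one-day example above (the scheme name and directory are illustrative, not from the original text), a cache holding 24 one-hour pages backed by Berkeley DB might be declared like this:

```xml
<paged-external-scheme>
  <scheme-name>example-paged-external</scheme-name>
  <!-- binary store manager: creates one Berkeley DB store per page -->
  <bdb-store-manager>
    <directory>/tmp/coherence-bdb</directory>
  </bdb-store-manager>
  <!-- number of active pages the cache manages before expiring the oldest -->
  <page-limit>24</page-limit>
  <!-- length of time each page accepts new entries -->
  <page-duration>1h</page-duration>
</paged-external-scheme>
```

Equivalently, 96 pages with a <page-duration> of 15m would cover the same one-day window at a finer granularity.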

15.3 Optimizing a Partitioned Cache Service

Coherence provides an optimization for its partitioned cache service: when a serialization map or cache is used to back a partitioned cache, the data being stored is already entirely in binary form. This is called the Binary Map optimization, and when it is enabled, it gives the Serialization Map, the Serialization Cache, and the Serialization Paged Cache permission to assume that all data being stored in the cache is binary. The result of this optimization is lower CPU and memory utilization, and slightly higher performance. See the <external-scheme> and <paged-external-scheme> cache configuration elements.

15.4 Configuring High Availability

The serialization paged cache also provides explicit support for the high-availability features of the partitioned cache service, by providing one configuration that can be used for the primary storage of the data and another configuration that is optimized for the backup storage of the data. The backup storage configuration is known as the passive model, because it does not actively expire data from its storage, but rather reflects the expiry that occurs on the primary cache storage. When using the high-availability feature of the partitioned cache service (a backup count of one or greater; the default is one) and using the serialization paged cache as the primary backing storage for the service, it is a best practice to also use the serialization paged cache as the backup store, configured with the passive option. See the <paged-external-scheme> cache configuration element.
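As a hedged sketch of how the backup store can be directed at its own scheme (the scheme names here are illustrative; the referenced schemes would be <paged-external-scheme> definitions elsewhere in the configuration file, with the backup one configured for passive operation):

```xml
<distributed-scheme>
  <scheme-name>ha-paged-cache</scheme-name>
  <!-- one backup copy per partition (the default) -->
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <!-- primary storage: a serialization paged cache -->
    <paged-external-scheme>
      <scheme-ref>paged-primary</scheme-ref>
    </paged-external-scheme>
  </backing-map-scheme>
  <!-- backup storage delegated to a separate (passive) paged scheme -->
  <backup-storage>
    <type>scheme</type>
    <scheme-name>paged-backup</scheme-name>
  </backup-storage>
  <autostart>true</autostart>
</distributed-scheme>
```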

15.5 Configuring Load Balancing and Failover

When using the distributed cache service, special consideration should be given to load balancing and failover. If the amount of data in the cache is very large, the partition-count parameter of the distributed cache service should be set higher than normal; a high partition count breaks the overall cache into smaller chunks for load balancing and for recovery processing after a failover. For example, if the cache is expected to hold a terabyte, then 20,000 partitions break the cache into chunks averaging about 50MB each. If a unit (the partition size) is too large, it can cause an out-of-memory condition when load-balancing the cache. (Remember to ensure that the partition count is a prime number; see http://primes.utm.edu/lists/small/ for lists of prime numbers that you can use.)
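The sizing above can be sketched in a distributed scheme as follows (20,011 is a prime near 20,000; the scheme names are illustrative, and the referenced paged scheme would be defined elsewhere in the configuration file):

```xml
<distributed-scheme>
  <scheme-name>large-paged-cache</scheme-name>
  <!-- prime near 20,000: a 1TB cache averages roughly 50MB per partition -->
  <partition-count>20011</partition-count>
  <backing-map-scheme>
    <paged-external-scheme>
      <scheme-ref>paged-primary</scheme-ref>
    </paged-external-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```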

15.6 Supporting Huge Caches

To support huge caches (for example, terabytes) of expiring data, expiration processing is performed concurrently on a daemon thread, with no interruption of cache processing. The result is that many thousands or millions of objects can exist in a single cache page, and they can be expired asynchronously, avoiding any interruption of service. The daemon thread is an option that is enabled by default, but it can be disabled. See the <external-scheme> and <paged-external-scheme> cache configuration elements.

When the cache is used for large amounts of data, the pages are typically disk-backed. Because the cache eventually expires each page, releasing its disk resources, the cache uses a virtual erase optimization by default: data that is explicitly removed or expired from the cache is not actually removed from the underlying binary store, but when a page (a binary store) is completely emptied, it is erased in its entirety. This considerably reduces I/O, particularly during expiry processing and during operations such as load balancing that must redistribute large amounts of data within the cluster. The cost of this optimization is that the disk files (if a disk-based binary store option is used) tend to be larger than the data they manage would otherwise imply; because disk space is considered inexpensive compared to other factors, such as response times, the virtual erase optimization is enabled by default, but it can be disabled. Note that disk space is typically allocated locally to each server; thus a terabyte cache partitioned across a hundred servers uses only about 20GB of disk space per server (10GB for the primary store and 10GB for the backup store, assuming one backup copy).
