"Reprint" Cache Overview of the Java cache series and simple cache

Source: Internet
Author: User
Tags: gemfire

Original address: http://www.blogjava.net/DLevin/archive/2013/10/15/404770.html

Pre-note: Recently my company has been building a database-like system entirely on top of a cache (GemFire), a small project of my own uses Guava's cache, and earlier projects used Ehcache. Since I keep crossing paths with caches, I am taking this opportunity to survey the cache libraries available in Java. The Java community has implemented many cache libraries (see http://www.open-open.com/13.htm for a list), but only a few are practical choices, and these are fairly representative: the cache provided in Guava is a simple implementation for a single JVM; Ehcache, which grew out of the Hibernate ecosystem, is also single-JVM but is a fairly complete single-JVM cache; and GemFire provides a complete implementation of a distributed cache. This series of articles focuses on how these cache systems are implemented, and along the way explores what a cache buys us, when to use one, and so on. Because all of these are memory-based caches, the series is limited to that kind of cache (honestly, I don't know whether other kinds exist, file-based ones perhaps; embarrassing).

My first contact with caches was in the computer organization course at university. Because the CPU is much faster than main memory, the CPU provides internal buffers (caches) whose read speed roughly matches the CPU's processing speed; the CPU can then read data directly from the cache, which resolves the mismatch between CPU processing speed and memory read speed. The reason a cache solves this problem is the principle of locality: "a program executes in a localized pattern, that is, over any period of time only a part of the program runs, and the storage it accesses is correspondingly confined to a certain region of memory." Locality shows up in two forms: temporal locality and spatial locality. Temporal locality means that if an instruction executes, it is likely to execute again soon, and if a piece of data is accessed, it is likely to be accessed again soon. Spatial locality means that once a program accesses a storage unit, the units near it will probably be accessed shortly afterwards. In practice, the CPU reads data through the cache: if the data is already there (a hit), it is read from the cache; otherwise (a miss), the corresponding block of memory is loaded into the cache to speed up subsequent accesses. Because of cost and die-size constraints, a CPU can only provide a limited amount of cache, so cache size is one of the important metrics of CPU performance.
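As a concrete illustration of spatial locality (my own sketch, not from the original article): in Java, each row of an int[][] is stored contiguously, so traversing a large array row by row reuses every cache line the CPU loads, while traversing it column by column does not, and the first loop typically runs several times faster.

// Illustrative sketch of spatial locality (not from the original article).
public class LocalityDemo {
    public static void main(String[] args) {
        int n = 4096;
        int[][] data = new int[n][n];

        long t0 = System.nanoTime();
        long rowSum = 0;
        for (int i = 0; i < n; i++) {       // row-major: walks each row contiguously
            for (int j = 0; j < n; j++) {
                rowSum += data[i][j];
            }
        }
        long t1 = System.nanoTime();

        long colSum = 0;
        for (int j = 0; j < n; j++) {       // column-major: jumps between rows, poor locality
            for (int i = 0; i < n; i++) {
                colSum += data[i][j];
            }
        }
        long t2 = System.nanoTime();

        System.out.printf("row-major %d ms, column-major %d ms (sums %d, %d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, rowSum, colSum);
    }
}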

Once there is a cache, the CPU must decide how updates reach memory (the write policy). With write-back, the CPU updates only the cached copy, and the updated value is written back to memory when the cache block is replaced. With write-through, the CPU updates both the cache and memory on every write. Under write-back, to reduce memory writes, each cache block typically carries a dirty bit that records whether the block has been modified since it was loaded; if a block was never written before being replaced, the write-back can be skipped. The advantage of write-back is that it saves many write operations, mainly because several updates to different units within a block can be flushed in a single write. The saved memory bandwidth further reduces energy consumption, which makes write-back well suited to embedded systems. Write-through performs worse because of the frequent interaction with memory (some CPU designs insert a write buffer in between to mitigate this), but it is simple, and it makes data consistency easy to maintain.
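To make the two write policies concrete in software terms, here is a minimal, hypothetical Java sketch; the BackingStore interface and all names in it are invented for illustration. Write-through pushes every update to the backing store immediately, while write-back only marks the entry dirty and flushes it on eviction.

// Hypothetical sketch of write-through vs. write-back; BackingStore is an invented interface.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

interface BackingStore<K, V> {
    void write(K key, V value);
}

class WritePolicyCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Set<K> dirty = new HashSet<>();   // plays the role of the dirty bits
    private final BackingStore<K, V> store;
    private final boolean writeThrough;

    WritePolicyCache(BackingStore<K, V> store, boolean writeThrough) {
        this.store = store;
        this.writeThrough = writeThrough;
    }

    void put(K key, V value) {
        cache.put(key, value);
        if (writeThrough) {
            store.write(key, value);   // write-through: update the store on every write
        } else {
            dirty.add(key);            // write-back: just remember the entry is dirty
        }
    }

    void evict(K key) {
        V value = cache.remove(key);
        if (dirty.remove(key)) {
            store.write(key, value);   // write-back happens only here, and only if dirty
        }
    }
}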

In software, a cache generally solves the mismatch between memory access speed and the speed of accessing disks, networks, or databases (database access is ultimately disk or network access, but is listed separately because it is so common); this article considers in-memory cache systems. Because of the size and cost of memory, we cannot load all the data into memory up front, so, as with the CPU cache, only part of the data can be kept in the cache at any time. A cache therefore generally needs to address the following requirements:

  1. Read the value for a given key from the cache. The CPU locates cached data by memory address; similarly, a software cache identifies the relevant value by a key. Naively, then, a software cache is a map of key-value pairs; for example, the Region in GemFire extends Map, although real cache implementations are considerably more complex.
  2. When a given key does not exist in the cache, the programmer can supply logic that loads the value from another source (a database, the network, and so on) and returns it (a small loader sketch follows this list). In the CPU, based on the locality principle, the default is simply to load the adjacent block of memory; in software, different use cases need different loading logic, so the user has to specify it, and since the next read is generally hard to predict, only one record is loaded at a time (for predictable access patterns, data can be loaded in bulk, but then the response time of the current operation has to be weighed).
  3. Write key-value pairs (new records, or updates to existing pairs) into the cache. As with the CPU's write-back and write-through policies, some cache systems provide a write-through interface; if none is provided, the programmer needs extra logic to implement the write policy. One can also do what the CPU cache does: write the value back to the data source only when the key is evicted from the cache, with a dirty flag deciding whether a write-back is needed (though this feels more complex to implement, and it couples the code more tightly). If write throughput matters, asynchronous write-back can be used, with a queue holding the pending writes to prevent data loss.
  4. Remove the key-value pair for a given key from the cache (or remove several given keys in bulk, or even clear the entire cache).
  5. Configure the maximum capacity of the cache, together with an overflow policy for when the cache exceeds it:
    1. Simply remove the overflowing key-value pairs, deciding at removal time whether updated data should be written back to the data source.
    2. Write the overflowing key-value pairs to disk. This raises a series of problems: how to serialize the pairs, how to store the serialized data on disk, how to lay out the disk storage, how to deal with disk fragmentation, how to find the corresponding pairs on disk and deserialize them again, and how to handle disk overflow.
    3. Besides deciding what to do with overflowing pairs, the overflow policy must also choose which pairs overflow. This resembles page replacement in memory management (in fact, memory itself can be seen as a cache of the disk); the usual algorithms are first-in-first-out (FIFO), least recently used (LRU), least frequently used (LFU), clock replacement (an LRU approximation), working set, and so on (a small LRU sketch follows this list).
  6. Configure a lifetime for key-value pairs in the cache, so that entries which have gone unused for a long time, yet have not been evicted (because of the eviction policy chosen, or because the cache has not reached capacity), can free their memory early.
  7. For certain specific key-value pairs, we want them to stay in memory and never be evicted; some cache systems provide a pin configuration (dynamic or static) to guarantee this.
  8. Provide statistics on the cache's state: hit ratio, disk size, cache size, average query time, queries per second, memory hit count, disk hit count, and more.
  9. Provide events such as cache creation, cache destruction, addition of a key-value pair, update of a key-value pair, and eviction of key-value pairs, so that handlers for cache-related events can be registered.
  10. Because the whole point of a cache is to improve a program's read and write performance, and a cache generally has to work in a multithreaded environment, the implementation usually has to guarantee thread safety while providing efficient reads and writes.
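Requirement 2 (load-on-miss) is usually expressed as a loader callback supplied by the user. Below is a minimal sketch under assumed names (CacheLoader and LoadingMapCache are invented here; Guava's LoadingCache is built around the same idea):

// Minimal load-on-miss sketch; CacheLoader and LoadingMapCache are invented names.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface CacheLoader<K, V> {
    V load(K key);   // e.g. a database or network lookup supplied by the user
}

class LoadingMapCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final CacheLoader<K, V> loader;

    LoadingMapCache(CacheLoader<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        // computeIfAbsent runs the loader only on a miss and caches the result
        return map.computeIfAbsent(key, loader::load);
    }
}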
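For the eviction choice in requirement 5, java.util.LinkedHashMap already provides enough machinery for a tiny LRU cache: constructed in access-order mode, it moves entries to the tail on each get(), and overriding removeEldestEntry() evicts the least recently used entry after each insertion. A minimal sketch (the capacity value is up to the caller):

// LRU overflow sketch using LinkedHashMap's access-order mode.
import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);   // accessOrder = true: get() moves entries to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // called after each insertion; returning true evicts the least recently used entry
        return size() > capacity;
    }
}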

In Java, a Map is the simplest cache; for efficient use in a multithreaded environment, ConcurrentHashMap can be used, and that is exactly the first implementation in a project I was involved in (Ehcache was introduced later). To make the semantics clearer and keep the interface simple, I implemented a Map-based, minimal cache system to demonstrate the basic usage of a cache. Users can put data into it, query data, check whether a given key exists, remove one or more given keys, clear the entire cache, and so on. The interface definition of the cache follows.

import java.util.Iterator;
import java.util.Map;

public interface Cache<K, V> {
    public String getName();
    public V get(K key);
    public Map<? extends K, ? extends V> getAll(Iterator<? extends K> keys);
    public boolean isPresent(K key);
    public void put(K key, V value);
    public void putAll(Map<? extends K, ? extends V> entries);
    public void invalidate(K key);
    public void invalidateAll(Iterator<? extends K> keys);
    public void invalidateAll();
    public boolean isEmpty();
    public int size();
    public void clear();
    public Map<? extends K, ? extends V> asMap();
}

This simple cache implementation is just a wrapper around HashMap. The reason for choosing HashMap rather than ConcurrentHashMap is that the getAll() method cannot be implemented atomically on top of ConcurrentHashMap, and since every operation here takes a lock anyway, ConcurrentHashMap is not needed for thread safety; instead, a read-write lock is used to improve the performance of concurrent queries. Because the code is fairly simple, all of it is pasted here (too lazy to trim it up...).

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CacheImpl<K, V> implements Cache<K, V> {
    private final String name;
    private final HashMap<K, V> cache;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock readLock = lock.readLock();
    private final Lock writeLock = lock.writeLock();

    public CacheImpl(String name) {
        this.name = name;
        cache = new HashMap<K, V>();
    }

    public CacheImpl(String name, int initialCapacity) {
        this.name = name;
        cache = new HashMap<K, V>(initialCapacity);
    }

    public String getName() {
        return name;
    }

    public V get(K key) {
        readLock.lock();
        try {
            return cache.get(key);
        } finally {
            readLock.unlock();
        }
    }

    public Map<? extends K, ? extends V> getAll(Iterator<? extends K> keys) {
        readLock.lock();
        try {
            Map<K, V> map = new HashMap<K, V>();
            List<K> noEntryKeys = new ArrayList<K>();
            while (keys.hasNext()) {
                K key = keys.next();
                if (isPresent(key)) {
                    map.put(key, cache.get(key));
                } else {
                    noEntryKeys.add(key);
                }
            }
            if (!noEntryKeys.isEmpty()) {
                throw new CacheEntriesNotExistException(this, noEntryKeys);
            }
            return map;
        } finally {
            readLock.unlock();
        }
    }

    public boolean isPresent(K key) {
        readLock.lock();
        try {
            return cache.containsKey(key);
        } finally {
            readLock.unlock();
        }
    }

    public void put(K key, V value) {
        writeLock.lock();
        try {
            cache.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }

    public void putAll(Map<? extends K, ? extends V> entries) {
        writeLock.lock();
        try {
            cache.putAll(entries);
        } finally {
            writeLock.unlock();
        }
    }

    public void invalidate(K key) {
        writeLock.lock();
        try {
            if (!isPresent(key)) {
                throw new CacheEntryNotExistsException(this, key);
            }
            cache.remove(key);
        } finally {
            writeLock.unlock();
        }
    }

    public void invalidateAll(Iterator<? extends K> keys) {
        writeLock.lock();
        try {
            // the iterator can only be traversed once, so buffer the keys while validating
            List<K> allKeys = new ArrayList<K>();
            List<K> noEntryKeys = new ArrayList<K>();
            while (keys.hasNext()) {
                K key = keys.next();
                allKeys.add(key);
                if (!isPresent(key)) {
                    noEntryKeys.add(key);
                }
            }
            if (!noEntryKeys.isEmpty()) {
                throw new CacheEntriesNotExistException(this, noEntryKeys);
            }
            for (K key : allKeys) {
                invalidate(key);
            }
        } finally {
            writeLock.unlock();
        }
    }

    public void invalidateAll() {
        writeLock.lock();
        try {
            cache.clear();
        } finally {
            writeLock.unlock();
        }
    }

    public int size() {
        readLock.lock();
        try {
            return cache.size();
        } finally {
            readLock.unlock();
        }
    }

    public void clear() {
        writeLock.lock();
        try {
            cache.clear();
        } finally {
            writeLock.unlock();
        }
    }

    public Map<? extends K, ? extends V> asMap() {
        readLock.lock();
        try {
            return new ConcurrentHashMap<K, V>(cache);
        } finally {
            readLock.unlock();
        }
    }

    public boolean isEmpty() {
        readLock.lock();
        try {
            return cache.isEmpty();
        } finally {
            readLock.unlock();
        }
    }
}

A simple example of its usage is as follows:

@Test
public void testCacheSimpleUsage() {
    Book uml = BookFactory.createUmlDistilled();
    Book derivatives = BookFactory.createDerivatives();

    String umlBookIsbn = uml.getIsbn();
    String derivativesBookIsbn = derivatives.getIsbn();

    Cache<String, Book> cache = CacheFactory.create("book-cache");
    cache.put(umlBookIsbn, uml);
    cache.put(derivativesBookIsbn, derivatives);

    Book fetchedBackUml = cache.get(umlBookIsbn);
    System.out.println(fetchedBackUml);

    Book fetchedBackDerivatives = cache.get(derivativesBookIsbn);
    System.out.println(fetchedBackDerivatives);
}

"Reprint" The cache overview of the Java cache series and simple cache

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.