This post is adapted from: http://ifeve.com/google-guava-cachesexplained/
Guava's Maven dependency:

    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>14.0-rc2</version>
    </dependency>
Guava Cache Application Scenarios:
Caching is useful in many scenarios. For example, when a value is expensive to compute or retrieve, and you will need it more than once for the same input, you should consider using a cache.
Guava Cache is similar to ConcurrentMap, but not identical. The basic difference is that a ConcurrentMap keeps every element you add until you explicitly remove it, whereas a Guava Cache is usually configured to evict entries automatically in order to limit memory consumption. In some scenarios a LoadingCache is useful even without eviction, because it loads entries into the cache automatically.
Generally, Guava Cache is suitable when:
You are willing to spend some memory to improve speed.
You expect some keys to be queried more than once.
The total amount of data stored in the cache does not exceed available memory. (Guava Cache is a local, in-process cache for a single run of an application; it does not store data in files or on external servers. If that does not meet your needs, consider a tool such as Memcached.)
If each of the above fits your scenario, Guava Cache is right for you.
Note: if you don't need the Cache features, ConcurrentHashMap is more memory-efficient, but most Cache features are hard, or even impossible, to replicate on top of a plain ConcurrentMap.
Loading
Guava Cache provides two ways to load values: (1) a CacheLoader; (2) a Callable instance passed in when calling get.
(1) A CacheLoader is suitable when there is a sensible default way to load or compute the value associated with a key.
(2) The Callable form is suitable when there is no sensible default way to load or compute the value, or when you want to override the default loading operation while keeping the atomic "get-if-absent-compute" semantics.
The sample code for loading with a CacheLoader is as follows:
    LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .build(
                    new CacheLoader<Key, Graph>() {
                        public Graph load(Key key) throws AnyException {
                            return createExpensiveGraph(key);
                        }
                    });
    ...
    try {
        return graphs.get(key);
    } catch (ExecutionException e) {
        throw new OtherException(e.getCause());
    }
getAll(Iterable<? extends K>) can be used for bulk lookups. If your CacheLoader declares no checked exceptions, you can perform lookups with getUnchecked(K):
    LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .build(
                    new CacheLoader<Key, Graph>() {
                        public Graph load(Key key) { // no checked exception
                            return createExpensiveGraph(key);
                        }
                    });
    ...
    return graphs.getUnchecked(key);
The Callable form looks like this:

    Cache<Key, Graph> cache = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .build(); // look Ma, no CacheLoader
    ...
    try {
        // If the key wasn't in the "easy to compute" group, we need to
        // do things the hard way.
        cache.get(key, new Callable<Graph>() {
            @Override
            public Graph call() throws AnyException {
                return doThingsTheHardWay(key);
            }
        });
    } catch (ExecutionException e) {
        throw new OtherException(e.getCause());
    }
Explicit Insertion
You can insert a value into the cache directly with Cache.put(key, value), which overwrites any value previously mapped to the given key. You can also modify the cache through any method provided by the Cache.asMap() view. Note, however, that no method of the asMap view loads entries into the cache atomically. Further, the atomic operations of the asMap view operate outside Guava Cache's atomic-loading scope, so Cache.get(K, Callable) should always be preferred over Cache.asMap().putIfAbsent(K, V).
Cache Eviction
Size-based Eviction
If you want to cap the number of entries in the cache, simply use CacheBuilder.maximumSize(long). The cache will try to evict entries that have not been used recently or that are rarely used. Warning: the cache may evict entries before the limit is reached; typically this happens as the number of entries approaches the limit.
Alternatively, different cache entries can have different "weights". For example, if your cached values occupy very different amounts of memory, you can specify a weight function with CacheBuilder.weigher(Weigher) and a maximum total weight with CacheBuilder.maximumWeight(long). In a weight-limited cache, besides keeping in mind that the limit is approximate, you also need to consider the complexity of the weight computation, and know that weights are computed when an entry is created and are static thereafter.
    LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
            .maximumWeight(100000)
            .weigher(new Weigher<Key, Graph>() {
                public int weigh(Key k, Graph g) {
                    return g.vertices().size();
                }
            })
            .build(
                    new CacheLoader<Key, Graph>() {
                        public Graph load(Key key) { // no checked exception
                            return createExpensiveGraph(key);
                        }
                    });
Timed Eviction
CacheBuilder provides two kinds of timed eviction:
expireAfterAccess(long, TimeUnit): evict an entry once the given time has elapsed since it was last read or written. Note that entries are evicted in the same order as in size-based eviction.
expireAfterWrite(long, TimeUnit): evict an entry once the given time has elapsed since it was created or last overwritten. This is the right choice when cached data always becomes stale after a fixed amount of time.
As discussed below, timed eviction is performed periodically during writes and occasionally during reads.
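A minimal sketch of expireAfterWrite, assuming Guava is on the classpath. It uses the CacheBuilder.ticker(Ticker) hook to substitute a manually advanced clock for the system one, so expiry can be observed without actually waiting; the key and value strings are arbitrary:

```java
import com.google.common.base.Ticker;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class TimedEvictionDemo {
    // A manually advanced clock, so expiry can be shown without sleeping.
    static final AtomicLong nanos = new AtomicLong();
    static final Ticker fakeTicker = new Ticker() {
        @Override public long read() { return nanos.get(); }
    };

    /** Returns the cached value before and after the write-expiry window passes. */
    public static String[] demo() {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(2, TimeUnit.MINUTES)
                .ticker(fakeTicker) // substitute our controllable clock
                .build();

        cache.put("k", "v");
        String before = cache.getIfPresent("k");      // within the 2-minute window
        nanos.addAndGet(TimeUnit.MINUTES.toNanos(3)); // advance past the window
        String after = cache.getIfPresent("k");       // expired, so null
        return new String[] { before, after };
    }

    public static void main(String[] args) {
        String[] r = demo();
        System.out.println(r[0] + " / " + r[1]); // prints "v / null"
    }
}
```

Note that the expired entry becomes invisible to reads immediately, even though, as discussed under "When Does Cleanup Happen?", its memory may not be reclaimed until maintenance runs.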
Reference-based Eviction
By using weakly referenced keys, weakly referenced values, or softly referenced values, Guava Cache can allow entries to be garbage-collected:
CacheBuilder.weakKeys(): store keys with weak references. An entry can be garbage-collected when no other (strong or soft) reference to its key exists. Since garbage collection is based on identity (==), a cache with weak keys compares keys with == instead of equals().
CacheBuilder.weakValues(): store values with weak references. An entry can be garbage-collected when no other (strong or soft) reference to its value exists. Since garbage collection is based on identity (==), a cache with weak values compares values with == instead of equals().
CacheBuilder.softValues(): store values with soft references. Soft references are reclaimed in least-recently-used order, and only in response to memory pressure. Given the performance implications of soft references, we generally recommend the more predictable size-based eviction instead (see above). A cache with soft values also compares values with == instead of equals().
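The identity-comparison caveat for weakKeys() can be observed directly. A small sketch, assuming Guava is on the classpath; the key string is arbitrary, and new String(...) is used deliberately to create equal-but-distinct instances:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class WeakKeysDemo {
    /** Returns the lookup result for the original key instance and for an equal copy. */
    public static String[] demo() {
        // With weakKeys(), keys are compared by identity (==), not equals().
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .weakKeys()
                .build();

        String key = new String("id-42"); // a distinct, non-interned instance
        cache.put(key, "value");

        String sameInstance  = cache.getIfPresent(key);                  // hit
        String equalInstance = cache.getIfPresent(new String("id-42"));  // miss: equal but not identical
        return new String[] { sameInstance, equalInstance };
    }

    public static void main(String[] args) {
        String[] r = demo();
        System.out.println(r[0] + " / " + r[1]); // prints "value / null"
    }
}
```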
Explicit Removals
At any time you can explicitly invalidate cache entries instead of waiting for them to be evicted:
Individually: Cache.invalidate(key)
In bulk: Cache.invalidateAll(keys)
All entries: Cache.invalidateAll()
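The three removal styles above can be sketched as follows, assuming Guava is on the classpath; the keys and values are arbitrary:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.Arrays;

public class InvalidationDemo {
    /** Returns the cache size after each of the three invalidation styles. */
    public static long[] demo() {
        Cache<String, String> cache = CacheBuilder.newBuilder().build();
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");

        cache.invalidate("a");                        // individual removal
        long afterSingle = cache.size();              // 2 entries left

        cache.invalidateAll(Arrays.asList("b", "c")); // bulk removal
        long afterBulk = cache.size();                // 0 entries left

        cache.put("d", "4");
        cache.invalidateAll();                        // discard everything
        long afterAll = cache.size();                 // 0 entries left

        return new long[] { afterSingle, afterBulk, afterAll };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(demo())); // prints "[2, 0, 0]"
    }
}
```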
Removal Listeners
With CacheBuilder.removalListener(RemovalListener) you can declare a listener that does extra work when a cache entry is removed. When an entry is removed, the RemovalListener receives a RemovalNotification, which contains the removal cause (RemovalCause), the key, and the value.
Note that any exception thrown by the RemovalListener is logged and then swallowed.
    // a new CacheLoader
    CacheLoader<Key, DatabaseConnection> loader = new CacheLoader<Key, DatabaseConnection>() {
        public DatabaseConnection load(Key key) throws Exception {
            return openConnection(key);
        }
    };

    // a new RemovalListener
    RemovalListener<Key, DatabaseConnection> removalListener =
            new RemovalListener<Key, DatabaseConnection>() {
                public void onRemoval(RemovalNotification<Key, DatabaseConnection> removal) {
                    DatabaseConnection conn = removal.getValue();
                    conn.close(); // tear down properly
                }
            };

    // a new cache
    return CacheBuilder.newBuilder()
            .expireAfterWrite(2, TimeUnit.MINUTES)
            .removalListener(removalListener)
            .build(loader);
Warning: by default, listener methods are invoked synchronously during cache maintenance. Because cache maintenance is normally performed during ordinary cache operations, an expensive listener will slow down normal cache requests. If that is a problem, use RemovalListeners.asynchronous(RemovalListener, Executor) to decorate the listener so it runs asynchronously.
When Does Cleanup Happen?
A cache built with CacheBuilder does not perform cleanup and eviction "automatically": it neither cleans up immediately after an entry expires nor runs any such background mechanism. Instead, it performs small amounts of maintenance during write operations, or occasionally during reads if writes are rare.
The reason is that continuous automatic cleanup would require a thread competing with user operations for shared locks. Moreover, some environments restrict thread creation, which would make CacheBuilder unusable there.
Instead, we put the choice in your hands. If your cache is high-throughput, you don't need to worry about maintenance and cleanup. If your cache is written only occasionally and you don't want cleanup to block reads, you can create your own maintenance thread that calls Cache.cleanUp() at regular intervals; a ScheduledExecutorService implements such a timed schedule well.
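A maintenance thread along those lines might look like this. This is only a sketch, assuming Guava is on the classpath; the one-minute interval and the single-thread executor are arbitrary choices:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledCleanupDemo {
    /** Schedules Cache.cleanUp() once a minute, off the request path. */
    public static ScheduledExecutorService scheduleCleanup(final Cache<?, ?> cache) {
        ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
        // Run pending maintenance (expired-entry removal, listener notification)
        // on a timer, instead of piggybacking on reads and writes.
        cleaner.scheduleAtFixedRate(new Runnable() {
            public void run() { cache.cleanUp(); }
        }, 1, 1, TimeUnit.MINUTES);
        return cleaner;
    }

    public static void main(String[] args) {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .build();
        ScheduledExecutorService cleaner = scheduleCleanup(cache);
        // ... use the cache ...
        cleaner.shutdown();
    }
}
```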
Refresh
Refreshing is not the same as eviction. As specified by LoadingCache.refresh(K), a refresh loads a new value for the key, possibly asynchronously. While a refresh is in progress, the cache can still serve the old value to other threads, unlike eviction, where a reading thread must wait until the new value has finished loading.
If the refresh throws an exception, the cache keeps the old value, and the exception is logged and then swallowed.
By overriding CacheLoader.reload(K, V) you can extend refresh behavior, for example to use the old value while computing the new one.
Some keys don't need refreshing, and we want refreshes to be done asynchronously:

    LoadingCache<Key, Graph> graphs = CacheBuilder.newBuilder()
            .maximumSize(1000)
            .refreshAfterWrite(1, TimeUnit.MINUTES)
            .build(
                    new CacheLoader<Key, Graph>() {
                        public Graph load(Key key) { // no checked exception
                            return getGraphFromDatabase(key);
                        }

                        public ListenableFuture<Graph> reload(final Key key, Graph prevGraph) {
                            if (neverNeedsRefresh(key)) {
                                return Futures.immediateFuture(prevGraph);
                            } else {
                                // asynchronous!
                                ListenableFutureTask<Graph> task =
                                        ListenableFutureTask.create(new Callable<Graph>() {
                                            public Graph call() {
                                                return getGraphFromDatabase(key);
                                            }
                                        });
                                executor.execute(task);
                                return task;
                            }
                        }
                    });
CacheBuilder.refreshAfterWrite(long, TimeUnit) adds automatic timed refreshing to a cache. In contrast to expireAfterWrite, refreshAfterWrite keeps a cache entry available while it is refreshed on a timer. Note, however, that an entry is only actually refreshed when it is queried (and if CacheLoader.reload is implemented asynchronously, the query is not slowed down by the refresh). Therefore, if you declare both expireAfterWrite and refreshAfterWrite on the same cache, the expiration timer is not blindly reset by refreshes: if an entry is never queried after it becomes eligible for refresh, no refresh actually happens, and the entry can still be evicted once its expiration time passes.
Other Features
Statistics
CacheBuilder.recordStats() turns on statistics collection for a Guava cache. With statistics enabled, the Cache.stats() method returns a CacheStats object providing statistics such as:
hitRate(): the cache hit rate;
averageLoadPenalty(): the average time spent loading new values, in nanoseconds;
evictionCount(): the total number of entries evicted, not counting explicit invalidations.
There are many more statistics besides. They are critical for tuning cache settings, and we recommend keeping a close eye on them in performance-sensitive applications.
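A minimal sketch of reading these statistics, assuming Guava is on the classpath; the loader that upper-cases its key is just a stand-in for a real computation:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.CacheStats;
import com.google.common.cache.LoadingCache;

public class StatsDemo {
    /** Performs one miss and one hit, then returns the recorded statistics. */
    public static CacheStats demo() {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .recordStats() // without this call, stats() returns all zeros
                .build(new CacheLoader<String, String>() {
                    @Override public String load(String key) {
                        return key.toUpperCase();
                    }
                });

        cache.getUnchecked("a"); // miss: triggers a load
        cache.getUnchecked("a"); // hit: served from the cache
        return cache.stats();
    }

    public static void main(String[] args) {
        CacheStats stats = demo();
        System.out.println(stats.hitCount());  // prints "1"
        System.out.println(stats.missCount()); // prints "1"
        System.out.println(stats.hitRate());   // prints "0.5"
    }
}
```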
asMap View
The asMap view exposes the cache as a ConcurrentMap, but interactions between the view and the cache need some care:
cache.asMap() contains every entry currently loaded in the cache. Accordingly, cache.asMap().keySet() contains all currently loaded keys;
asMap().get(key) is essentially equivalent to cache.getIfPresent(key) and never causes an entry to be loaded. This is consistent with the Map contract.
All read and write operations reset the associated entry's access time, including Cache.asMap().get(Object) and Cache.asMap().put(K, V), but not Cache.asMap().containsKey(Object), and not operations on the collection views of Cache.asMap(). For example, iterating through Cache.asMap().entrySet() does not reset the access time of the entries you retrieve.
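The no-loading behavior of the asMap view can be demonstrated directly. A small sketch, assuming Guava is on the classpath; the length-based loader is a stand-in:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class AsMapDemo {
    /** Shows that asMap().get does not load, while getUnchecked does. */
    public static boolean[] demo() {
        LoadingCache<String, Integer> cache = CacheBuilder.newBuilder()
                .build(new CacheLoader<String, Integer>() {
                    @Override public Integer load(String key) {
                        return key.length();
                    }
                });

        // asMap().get never loads: the key is absent, so we just get null back.
        boolean loadedByView = cache.asMap().get("hello") != null;     // false

        cache.getUnchecked("hello"); // this DOES load, via the CacheLoader

        // Now the loaded entry is visible through the view.
        boolean visibleAfterLoad = cache.asMap().containsKey("hello"); // true

        return new boolean[] { loadedByView, visibleAfterLoad };
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println(r[0] + " " + r[1]); // prints "false true"
    }
}
```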
Interruption
Cache loading methods, such as Cache.get, never throw InterruptedException. We could have allowed these methods to support InterruptedException, but that support would inevitably be incomplete, and would impose its cost on all users while benefiting only a few. Read on for details.
A Cache.get call that requests an uncached value runs into one of two situations: either the current thread loads the value, or it waits for another thread that is already loading it. Interruption differs between the two. Waiting for another thread's in-progress load is the simpler case: interruption could be supported with an interruptible wait. Loading the value in the current thread is harder: here we are at the mercy of the user-supplied CacheLoader. If it happens to support interruption, we can support interruption; if not, there is nothing we can do.
So why not have Cache.get support interruption when the user-supplied CacheLoader does? In a sense, it does: if the CacheLoader throws InterruptedException, Cache.get returns promptly (just as with any other exception). In addition, in the thread that is loading the value, Cache.get catches the InterruptedException and restores the interrupt status, while in other threads the InterruptedException is wrapped in an ExecutionException.
In principle, we could remove the wrapping and turn the ExecutionException into an InterruptedException, but that would force every LoadingCache user to handle interrupt exceptions, even when the CacheLoader they supply is not interruptible. It might still be worthwhile, considering that every non-loading thread's wait could then be interrupted. But many caches are used only in a single thread, and their users would still have to catch an InterruptedException that can never be thrown. Even users who share caches across threads would only sometimes be able to interrupt their get calls, depending on which thread happened to make the request first.
Our guiding principle for this decision is to have the cache behave as if the value were being loaded in the current thread. That principle makes it trivial to switch between computing a value on every call and caching the value. If the old code (the code that computed the value) was not interruptible, the new code (the code that uses the cache to load the value) should probably not be interruptible either.
As noted above, Guava Cache supports interruption in one sense. In another sense it does not, which makes LoadingCache a leaky abstraction: when a load is interrupted, the interruption is treated like any other exception, which is fine in most cases; but if multiple threads are waiting on the same cache entry, interrupting the loading thread should not fail the others (who would catch an InterruptedException wrapped in an ExecutionException); the correct behavior would be for one of the remaining threads to retry the load. We have filed a bug for this. However, rather than risk fixing it, we may instead invest the effort in a proposed AsyncLoadingCache, which would return Future objects with correct interruption behavior.