A simple description of the source packages:
com.google.common.annotations: common annotation types.
com.google.common.base: basic utility classes and interfaces.
com.google.common.cache: caching toolkit; a very easy-to-use and powerful in-JVM cache.
com.google.common.collect: collection interfaces, generic extensions and implementations, plus utility classes; here you will find many interesting collections.
com.google.common.eventbus: a publish-subscribe-style event bus.
com.google.common.hash: hashing toolkit.
com.google.common.io: I/O toolkit.
com.google.common.math: arithmetic toolkit for primitive types and very large numbers.
com.google.common.net: networking toolkit.
com.google.common.primitives: static utilities for the eight primitive types and unsigned types.
com.google.common.reflect: reflection toolkit.
com.google.common.util.concurrent: multithreading toolkit.
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.concurrent.TimeUnit;

/**
 * Created by Hxiuz on 2018/2/10.
 */
public class TokenCache {

    private static Logger logger = LoggerFactory.getLogger(TokenCache.class);

    // Beyond 10000 entries, the LRU algorithm (a cache eviction algorithm) clears entries from the cache
    public static LoadingCache<String, String> localCache = CacheBuilder.newBuilder()
            .initialCapacity(1000) // the original value was garbled; 1000 is assumed here
            .maximumSize(10000)
            .expireAfterAccess(10, TimeUnit.MINUTES)
            .build(new CacheLoader<String, String>() {
                // Default data-loading implementation: when get is called and the key
                // has no corresponding value, this method is invoked to load one
                @Override
                public String load(String s) throws Exception {
                    return "null";
                }
            });

    public static void setKey(String key, String value) {
        localCache.put(key, value);
    }

    public static String getKey(String key) {
        String value = null;
        try {
            value = localCache.get(key);
            if ("null".equals(value)) {
                return null;
            }
            return value;
        } catch (Exception e) {
            // print the exception stack
            logger.error("localCache get error", e);
        }
        return null;
    }
}
LRU Algorithm Development
Principle
The core idea of the LRU (Least Recently Used) algorithm is to evict data based on its historical access pattern: "if data has been accessed recently, the chance it will be accessed again in the future is higher."
Implementation diagram
The most common implementation uses a linked list to hold the cached data; the algorithm works as follows:
1. Insert new data at the head of the list;
2. Whenever the cache hits (that is, cached data is accessed), move that data to the head of the list;
3. When the list is full, discard the data at the tail of the list.
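The three steps above can be sketched with the JDK's LinkedHashMap, which keeps its entries in a doubly linked list and, when constructed in access order, already behaves like the list just described. The class name LruCache and the capacity of 3 are illustrative choices, not part of the original text:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch built on LinkedHashMap in access order.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true moves an entry to the most-recently-used end
        // on every hit, which mirrors step 2 above
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Step 3: when the cache is over capacity, discard the
        // least recently used entry
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");    // hit: "a" becomes most recently used
        cache.put("d", 4); // full: evicts "b", the least recently used
        System.out.println(cache.keySet()); // [c, a, d]
    }
}
```

LinkedHashMap iterates from least to most recently used, so the eldest entry removed in removeEldestEntry is exactly the tail of the LRU list.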
Analysis
Hit rate
When there is hot data, LRU efficiency is very good, but occasional periodic batch operations cause the LRU hit rate to drop sharply, and the resulting cache pollution can be severe.
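A minimal sketch of that pollution effect, again using LinkedHashMap in access order; the key names and the capacity of 3 are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Demonstrates LRU cache pollution: a one-off batch scan evicts the hot working set.
public class LruPollutionDemo {

    // Returns a LinkedHashMap configured as a tiny LRU cache
    static Map<String, Integer> newLruCache(int capacity) {
        return new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = newLruCache(3);
        cache.put("hot1", 1);
        cache.put("hot2", 2);
        cache.get("hot1"); // the hot keys are the ones accessed repeatedly

        // A one-off batch scan touches many cold keys exactly once...
        for (int i = 0; i < 3; i++) {
            cache.put("cold" + i, i);
        }
        // ...and it has evicted both hot keys, even though the cold keys
        // will likely never be read again
        System.out.println(cache.keySet()); // [cold0, cold1, cold2]
    }
}
```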
Complexity
Simple to implement.
Cost
A hit requires traversing the linked list to find the hit entry, after which the data must be moved to the head; pairing the list with a hash map (as LinkedHashMap does) removes the traversal.
TokenCache and the LRU algorithm for retrieving passwords using Guava