Background
For data that is accessed frequently but updated infrequently we generally use a cache, especially in high-concurrency business scenarios. The most basic approach is to store the data in a HashMap or ConcurrentHashMap.
There is nothing wrong with that, but there is a problem: an element is only removed from such a cache when we explicitly call the remove method. Even among high-frequency data, access hit rates differ, memory is always limited, and we cannot grow the map without bound.
A more ideal scenario looks like this: for a given business I only want to allocate, say, 2K of memory to you. If we assume one entry (key-value pair) in the map takes 1B, then at most 2048 entries can be stored. Once the data reaches that size, some of the rarely accessed entries must be evicted to make room for new ones. This is hard to implement with a plain HashMap, because we do not know which entries have a low access rate (unless we log every access ourselves). This is where Guava offers a component optimized for in-memory caching.
Get ready
As said above, we need an eviction strategy that automatically filters the cached data. Below is a quick look at several eviction algorithms.
FIFO: as the name implies, this strategy evicts whatever entered first. It is simple and brutal, but it can throw away data that is accessed frequently.
Least Recently Used (LRU): this algorithm effectively mitigates the FIFO problem. Frequently accessed data is less likely to be evicted, although that cannot be avoided completely. Some of Guava Cache's behavior follows this algorithm (a minimal sketch follows this list).
Least Frequently Used (LFU): this algorithm improves on LRU again by recording the number of accesses per entry and evicting data based on both access time and access count.
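To make the LRU idea concrete, here is a minimal, single-threaded sketch built on java.util.LinkedHashMap. It only illustrates the algorithm; it is not how Guava Cache is implemented internally, and the class name is made up for the example.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: once the capacity is exceeded, the least recently
// accessed entry is evicted. Not thread-safe; illustration only.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true: iteration order follows access order, not insertion order
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a" so it becomes the most recently used entry
        cache.put("c", "3"); // capacity exceeded: "b", the least recently used entry, is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}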
Guava Cache Basics
Guava Cache provides a thread-safe caching implementation that is easy to use and low-cost; it is worth considering whenever your business scenario needs to use memory as a cache.
The Guava Cache mechanism has two interfaces, Cache and LoadingCache. LoadingCache is also an interface; it extends Cache and adds a few extra methods. If we want to instantiate a cache object we also need to understand the CacheBuilder class, which is what actually builds cache objects. Let us first use CacheBuilder to instantiate a cache object and then learn what its fields mean.
public static void main(String[] args) {
    Cache<String, String> myMap = CacheBuilder.newBuilder()
            .expireAfterAccess(30L, TimeUnit.SECONDS)
            .expireAfterWrite(3L, TimeUnit.MINUTES)
            .concurrencyLevel(6)
            .initialCapacity(100)
            .maximumSize(1000)
            .softValues()
            .build();
    myMap.put("name", "Zhang San");
    System.out.println(myMap.getIfPresent("name"));
}
This gives us a cache object that can be used much like a Map. The object created above can be described as follows:
A cache object has been created with an initial capacity of 100 (100 key-value pairs) and a maximum size of 1000. Entries are automatically removed 3 minutes after they are written, and are also removed if they have not been accessed within 30 seconds. In addition, this map-like object supports up to 6 callers updating its data at the same time, that is, the maximum number of concurrent update operations is 6.
You can see there is also a softValues() property that has not been explained; it is covered further below. In fact CacheBuilder has more configurable properties than the ones shown, so let us go through them in detail.
Some of the commonly used fields in CacheBuilder:
concurrencyLevel(int): specifies the number of update operations allowed to run concurrently. If it is not set, CacheBuilder defaults to 4. This parameter affects how the cache storage space is segmented: it can be simply understood as creating that many maps, each called a segment, with the data spread across the segments. Set the value according to your actual needs.
initialCapacity(int): specifies the initial capacity of the cache. If it is set to 40 and concurrencyLevel is left at its default, the cache is divided into 4 segments with an initial size of 10 each. When data is updated, its segment is locked, which is exactly why the number of concurrent update operations equals 4. To extend this a little: during eviction, each segment also maintains its own eviction policy. In other words, if each segment is too large, lock contention becomes fierce.
maximumSize(long): specifies the maximum cache size. When the amount of cached data reaches this maximum, infrequently used entries are evicted according to the policy. It is important to note that eviction can already start as the amount of cached data approaches the maximum, which can simply be understood as evicting ahead of time.
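As a small illustration of size-based eviction, here is a sketch with a deliberately tiny maximumSize; which entry gets evicted depends on recency, so the comments only describe the typical outcome.

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class MaximumSizeDemo {
    public static void main(String[] args) {
        // concurrencyLevel(1) keeps everything in one segment so the tiny maximumSize behaves predictably
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .concurrencyLevel(1)
                .maximumSize(2)
                .build();

        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3"); // exceeds maximumSize: the least recently used entry ("a") is typically evicted

        System.out.println(cache.size());           // 2
        System.out.println(cache.asMap().keySet()); // typically [b, c]
    }
}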
The last three parameters are softValues(), weakKeys() and weakValues(). Before explaining them we need to look at soft references and weak references in Java.
In contrast to soft and weak references there are strong references, which are what we use most in everyday coding: the variables and objects we declare are basically all strong references. The JVM will not reclaim such objects during GC, even if that means throwing an OOM error.
Weak references are different: a value declared with java.lang.ref.WeakReference is reclaimed whenever the JVM runs garbage collection. And soft references? An object wrapped with SoftReference is declared as a soft reference and is reclaimed only when the JVM is running out of memory.
The difference, briefly summarized: a soft reference is reclaimed only when memory is insufficient and survives normal garbage collection, while a weak reference is cleared whenever the JVM runs garbage collection.
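A quick sketch of the difference. Garbage collection behavior is not guaranteed, so the comments describe only what you will typically observe:

import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        // Strong reference: never reclaimed while it is reachable.
        String strong = new String("strong");

        // Weak reference: eligible for collection at the next GC.
        WeakReference<byte[]> weak = new WeakReference<>(new byte[1024]);

        // Soft reference: reclaimed only when the JVM is running low on memory.
        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);

        System.gc(); // request a GC (the JVM may or may not honor it right away)

        System.out.println("weak after GC: " + weak.get()); // very likely null
        System.out.println("soft after GC: " + soft.get()); // usually still present
        System.out.println("strong: " + strong);            // always present
    }
}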
softValues(): stores the values in the cache as soft references, that is, the real data is wrapped in SoftReference instances. Values set with softValues() are managed by the garbage collector and are reclaimed in a globally least-recently-used manner when memory is needed. After a value has been garbage collected it may still be counted by the size method, but reads and writes no longer see the entry.
weakKeys() and weakValues(): when keys or values are declared weak, identity (==) is used to match the key or value (with the default strong references the equals method is used). In this case the data may be garbage collected; after it has been collected it may still be counted by the size method, but reads and writes against it are effectively no-ops.
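A short sketch of the identity-comparison behavior that weakKeys() switches on (the key and value here are made up for the example):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class WeakKeysDemo {
    public static void main(String[] args) {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .weakKeys() // keys are now compared with == instead of equals()
                .build();

        String key = new String("name");
        cache.put(key, "Zhang San");

        // Same identity: hit.
        System.out.println(cache.getIfPresent(key));                // Zhang San
        // Equal but different identity: miss, because of the identity comparison.
        System.out.println(cache.getIfPresent(new String("name"))); // null
    }
}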
Guava Cache usage in Spring projects
Below is how I actually use it in a project, integrating Guava Cache into a Spring project.
1. Add Guava's Maven dependency
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>26.0-jre</version>
</dependency>
The version above was the latest available when I wrote this note.
2. Add the configuration to application-context.xml
<!-- turn on cache annotations -->
<cache:annotation-driven/>
<bean id="cacheManager" class="org.springframework.cache.guava.GuavaCacheManager">
    <property name="cacheSpecification"
              value="initialCapacity=500,maximumSize=5000,expireAfterAccess=2m,softValues"/>
    <property name="cacheNames">
        <list>
            <value>questionCreatedTrack</value>
        </list>
    </property>
</bean>
In the configuration above we provide a CacheManager, which must be configured. The default is org.springframework.cache.support.SimpleCacheManager; here we replace it with the Guava cache manager implementation (a Java-config equivalent is sketched below). If you use another implementation, such as Redis, you only need to configure the corresponding cache manager for Redis.
A CacheManager can simply be understood as the place where caches are kept; a cache holds the data we want to cache, generally as key-value pairs.
Of the two properties declared in the bean above, cacheSpecification needs no further explanation; refer to the detailed parameters described earlier. What you should know is that this parameter is handled by the CacheBuilderSpec class, which creates a CacheBuilder instance by parsing a string that represents a CacheBuilder configuration.
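For illustration, here is a sketch of parsing that same specification string by hand with CacheBuilderSpec; the key and value stored at the end are made up for the example:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheBuilderSpec;

public class SpecDemo {
    public static void main(String[] args) {
        // The same string used for cacheSpecification in the XML above.
        CacheBuilderSpec spec = CacheBuilderSpec.parse(
                "initialCapacity=500,maximumSize=5000,expireAfterAccess=2m,softValues");

        Cache<String, String> cache = CacheBuilder.from(spec).build();
        cache.put("questionCreatedTrack:1", "42");
        System.out.println(cache.getIfPresent("questionCreatedTrack:1"));
    }
}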
cacheNames can be chosen according to your actual business, and you can declare several of them.
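If you prefer Java configuration over XML, the same cache manager can be declared roughly like this (a sketch, assuming a Spring 4.x version that still ships org.springframework.cache.guava.GuavaCacheManager):

import java.util.Arrays;

import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.guava.GuavaCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching // equivalent of <cache:annotation-driven/>
public class CacheConfig {

    @Bean
    public GuavaCacheManager cacheManager() {
        GuavaCacheManager cacheManager = new GuavaCacheManager();
        // Same specification string as the cacheSpecification property above.
        cacheManager.setCacheSpecification(
                "initialCapacity=500,maximumSize=5000,expireAfterAccess=2m,softValues");
        cacheManager.setCacheNames(Arrays.asList("questionCreatedTrack"));
        return cacheManager;
    }
}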
3. Use Spring's cache-related annotations in your code
@Cacheable(value = "questionCreatedTrack", key = "#voiceId", condition = "#voiceId > 0")
public Long getQuestionIdByVoiceId(long anchorId, long voiceId) {
    String key = String.format(HOMEWORK_QUESTION_ANCHOR_KEY, anchorId);
    String value = redisProxy.getValue(key, String.valueOf(voiceId));
    return StringUtils.isEmpty(value) ? null : Long.parseLong(value);
}

@CachePut(value = "questionCreatedTrack", key = "#voiceId", condition = "#voiceId > 0")
public Long statCollectionQuestionToCache(long anchorId, long voiceId, long questionId) {
    String key = String.format(HOMEWORK_QUESTION_ANCHOR_KEY, anchorId);
    redisProxy.setOneToHash(key, String.valueOf(voiceId), String.valueOf(questionId));
    return questionId;
}

@CacheEvict(value = "questionCreatedTrack", key = "#voiceId")
public void removeCollectionQuestionFromCache(long anchorId, long voiceId) {
    String key = String.format(HOMEWORK_QUESTION_ANCHOR_KEY, anchorId);
    redisProxy.deleteOneToHash(key, String.valueOf(voiceId));
}
First, the overall logic here: I mainly use memory as a first-level cache and Redis as a second-level cache. The roles of the three methods above are:
getQuestionIdByVoiceId(...): queries the questionId by voiceId. The @Cacheable annotation means that when execution reaches this method the Guava cache is checked first; if the value is there it is returned directly and the method body is not executed. Otherwise the method is executed, its return value is added to the cache, and the key is the method parameter voiceId. So the cache structure here is key = voiceId, value = questionId.
statCollectionQuestionToCache(...): this method saves voiceId and questionId into Redis. The @CachePut annotation means: do not look in the cache, always execute the method, and after it has run add the key-value pair to the cache.
removeCollectionQuestionFromCache(...): this method deletes the data for the key voiceId from Redis. The @CacheEvict annotation means: clear the entry with key voiceId from the cache.
With these three annotations we get the following behavior: when querying the questionId for, say, voiceId = 123, the cache is checked first; if the value is not there, Redis is queried and, if Redis has it, the value is added to the cache at the same time (if Redis does not have it, the null result is also cached, which is discussed below). The cache and Redis stay in sync when new data is added and when data is removed.
How to handle a null query result from an @Cacheable method
This requires us to decide, based on our needs, whether data whose query result is null should be cached. If it should not be, use the following annotation:
@Cacheable(value = "questionCreatedTrack", key = "#voiceId", condition = "#voiceId > 0", unless = "#result == null")
Resources
http://www.voidcn.com/article/p-pvvfgdga-bos.html
www.cnblogs.com/fashflying/p/6908028.html