Implementing Caches in Java (LRU, FIFO)

Source: Internet
Author: User

Better to write something than nothing. Today's topic is how to implement a cache in Java, a question that comes up in many interviews.

1. Why implement a cache in Java?

As the concurrency of software or web traffic grows, a large number of requests hitting the database directly puts great pressure on it, and handling that many requests and connections takes a long time. Since roughly 70% of the data in a database does not need to be modified, it can be served from a cache instead, reducing the pressure on the database.

The usual choices are Redis and Memcached, but for small scenarios a cache implemented directly in Java can meet the needs of that part of the service.

The main cache eviction policies are LRU and FIFO. LRU is short for Least Recently Used: the entry that has gone unused the longest is evicted first. FIFO is short for First In, First Out: entries are evicted in the order they were inserted.

The following shows how to implement both caches in Java.

2. The idea of an LRU cache

1. The cache size is fixed, so a fixed capacity must be allocated to the cache.

2. Each read of an entry updates its access time, refreshing its position in the cache.

3. When the cache is full, the least recently used entry must be removed before a new one is added.

Based on these ideas, we can use LinkedHashMap to implement an LRU cache.

LinkedHashMap provides a removeEldestEntry hook, shown below with its default implementation. When it returns true, the eldest entry is removed, so by overriding it to return true once the cache exceeds its fixed size, the least recently used element is deleted automatically when the cache is full. This satisfies requirement 3 above.

protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return false;
}
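A minimal, hypothetical sketch of overriding this hook on an anonymous LinkedHashMap subclass (the capacity of 3 and the key/value types are illustrative, not from the article):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EldestDemo {
    public static void main(String[] args) {
        int maxSize = 3; // illustrative capacity
        // accessOrder = true, so iteration order tracks recency of access
        Map<Integer, String> cache = new LinkedHashMap<Integer, String>(8, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                // Evict the least recently used entry once we exceed maxSize
                return size() > maxSize;
            }
        };
        cache.put(1, "a");
        cache.put(2, "b");
        cache.put(3, "c");
        cache.put(4, "d"); // triggers eviction of key 1
        System.out.println(cache.keySet()); // prints [2, 3, 4]
    }
}
```

Note that the hook only decides whether to evict; LinkedHashMap itself removes the eldest entry after each put.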

Since LinkedHashMap expands automatically, doubling its table once the number of elements exceeds capacity * loadFactor, the capacity and load factor must be passed in at construction time to keep the cache size fixed. To prevent any resize before the cache reaches its target size, the initial capacity should be calculated up front as (cacheSize / loadFactor) + 1; then no expansion happens when the number of elements reaches the cache size. This solves requirement 1 above.
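As a quick sanity check of that formula, a small sketch (the cache size of 5 is illustrative; 0.75f is LinkedHashMap's default load factor):

```java
public class CapacityCalc {
    public static void main(String[] args) {
        int cacheSize = 5;
        float loadFactor = 0.75f;
        // ceil(5 / 0.75) + 1 = 7 + 1 = 8
        int capacity = (int) Math.ceil(cacheSize / loadFactor) + 1;
        // Resize threshold is capacity * loadFactor = 8 * 0.75 = 6 > 5,
        // so filling the cache to 5 entries never triggers a resize.
        System.out.println(capacity); // prints 8
    }
}
```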


Code implementation
package com.huojg.test.Test;

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class LRUCache<K, V> {

    private final int MAX_CACHE_SIZE;
    private final float DEFAULT_LOAD_FACTOR = 0.75f;

    LinkedHashMap<K, V> map;

    public LRUCache(int cacheSize) {
        MAX_CACHE_SIZE = cacheSize;
        int capacity = (int) Math.ceil(MAX_CACHE_SIZE / DEFAULT_LOAD_FACTOR) + 1;
        /*
         * The third constructor argument set to true orders the map by access
         * order, which is what an LRU cache needs; set it to false to order
         * by insertion and get a FIFO cache instead.
         */
        map = new LinkedHashMap<K, V>(capacity, DEFAULT_LOAD_FACTOR, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > MAX_CACHE_SIZE;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, value);
    }

    public synchronized V get(K key) {
        return map.get(key);
    }

    public synchronized void remove(K key) {
        map.remove(key);
    }

    public synchronized Set<Map.Entry<K, V>> getAll() {
        return map.entrySet();
    }

    @Override
    public String toString() {
        StringBuilder stringBuilder = new StringBuilder();
        for (Map.Entry<K, V> entry : map.entrySet()) {
            stringBuilder.append(String.format("%s:%s  ", entry.getKey(), entry.getValue()));
        }
        return stringBuilder.toString();
    }

    public static void main(String[] args) {
        LRUCache<Integer, Integer> lru1 = new LRUCache<>(5);
        lru1.put(1, 1);
        lru1.put(2, 2);
        lru1.put(3, 3);
        System.out.println(lru1);
        lru1.get(1);
        System.out.println(lru1);
        lru1.put(4, 4);
        lru1.put(5, 5);
        lru1.put(6, 6);
        System.out.println(lru1);
    }
}

Result output:

1:1  2:2  3:3
2:2  3:3  1:1
3:3  1:1  4:4  5:5  6:6

This implements the idea of the LRU cache described above.

3. The FIFO cache

FIFO means first in, first out, and it can also be implemented with LinkedHashMap.
When the third constructor parameter is false (or omitted, which is the default), entries are ordered by insertion order, which is exactly a FIFO cache.

The implementation is essentially the same as the LinkedHashMap-based LRU code above; only the constructor arguments differ.

package com.huojg.test.Test;

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class FIFOCache<K, V> {

    private final int MAX_CACHE_SIZE;
    private final float DEFAULT_LOAD_FACTOR = 0.75f;

    LinkedHashMap<K, V> map;

    public FIFOCache(int cacheSize) {
        MAX_CACHE_SIZE = cacheSize;
        int capacity = (int) Math.ceil(MAX_CACHE_SIZE / DEFAULT_LOAD_FACTOR) + 1;
        /*
         * The third constructor argument set to false orders the map by
         * insertion order, producing a FIFO cache; set it to true to order
         * by access order and get an LRU cache instead.
         */
        map = new LinkedHashMap<K, V>(capacity, DEFAULT_LOAD_FACTOR, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > MAX_CACHE_SIZE;
            }
        };
    }

    public synchronized void put(K key, V value) {
        map.put(key, value);
    }

    public synchronized V get(K key) {
        return map.get(key);
    }

    public synchronized void remove(K key) {
        map.remove(key);
    }

    public synchronized Set<Map.Entry<K, V>> getAll() {
        return map.entrySet();
    }

    @Override
    public String toString() {
        StringBuilder stringBuilder = new StringBuilder();
        for (Map.Entry<K, V> entry : map.entrySet()) {
            stringBuilder.append(String.format("%s:%s  ", entry.getKey(), entry.getValue()));
        }
        return stringBuilder.toString();
    }

    public static void main(String[] args) {
        FIFOCache<Integer, Integer> fifo1 = new FIFOCache<>(5);
        fifo1.put(1, 1);
        fifo1.put(2, 2);
        fifo1.put(3, 3);
        System.out.println(fifo1);
        fifo1.get(1);
        System.out.println(fifo1);
        fifo1.put(4, 4);
        fifo1.put(5, 5);
        fifo1.put(6, 6);
        System.out.println(fifo1);
    }
}

Result output

1:1  2:2  3:3
1:1  2:2  3:3
2:2  3:3  4:4  5:5  6:6

The above shows how to implement both kinds of cache in Java. As you can see, building a cache on LinkedHashMap is easy because the underlying class already supports the ordering we need; a hand-rolled linked-list LRU cache would follow the same idea as LinkedHashMap's own implementation. LinkedHashMap is not the only option: other structures such as ConcurrentHashMap and WeakHashMap can also serve as caches in some scenarios. Which data structure to use ultimately depends on the scenario, but many of the ideas carry over.
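To illustrate the ConcurrentHashMap point, here is a minimal hypothetical sketch (the class and method names are my own, not from the article) of a thread-safe loading cache built on computeIfAbsent. Note that it has no eviction policy of its own, so it only suits small, bounded key spaces:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch: a thread-safe loading cache with no eviction.
public class LoadingCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public LoadingCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // computeIfAbsent runs the loader at most once per absent key,
        // atomically, and returns the cached value on later calls.
        return map.computeIfAbsent(key, loader);
    }

    public static void main(String[] args) {
        LoadingCache<Integer, Integer> squares = new LoadingCache<>(k -> k * k);
        System.out.println(squares.get(4)); // prints 16 (computed)
        System.out.println(squares.get(4)); // prints 16 (served from cache)
    }
}
```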





