Java high-concurrency caching with Guava Cache


I. Background

A cache is used to improve system performance: the result of processing frequently accessed business data is stored in the cache, so that the next request for the same data does not have to repeat the processing. This improves overall system performance. There are several types of caches:

(1) Local cache.

(2) Database cache.

(3) Distributed cache.

The most commonly used distributed cache is memcached. Memcached is a high-performance distributed in-memory cache server: it caches the results of business processing, reducing the number of database accesses and the time spent repeating the same complex logic, which improves the speed and scalability of dynamic web applications.
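For illustration only (this snippet is not part of the original article), a cache-aside lookup against memcached could look roughly like the sketch below using the spymemcached client; the server address, key, value and expiration are placeholder values:

import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

public class MemcachedAsideExample {
    public static void main(String[] args) throws Exception {
        // Connect to a memcached server (address and port are placeholders)
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));

        String key = "user:42";
        Object value = client.get(key);      // 1. try the cache first
        if (value == null) {
            value = "dataValue";             // 2. simulate the expensive lookup (database access, etc.)
            client.set(key, 3600, value);    // 3. cache the result for one hour
        }
        System.out.println(value);
        client.shutdown();
    }
}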

II. Local cache problems under high concurrency and how to resolve them

This article focuses on the local cache. We use java.util.concurrent.ConcurrentHashMap to store the cached data. ConcurrentHashMap is a thread-safe hash table: it provides the same functionality as Hashtable but with thread-safe methods, and reads do not need to take a lock, which improves concurrency. For now we do not consider element eviction or memory overflow; we simply use ConcurrentHashMap to simulate a local cache. What happens under high concurrency?


We use multiple threads to simulate the high-concurrency scenario.


First approach. Let's look at the code:


import java.util.concurrent.ConcurrentHashMap;

public class TestConcurrentHashMapCache<K, V> {

    private final ConcurrentHashMap<K, V> cacheMap = new ConcurrentHashMap<K, V>();

    public Object getCache(K keyValue, String threadName) {
        System.out.println("threadName getCache==============" + threadName);
        Object value = null;
        // Get the data from the cache
        value = cacheMap.get(keyValue);
        // If there is nothing in the cache, process the data and cache it
        if (value == null) {
            return putCache(keyValue, threadName);
        }
        return value;
    }

    public Object putCache(K keyValue, String threadName) {
        System.out.println("threadName executes the business logic and returns the processed result (database access, etc.)==============" + threadName);
        // In a real system the data would come from the database; here the result is simulated
        @SuppressWarnings("unchecked")
        V value = (V) "dataValue";
        // Put the data into the cache
        cacheMap.put(keyValue, value);
        return value;
    }

    public static void main(String[] args) {
        final TestConcurrentHashMapCache<String, String> testCache = new TestConcurrentHashMapCache<String, String>();

        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t1======start========");
                Object value = testCache.getCache("key", "t1");
                System.out.println("t1 value==============" + value);
                System.out.println("t1======end========");
            }
        });

        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t2======start========");
                Object value = testCache.getCache("key", "t2");
                System.out.println("t2 value==============" + value);
                System.out.println("t2======end========");
            }
        });

        Thread t3 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t3======start========");
                Object value = testCache.getCache("key", "t3");
                System.out.println("t3 value==============" + value);
                System.out.println("t3======end========");
            }
        });

        t1.start();
        t2.start();
        t3.start();
    }
}
Let's take a look at the results:

Running this local-cache code, we find that with multiple threads none of them hits the cache: each thread executes the business logic and returns the processed data itself. Let's analyze why:

(1) Thread T1 accesses cacheMap, finds nothing, processes the business data in the background, returns the result and puts it into the cache.

(2) Before T1 has put its result into the cache, thread T2 accesses cacheMap, also finds nothing, and likewise processes the business data, returns the result and puts it into the cache.

Second approach.

So the same business logic is processed twice, and under high concurrency it may run far more than twice, because the cache is never hit. Thinking of Java multithreading, the first idea is to add synchronized to the method that reads the cache. The code is as follows:

import java.util.concurrent.ConcurrentHashMap;

public class TestConcurrentHashMapCache<K, V> {

    private final ConcurrentHashMap<K, V> cacheMap = new ConcurrentHashMap<K, V>();

    // The only change from the first version: getCache is now synchronized
    public synchronized Object getCache(K keyValue, String threadName) {
        System.out.println("threadName getCache==============" + threadName);
        Object value = null;
        // Get the data from the cache
        value = cacheMap.get(keyValue);
        // If there is nothing in the cache, process the data and cache it
        if (value == null) {
            return putCache(keyValue, threadName);
        }
        return value;
    }

    public Object putCache(K keyValue, String threadName) {
        System.out.println("threadName executes the business logic and returns the processed result (database access, etc.)==============" + threadName);
        // In a real system the data would come from the database; here the result is simulated
        @SuppressWarnings("unchecked")
        V value = (V) "dataValue";
        cacheMap.put(keyValue, value);
        return value;
    }

    // main(...) is identical to the first version: three threads t1, t2 and t3
    // all call getCache("key", ...) concurrently.
}
The results of the execution:

This makes the calls serial: under high concurrency the same business logic will not execute a second time, and later requests are served from the cache. But synchronized turns cache access into a serial bottleneck, so performance under high concurrency also drops.
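As a side note that is not in the original article: on Java 8 and later, ConcurrentHashMap.computeIfAbsent already provides atomic per-key loading, so only one thread runs the loader for a given key while other threads wait for that key only, without making the whole getCache method synchronized. A minimal sketch, with illustrative class and variable names:

import java.util.concurrent.ConcurrentHashMap;

public class ComputeIfAbsentCache<K, V> {

    private final ConcurrentHashMap<K, V> cacheMap = new ConcurrentHashMap<K, V>();

    public V getCache(K keyValue, String threadName) {
        // The mapping function runs at most once per missing key; threads asking for
        // the same key block until it finishes and then all see the same cached value.
        return cacheMap.computeIfAbsent(keyValue, key -> {
            System.out.println(threadName + " executes the business logic (database access, etc.)");
            @SuppressWarnings("unchecked")
            V value = (V) "dataValue"; // simulated result
            return value;
        });
    }
}

The Javadoc warns that the mapping function should be short and simple because it blocks other updates to that part of the map, so for expensive loads the Future-based and Guava approaches below remain attractive.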

Third approach.

To get both performance and cached results, we use Future: Future.get() returns the value once the computation is complete, and otherwise blocks until the task reaches the completed state. Combined with ConcurrentHashMap's putIfAbsent method, the code is as follows:

    
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class TestFutureCache<K, V> {

    private final ConcurrentHashMap<K, Future<V>> cacheMap = new ConcurrentHashMap<K, Future<V>>();

    public Object getCache(K keyValue, String threadName) {
        Future<V> value = null;
        try {
            System.out.println("threadName getCache==============" + threadName);
            // Get the Future from the cache
            value = cacheMap.get(keyValue);
            // If there is nothing in the cache, create the task and cache its Future
            if (value == null) {
                value = putCache(keyValue, threadName);
                return value.get();
            }
            return value.get();
        } catch (Exception e) {
            // swallowed in this demo
        }
        return null;
    }

    public Future<V> putCache(K keyValue, final String threadName) {
        Future<V> value = null;
        Callable<V> callable = new Callable<V>() {
            @SuppressWarnings("unchecked")
            @Override
            public V call() throws Exception {
                // In a real system the data would come from the database; here the result is simulated
                System.out.println("threadName executes the business logic and returns the processed result (database access, etc.)==============" + threadName);
                return (V) "dataValue";
            }
        };
        FutureTask<V> futureTask = new FutureTask<V>(callable);
        // Only the first thread's FutureTask is stored; later threads get the existing Future back
        value = cacheMap.putIfAbsent(keyValue, futureTask);
        if (value == null) {
            value = futureTask;
            futureTask.run();
        }
        return value;
    }

    public static void main(String[] args) {
        final TestFutureCache<String, String> testCache = new TestFutureCache<String, String>();

        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t1======start========");
                Object value = testCache.getCache("key", "t1");
                System.out.println("t1 value==============" + value);
                System.out.println("t1======end========");
            }
        });

        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t2======start========");
                Object value = testCache.getCache("key", "t2");
                System.out.println("t2 value==============" + value);
                System.out.println("t2======end========");
            }
        });

        Thread t3 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t3======start========");
                Object value = testCache.getCache("key", "t3");
                System.out.println("t3 value==============" + value);
                System.out.println("t3======end========");
            }
        });

        t1.start();
        t2.start();
        t3.start();
    }
}

When thread T1 or T2 accesses cacheMap and finds nothing, it builds a FutureTask for the computation. Suppose T1 stores its FutureTask in the ConcurrentHashMap via putIfAbsent: putIfAbsent inserts the value only if the key has no value yet, and otherwise returns the existing value mapped to the key. So when T2 comes in, it gets back a reference to the same Future object; if the computation has not produced a value yet, T2 simply waits until the FutureTask finishes and then receives the result through get().
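A tiny illustration of the putIfAbsent contract described above (the key and values are arbitrary):

import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<String, String>();
        String previous1 = map.putIfAbsent("key", "v1"); // null: "v1" was inserted
        String previous2 = map.putIfAbsent("key", "v2"); // "v1": the key already has a value, "v2" is discarded
        System.out.println(previous1 + " / " + previous2 + " / " + map.get("key")); // null / v1 / v1
    }
}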

This solves the problem of concurrent access to the cache, but elements are still never evicted, which can easily lead to a memory overflow. Google Guava's cache handles both issues well, and we introduce it next.

III. Introduction and application of Google Guava Cache



Download the corresponding jar package from http://www.java2s.com/Code/Jar/g/Downloadguava1401jar.htm


Guava Cache is very similar to ConcurrentMap, but Guava Cache can be configured with eviction, which solves the memory-overflow problem when caching large amounts of data. The source code is as follows:



import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class TestGuava<K, V> {

    // Keep at most 2 entries and expire them some time after they were written
    private Cache<K, V> cache = CacheBuilder.newBuilder()
            .maximumSize(2)
            .expireAfterWrite(2, TimeUnit.MINUTES) // the duration value here is illustrative
            .build();

    public Object getCache(K keyValue, final String threadName) {
        Object value = null;
        try {
            System.out.println("threadName getCache==============" + threadName);
            // Get the data from the cache; if it is missing, the Callable loads it
            value = cache.get(keyValue, new Callable<V>() {
                @SuppressWarnings("unchecked")
                @Override
                public V call() {
                    System.out.println("threadName executes the business logic and returns the processed result (database access, etc.)==============" + threadName);
                    return (V) "dataValue";
                }
            });
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        return value;
    }

    public static void main(String[] args) {
        final TestGuava<String, String> testGuava = new TestGuava<String, String>();

        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t1======start========");
                Object value = testGuava.getCache("key", "t1");
                System.out.println("t1 value==============" + value);
                System.out.println("t1======end========");
            }
        });

        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t2======start========");
                Object value = testGuava.getCache("key", "t2");
                System.out.println("t2 value==============" + value);
                System.out.println("t2======end========");
            }
        });

        Thread t3 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t3======start========");
                Object value = testGuava.getCache("key", "t3");
                System.out.println("t3 value==============" + value);
                System.out.println("t3======end========");
            }
        });

        t1.start();
        t2.start();
        t3.start();
    }
}


Description:

CacheBuilder.newBuilder() can be chained with a set of configuration methods (a combined sketch follows the list below):

(1) maximumSize(long): sets the capacity at which the cache starts evicting entries.

(2) expireAfterAccess(long, TimeUnit): an entry that has not been read or written for the given period is evicted.

(3) expireAfterWrite(long, TimeUnit): an entry that has not been written (updated) for the given period is evicted.

(4) removalListener(RemovalListener): registers a listener that is notified when an entry is removed.
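A minimal sketch combining these settings; the sizes, durations and the listener body are illustrative values rather than part of the original article:

import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class GuavaBuilderOptions {
    public static void main(String[] args) {
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(100)                          // start evicting beyond 100 entries
                .expireAfterAccess(10, TimeUnit.MINUTES)   // evict entries not read or written for 10 minutes
                .expireAfterWrite(30, TimeUnit.MINUTES)    // evict entries not updated for 30 minutes
                .removalListener(new RemovalListener<String, String>() {
                    @Override
                    public void onRemoval(RemovalNotification<String, String> notification) {
                        System.out.println("removed " + notification.getKey() + " because " + notification.getCause());
                    }
                })
                .build();

        cache.put("key", "dataValue");
        System.out.println(cache.getIfPresent("key"));
    }
}

The RemovalCause in the notification tells you whether the entry was removed explicitly, replaced, expired, or evicted because of the size limit.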

The results of executing the TestGuava example:



