Imagine a high-concurrency scenario: suppose we store name=aty in the cache with an expiration time. When the entry expires, ten client requests arrive at the same moment, all needing to read the value of name. Guava's LoadingCache ensures that only one thread is allowed to load the data (for example, from a database) while the other threads wait for that thread's result. This prevents a flood of user requests from penetrating the cache.
import com.google.common.base.Stopwatch;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.UUID;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class Main {

    // Entries expire 1s after last access; loading a key takes 2s
    private static LoadingCache<String, String> cache = CacheBuilder.newBuilder()
            .expireAfterAccess(1, TimeUnit.SECONDS)
            .build(new CacheLoader<String, String>() {
                @Override
                public String load(String key) throws Exception {
                    System.out.println("begin to query db...");
                    Thread.sleep(2000); // simulate a slow database query
                    System.out.println("success to query db...");
                    return UUID.randomUUID().toString();
                }
            });

    private static CountDownLatch latch = new CountDownLatch(1);

    public static void main(String[] args) throws Exception {
        cache.put("name", "aty");
        Thread.sleep(1500); // wait long enough for the entry to expire
        for (int i = 0; i < 8; i++) {
            startThread(i);
        }
        // let the threads run
        latch.countDown();
    }

    private static void startThread(final int id) {
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println(Thread.currentThread().getName() + "...begin");
                    latch.await();
                    Stopwatch watch = Stopwatch.createStarted();
                    System.out.println("value..." + cache.get("name"));
                    watch.stop();
                    System.out.println(Thread.currentThread().getName()
                            + "...finish, cost time=" + watch.elapsed(TimeUnit.SECONDS));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        t.setName("thread-" + id);
        t.start();
    }
}
The output shows that only one thread actually loads the data from the database while the other threads wait (so every thread ends up spending about 2s on the get). For a given key, no matter how many concurrent requests arrive, Guava lets only one thread load the data.

But there is still a serious drawback: of the 8 threads above, one actually loads the data and the remaining 7 are blocked. It would be better if, when one thread is loading the data, the other threads noticed the load in progress and simply read the old value without blocking. Since this is a cache, serving slightly stale data is usually acceptable, and it improves the system's throughput.
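Guava in fact supports exactly this behavior: replace expireAfterAccess with refreshAfterWrite, and wrap the loader in CacheLoader.asyncReloading so the reload runs on a background executor. Then the first read after the refresh interval schedules a reload, and every reader (including the one that triggered it) keeps getting the old value until the new one is ready. A minimal sketch, reusing the 2s simulated query from above (the class name and sleep durations are illustrative):

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.UUID;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RefreshDemo {

    private static final ExecutorService pool = Executors.newSingleThreadExecutor();

    private static final LoadingCache<String, String> cache = CacheBuilder.newBuilder()
            // After 1s, the next read schedules a reload in the background;
            // until it finishes, all readers are served the old value.
            .refreshAfterWrite(1, TimeUnit.SECONDS)
            .build(CacheLoader.asyncReloading(new CacheLoader<String, String>() {
                @Override
                public String load(String key) throws Exception {
                    Thread.sleep(2000); // simulate a slow database query
                    return UUID.randomUUID().toString();
                }
            }, pool));

    public static void main(String[] args) throws Exception {
        cache.put("name", "aty");
        Thread.sleep(1500); // the refresh interval has elapsed

        long start = System.nanoTime();
        // Triggers an async reload but returns the old value immediately
        String value = cache.get("name");
        long costMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("got '" + value + "' in " + costMs + "ms");

        Thread.sleep(2500); // let the background reload finish
        System.out.println("after reload: " + cache.get("name"));
        pool.shutdown();
    }
}
```

Note that without asyncReloading, the default reload runs synchronously, so the one thread that triggers the refresh still pays the 2s cost; only the other concurrent readers get the stale value without blocking.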