I. How cache works
The principle of caching is that when the CPU reads a piece of data, it first looks in the cache. If the data is found there, it is read immediately and handed to the CPU for processing. If it is not found, the data is read from main memory at a comparatively slow speed and handed to the CPU, and at the same time the block containing that data is transferred into the cache, so that later reads of the same data can be served from the cache at high speed.
This mechanism gives CPU cache reads a very high hit rate (around 90%): roughly 90% of the data the CPU needs next is found in the cache, and only about 10% has to be fetched from main memory. This greatly reduces the time the CPU spends reading memory directly, so the CPU rarely has to wait for data. In general, the CPU reads from the cache first and from main memory second.
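The lookup-then-fallback flow just described can be sketched in a few lines of Java. The class name and the `slowLoad` callback below are illustrative inventions, not a real CPU or library API; the point is only the miss-fill-hit pattern:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal read-through cache sketch: look in the fast store first,
// fall back to the slow store on a miss, and remember the result.
class ReadThroughCache<K, V> {
    private final Map<K, V> fastStore = new HashMap<>();
    private final Function<K, V> slowLoad; // stands in for the slow main-memory read

    ReadThroughCache(Function<K, V> slowLoad) {
        this.slowLoad = slowLoad;
    }

    V get(K key) {
        V value = fastStore.get(key);
        if (value == null) {              // cache miss: pay the slow access once
            value = slowLoad.apply(key);
            fastStore.put(key, value);    // subsequent reads hit the cache
        }
        return value;
    }
}
```

After the first (slow) read of a key, every later read of the same key is served from the fast store, which is exactly why the hit rate climbs for repeatedly accessed data.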
RAM is the opposite of ROM: RAM loses its contents when power is removed, while ROM retains them. There are two kinds of RAM: static RAM (SRAM) and dynamic RAM (DRAM). SRAM is much faster, but main memory is generally DRAM. So, to speed up a system, is it enough simply to enlarge the cache, on the theory that the bigger the cache, the more data it holds and the faster the system? The cache is usually SRAM, which is very fast, but its integration density is low (for the same storage capacity, SRAM takes roughly 6 times the volume of DRAM) and its price is high (about 4 times that of DRAM for the same capacity), so endlessly expanding the static cache is unwise. There is a compromise that still improves performance: instead of enlarging the static cache, add some high-speed dynamic RAM. This high-speed DRAM is faster than conventional DRAM but slightly slower than SRAM. The original static cache is called the level-1 cache, and the added dynamic RAM is called the level-2 cache.
To understand the basic concept of a cache system, let's explain it through a simple librarian example. Imagine a librarian sitting behind a desk. His job is to fetch the books you want to borrow. For simplicity, assume you are not allowed to retrieve books yourself but must ask the librarian to fetch them from the shelves in the stacks (the Library of Congress in Washington, D.C. works this way). First, consider a librarian without a cache.
The first customer arrives. He wants to borrow Moby Dick. The librarian finds the book in the stacks, returns to the desk, and hands it over. A while later, the customer comes back and returns the book. The librarian accepts it, carries it back to the stacks, then returns to the desk to wait for the next customer. Suppose the next customer also wants Moby Dick, the very book just returned. The librarian has to walk back to the stacks to fetch the book he just shelved. Working this way, he must return every book to the stacks, even highly popular titles with a high borrowing rate. Is there a way to make the librarian more efficient?
Of course: we can give the librarian a cache. In the next section we continue with the same example, but now the librarian uses a caching system.
We give the librarian a backpack that can hold ten books (in computer terms, the librarian now has a ten-book cache). He can use it to hold up to ten of the books customers return to him. Let's rerun the earlier scenario, this time with the improved, cached workflow.
A new day begins, and the librarian's backpack is empty. The first customer arrives and wants Moby Dick. There is no clever shortcut here: the librarian must go to the stacks for the book and hand it over. Later, the customer returns it, but instead of carrying it back to the stacks, the librarian puts it in the backpack and keeps serving readers (checking first whether the backpack is already full; more on that later). Another customer arrives asking for Moby Dick. Before heading to the stacks, the librarian checks the backpack, and there it is! All he has to do is take it out and hand it over. Because no trip to the stacks is needed, the customer is served much faster.
What if the requested book is not in the cache (the backpack)? In that case, the librarian with a cache is slightly less efficient than one without, because he spends extra time checking the backpack first. A major challenge in cache design is minimizing the cost of this lookup, and modern hardware has reduced it to nearly zero. Even in our simple librarian example, the lookup latency is so small that it hardly matters: because the cache is tiny (ten books), the time spent discovering that a book is not in the backpack is only a minute fraction of a round trip to the stacks.
This example illustrates several important aspects of caching:
- Caching uses a fast but small memory to speed up a slow but large memory.
- When using a cache, you must first check whether the needed item is in it. If it is, this is called a cache hit; if not, a cache miss, and the computer must wait for a round trip to the large, slow memory.
- Even the largest cache is far smaller than the large storage area behind it.
- Caches can have multiple levels. In the librarian example, the backpack is the small, fast storage and the stacks are the large, slow storage; that is a one-level cache. We could add a level between them, say a shelf behind the desk that holds one hundred books: the librarian checks the backpack first, then the shelf, and only then the stacks. That constitutes a two-level cache.
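The backpack/shelf/warehouse arrangement can be sketched as a two-level lookup. The names and the promote-on-read policy below are illustrative assumptions (real multi-level caches differ in their promotion and eviction rules):

```java
import java.util.HashMap;
import java.util.Map;

// Two-level lookup sketch: check the small fast L1 ("backpack"),
// then the larger L2 ("shelf"), and only then the backing store ("warehouse").
class TwoLevelCache {
    private final Map<String, String> l1 = new HashMap<>();
    private final Map<String, String> l2 = new HashMap<>();
    private final Map<String, String> warehouse;

    TwoLevelCache(Map<String, String> warehouse) {
        this.warehouse = warehouse;
    }

    String get(String key) {
        String v = l1.get(key);
        if (v != null) return v;          // L1 hit: fastest path
        v = l2.get(key);
        if (v == null) {                  // L2 miss: go all the way to the warehouse
            v = warehouse.get(key);
            l2.put(key, v);
        }
        l1.put(key, v);                   // promote to L1 for the next lookup
        return v;
    }
}
```

Note the cost gradient: an L1 hit touches one map, an L2 hit touches two, and only a full miss pays the warehouse trip.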
To give a sense of how cache systems stack up, consider the storage hierarchy of a typical computer:
- L1 cache: memory accessed at full speed (about 10 nanoseconds, 4-16 KB)
- L2 cache: SRAM-type memory access (about 20 to 30 nanoseconds, capacity measured in KB)
- Main memory: RAM-type memory access (about 60 nanoseconds, 32 MB and up)
- Hard disk: mechanical device, slow (about 12 milliseconds, 1-10 gigabytes)
- Internet: extremely slow (between 1 second and 3 days, essentially unlimited size)
II. What are the Java cache technologies?
Below are several well-known open-source Java caching frameworks.
OSCache is a widely used, high-performance J2EE caching framework that can serve as a general caching solution for any Java application. Its features: it can cache any object, it offers a comprehensive API, and it supports permanent caching and writing the cache to disk at will.
Java Caching System (JCS) is a distributed caching system, a server-side Java application. It speeds up dynamic web applications by managing various kinds of dynamically cached data. Like most caching systems, JCS is designed for fast reads and comparatively slow writes.
Ehcache is a pure-Java, in-process cache. Its features: fast and simple, usable as a pluggable cache for Hibernate 2.1, with minimal dependencies and comprehensive documentation and tests.
ShiftOne is a Java library that enforces a series of strict object caching policies, acting as a lightweight framework for configuring how the cache behaves.
JBossCache is a replicated cache that lets you cache enterprise application data to improve performance. Cached data is replicated automatically, making it easy to run a cluster of JBoss servers.
III. Using Ehcache
Caching plays an important role in high-concurrency, high-performance web application systems. Below we focus on how to use Ehcache.
Ehcache is an open-source project hosted on SourceForge (http://ehcache.sourceforge.net/) and a simple, fast caching component implemented in pure Java. It supports memory and disk caches, several eviction algorithms including LRU, FIFO, and LFU, and distributed caching, and it can serve as Hibernate's cache plug-in. It also provides a filter-based cache that stores response content and applies gzip compression to speed up responses.
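The LRU policy mentioned above can be illustrated with nothing but the JDK. This is a generic sketch built on `LinkedHashMap`, not Ehcache's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU eviction sketch: a LinkedHashMap in access order evicts the
// least-recently-used entry once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxElements;

    LruCache(int maxElements) {
        super(16, 0.75f, true);  // true = iteration in access order, not insertion order
        this.maxElements = maxElements;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxElements;  // evict the eldest (least recently used) entry
    }
}
```

A FIFO variant would pass `false` for the access-order flag, evicting in insertion order instead; LFU needs an explicit frequency counter and cannot be expressed this compactly.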
(1) Basic API usage of ehcache
First, the CacheManager class. It is responsible for reading the configuration file (by default, ehcache.xml on the classpath) and for creating and managing Cache objects according to that configuration.
// Create a CacheManager from the default configuration file
CacheManager manager = CacheManager.create();

// Obtain the cache with the given name from the manager
Cache cache = manager.getCache("demoCache");

// Remove the cache with the given name
manager.removeCache("demoCache");
You can call manager.removalAll() to remove all caches, and shut the CacheManager down by calling manager.shutdown().
With a Cache object, you can perform the basic cache operations, for example:

// Add an element to the cache
Element element = new Element("key", "value");
cache.put(element);

// Retrieve an element from the cache
Element element = cache.get("key");
element.getValue();

// Remove an element from the cache
cache.remove("key");
You can cache data objects directly with the API above. Note that objects to be cached must be serializable.
The following sections cover integrating Ehcache with Spring and Hibernate.
First, put the ehcache.xml configuration file on the classpath, then add the following CacheManager definition to the Spring configuration file:
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
</bean>
Then configure demoCache:

<bean id="demoCache" class="org.springframework.cache.ehcache.EhCacheFactoryBean">
    <property name="cacheManager" ref="cacheManager"/>
    <property name="cacheName">
        <value>demoCache</value>
    </property>
</bean>
Next, write an interceptor class implementing the org.aopalliance.intercept.MethodInterceptor interface. With the interceptor, you can selectively configure which bean methods are cached. If an invoked method is configured for caching, the interceptor builds a cache key for the invocation and checks whether the method's result is already cached. If it is, the cached result is returned; otherwise the intercepted method is executed and its result cached for the next call. The code is as follows:
import java.io.Serializable;
import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.util.Assert;

public class MethodCacheInterceptor implements MethodInterceptor, InitializingBean {
    private Cache cache;

    public void setCache(Cache cache) {
        this.cache = cache;
    }

    public void afterPropertiesSet() throws Exception {
        Assert.notNull(cache, "A cache is required. Use setCache(Cache) to provide one.");
    }

    public Object invoke(MethodInvocation invocation) throws Throwable {
        String targetName = invocation.getThis().getClass().getName();
        String methodName = invocation.getMethod().getName();
        Object[] arguments = invocation.getArguments();
        Object result;
        String cacheKey = getCacheKey(targetName, methodName, arguments);
        Element element = null;
        synchronized (this) {
            element = cache.get(cacheKey);
            if (element == null) {
                // Cache miss: invoke the actual method and cache its result
                result = invocation.proceed();
                element = new Element(cacheKey, (Serializable) result);
                cache.put(element);
            }
        }
        return element.getValue();
    }

    private String getCacheKey(String targetName, String methodName, Object[] arguments) {
        StringBuffer sb = new StringBuffer();
        sb.append(targetName).append(".").append(methodName);
        if ((arguments != null) && (arguments.length != 0)) {
            for (int i = 0; i < arguments.length; i++) {
                sb.append(".").append(arguments[i]);
            }
        }
        return sb.toString();
    }
}
The synchronized (this) block provides the synchronization. Why is it needed, given that the Cache object's get and put operations are themselves synchronized? Suppose the cached data comes from a database query. Without this block, when a key is absent or its entry has expired, many threads accessing concurrently would all re-execute the intercepted method. Re-querying the database is expensive, and a burst of concurrent queries would put heavy pressure on the database server, so this synchronization is very important.
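A `synchronized (this)` block serializes all reads through one lock, including hits on unrelated keys. A common alternative, sketched here with plain JDK classes (this is not part of the article's code), locks per key via `ConcurrentHashMap.computeIfAbsent`, so a miss on one key does not block traffic on other keys while still running the expensive load at most once per key:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Stampede-protection sketch: computeIfAbsent guarantees the expensive load
// runs at most once per absent key, while other keys remain fully concurrent.
class StampedeSafeCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> expensiveLoad; // stands in for the intercepted method call

    StampedeSafeCache(Function<K, V> expensiveLoad) {
        this.expensiveLoad = expensiveLoad;
    }

    V get(K key) {
        return cache.computeIfAbsent(key, expensiveLoad);
    }
}
```

Unlike the interceptor above, this sketch has no expiry; it only demonstrates the per-key locking idea.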
Next, configure the interceptor and the beans:
<bean id="methodCacheInterceptor" class="com.xiebing.utils.interceptor.MethodCacheInterceptor">
    <property name="cache">
        <ref local="demoCache" />
    </property>
</bean>
<bean id="methodCachePointCut" class="org.springframework.aop.support.RegexpMethodPointcutAdvisor">
    <property name="advice">
        <ref local="methodCacheInterceptor" />
    </property>
    <property name="patterns">
        <list>
            <value>.*myMethod</value>
        </list>
    </property>
</bean>
<bean id="myServiceBean" class="com.xiebing.ehcache.spring.MyServiceBean"></bean>
<bean id="myService" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="target">
        <ref local="myServiceBean" />
    </property>
    <property name="interceptorNames">
        <list>
            <value>methodCachePointCut</value>
        </list>
    </property>
</bean>
MyServiceBean is the bean implementing the business logic, and the results of its myMethod() are to be cached. With this in place, calls to myServiceBean's myMethod() consult the cache first and only query the database on a miss. The AOP approach greatly improves the system's flexibility: by editing the configuration file, you decide which method results are cached, and all cache operations are encapsulated in the interceptor implementation.
CachingFilter
Spring AOP integration lets you flexibly cache the objects returned by methods. CachingFilter, by contrast, caches HTTP response content, so the cached data is coarser-grained, for example an entire page. It is easy to use and efficient, but its disadvantage is that it is less flexible and less reusable.
Ehcache implements filter caching with the SimplePageCachingFilter class, which extends CachingFilter and provides a default calculateKey() method that builds the key from the HTTP request URI and query string. You can also implement your own filter by extending CachingFilter and overriding calculateKey() to produce a custom key. In my project, many pages use Ajax. To keep the browser from caching the JS request data, every request carries a random parameter i. With SimplePageCachingFilter, the generated keys would then differ on every request and the cache would be meaningless, so in this case we override calculateKey().
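One way such an overridden key method might normalize the query string is to drop the cache-busting parameter before building the key. The class below is an illustrative string-level sketch, not the CachingFilter API (the real calculateKey() receives an HttpServletRequest), and the parameter name `i` is the assumption taken from the text:

```java
// Key-normalization sketch: strip the random "i" parameter from the query
// string so that otherwise-identical Ajax requests map to the same cache key.
class CacheKeys {
    static String calculateKey(String uri, String queryString) {
        if (queryString == null || queryString.isEmpty()) {
            return uri;
        }
        StringBuilder kept = new StringBuilder();
        for (String param : queryString.split("&")) {
            if (param.equals("i") || param.startsWith("i=")) {
                continue;  // skip the cache-busting random parameter
            }
            if (kept.length() > 0) {
                kept.append('&');
            }
            kept.append(param);
        }
        return kept.length() == 0 ? uri : uri + "?" + kept;
    }
}
```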
To use SimplePageCachingFilter, first add the following to the ehcache.xml configuration file:
<cache name="SimplePageCachingFilter"
       maxElementsInMemory="10000"
       eternal="false"
       overflowToDisk="false"
       timeToIdleSeconds="300"
       timeToLiveSeconds="600"
       memoryStoreEvictionPolicy="LFU"/>
<!-- The name attribute must be SimplePageCachingFilter. -->

Then modify the web.xml file and add the filter configuration:

<filter>
    <filter-name>SimplePageCachingFilter</filter-name>
    <filter-class>net.sf.ehcache.constructs.web.filter.SimplePageCachingFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>SimplePageCachingFilter</filter-name>
    <url-pattern>/test.jsp</url-pattern>
</filter-mapping>
Next, write a simple test.jsp for testing. Within the 600-second lifetime of the cached page, the displayed time does not change on refresh. The code is as follows:

<%@ page import="java.util.Date" %>
<% out.println(new Date()); %>
CachingFilter gzip-compresses its output according to the Accept-Encoding header sent by the browser. In the author's tests, the gzipped data was about 1/4 of the original size and was served 4-5 times faster, so combining caching with compression is very effective. Two issues deserve attention when using gzip compression. First, the filter uses the system default encoding for gzip compression; for Chinese web pages using GBK encoding, set the operating system language to zh_CN.GBK, or garbled characters may appear. Second, by default CachingFilter decides whether to gzip based on the Accept-Encoding value in the request header sent by the browser. Although IE6/7 support gzip compression, they do not always include this parameter in the request. To gzip for IE6/7, you can extend CachingFilter with your own filter and override the acceptsGzipEncoding method in the concrete implementation.
Implementation reference:
protected boolean acceptsGzipEncoding(HttpServletRequest request) {
    final boolean ie6 = headerContains(request, "User-Agent", "MSIE 6.0");
    final boolean ie7 = headerContains(request, "User-Agent", "MSIE 7.0");
    return acceptsEncoding(request, "gzip") || ie6 || ie7;
}
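The kind of compression savings reported above is easy to check with the JDK's own gzip classes. This is a standalone sketch, unrelated to CachingFilter's internals:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Gzip sketch: compress a response body the way a caching filter would
// before storing or sending it.
class GzipDemo {
    static byte[] gzip(String body) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bytes)) {
            gz.write(body.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.toByteArray();
    }
}
```

Repetitive HTML markup compresses especially well, which is why full-page caches pair so naturally with gzip.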
Using Ehcache with Hibernate
Ehcache can serve as Hibernate's second-level cache. Add the following settings to hibernate.cfg.xml:
<!-- Enable the Hibernate second-level cache -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<!-- Configure the cache provider -->
<property name="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</property>
Then, for each domain class that needs caching, add an entry of the following form to its Hibernate mapping file:
<cache usage="read-write|nonstrict-read-write|read-only"/>

For example: <cache usage="read-write"/>
Alternatively, you can configure it in the hibernate.cfg.xml file in this form:
<class-cache usage="read-write" class="cn.wcy.shop.pojo.Goods"/>
<class-cache usage="read-write" class="cn.wcy.shop.pojo.Category"/>
<!--
    Each class that should use the second-level cache must be configured here,
    for example with usage="read-write".
    If read-only is configured, the session's delete and update become ineffective,
    while save and query still work.
    HQL statements are not affected by read-only; CRUD through HQL works normally.
    This configuration does not apply to HQL queries.
-->
Finally, add a cache configuration to the ehcache.xml configuration file:

<ehcache>
    <!-- When the in-memory cache is full, entries overflow to this temporary directory on disk -->
    <diskStore path="java.io.tmpdir"/>
    <!--
        maxElementsInMemory="10000": maximum number of objects held in memory
        eternal="false": whether cached objects live forever; usually configured as false
        timeToIdleSeconds: maximum idle time, in seconds, before an object expires
        timeToLiveSeconds: maximum lifetime, in seconds; an object not accessed
            within its active period is destroyed early
        overflowToDisk: whether entries may overflow from memory to disk
        memoryStoreEvictionPolicy: replacement policy when memory (or disk) is full:
            FIFO (first in, first out), LRU (least recently used, by time),
            LFU (least frequently used, by frequency)
    -->
    <defaultCache maxElementsInMemory="100"
                  eternal="false"
                  timeToIdleSeconds="200"
                  timeToLiveSeconds="120"
                  overflowToDisk="false"
                  diskPersistent="false"
                  memoryStoreEvictionPolicy="LRU"
                  diskExpiryThreadIntervalSeconds="120"/>
</ehcache>
Beyond its caching functions, Ehcache can monitor cache usage. In a running system, we care most about the memory occupied by each cache object and the cache hit rate. With this data, we can tune the cache and system configuration parameters to optimize performance. Ehcache provides convenient APIs for retrieving monitoring data. The main methods are:
// Number of elements in the cache
cache.getSize();
// Memory occupied by the cached objects
cache.getMemoryStoreSize();
// Number of cache read hits
cache.getStatistics().getCacheHits();
// Number of cache read misses
cache.getStatistics().getCacheMisses();
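From the hit and miss counters you can derive the figure usually watched in practice, the hit ratio. This is a trivial helper of my own, not an Ehcache method:

```java
// Hit-ratio sketch: hits / (hits + misses), guarding against the empty case.
class CacheStats {
    static double hitRatio(long hits, long misses) {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }
}
```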
Distributed cache

Ehcache supports distributed caching from version 1.2 onward. Distributed caching mainly solves the problem of keeping cached data synchronized across the different servers of a cluster. Add the following to the ehcache.xml configuration file:
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1, multicastGroupPort=4446"/>
<cacheManagerPeerListenerFactory class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"/>
In addition, each cache to be replicated needs a cacheEventListenerFactory:

<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
For example:

<cache name="demoCache" maxElementsInMemory="10000" eternal="true" overflowToDisk="true">
    <cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"/>
</cache>
Summary: Ehcache is an excellent Java cache implementation. It is simple, easy to use, and functionally complete, and it integrates easily with popular open-source frameworks such as Spring and Hibernate.
Using Ehcache in a website project can reduce the access pressure on the database server, increase the site's speed, and improve the user experience.