1. Cache Introduction
1.1 Cacheable Objects
In theory, every layer of the Web's layered design can be cached, and any object on the Web can be cached.
- Caching HTTP request results: the browser cache, proxy caches, reverse proxy caches on the server side, and servlet filters that cache the rendered result page.
- Caching Java objects: for example, caching database query result objects.
1.2 Cache Media [Where the Data Is Stored]
At the hardware level there are really only two media: memory and hard disk (from the application layer's point of view; registers and other low-level storage are ignored here). In practice, however, cache media are usually classified from a technical perspective rather than by hardware, into three kinds: memory, hard disk files, and databases.
1.2.1 Memory
Putting the cache in memory is the fastest option: any program operates on memory far faster than on a hard disk. The trade-off is crash recovery. Data held in memory is non-persistent, so if the machine goes down and there is no backup on disk, the cached data is difficult or impossible to restore.
1.2.2 Hard Disk
In practice, many cache frameworks use memory and hard disk together. For example, when the allocated memory space fills up, data can be persisted to disk; alternatively, every entry can be written to both media (one copy in memory, one on disk), so a crash loses nothing. There are also caches that store their data directly on disk only.
1.2.3 Database
When databases are mentioned here, some readers may wonder: aren't we caching precisely to reduce the number of database queries and the computational load on the database? How can a database itself serve as a cache medium? The answer is that there are many kinds of databases. BerkeleyDB, for example, has no SQL engine and does not support SQL statements; it stores only key/value pairs, so it is very fast and can comfortably sustain a very high query rate on modern PC hardware.
1.3 Hit Rate
The hit rate is the ratio of cache lookups that return a correct (cached) result to the total number of cache lookups; the higher the ratio, the better the cache is being used. For example, if 80 out of 100 lookups are answered from the cache, the hit rate is 80%.
Hit rate is a central concern in caching. We would all like our hit rate to reach 100%, but reality usually falls short. Either way, the hit rate is the key indicator of how effective a cache is.
1.4 Maximum Number of Cached Elements
This is the maximum number of elements the cache can hold. Once the number of cached elements exceeds this value, the cache eviction policy is applied. Choosing a maximum that suits the scenario at hand can raise the hit rate and make the cache more effective.
1.5 Cache Eviction Policies
1.5.1 FIFO [First In, First Out]
The data that entered the cache first is evicted first when cache space runs out (that is, when the maximum element count is exceeded).
1.5.2 LFU [Least Frequently Used]
The elements used least often are evicted. This requires each cached element to carry a hit counter; when cache space runs out, the elements with the lowest hit counts are cleared.
1.5.3 LRU [Least Recently Used]
The elements used least recently are evicted. Each cached element carries a timestamp; when the cache is full and room must be made for new elements, the elements whose timestamps are furthest from the current time are cleared from the cache.
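To make the idea concrete, here is a minimal LRU sketch in Java built on LinkedHashMap's access ordering. It is illustrative only and not tied to any particular cache framework; the class name LruCache is made up for this example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal LRU cache: evicts the least recently accessed entry once capacity is exceeded. */
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxElements;

    public LruCache(int maxElements) {
        // accessOrder = true: iteration order follows access recency, not insertion order
        super(16, 0.75f, true);
        this.maxElements = maxElements;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after every put; returning true drops the least recently used entry
        return size() > maxElements;
    }
}
```

Real cache frameworks add eviction statistics, expiry, and thread safety on top of this basic policy.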
1.6 Local Cache vs. Remote Cache
The biggest advantage of a local cache is that the application and the cache live in the same process: cache lookups are very fast and involve no network overhead. For a single application, or for a cluster whose cache nodes do not need to notify one another, a local cache is the appropriate choice.
Local caches also have drawbacks. A local cache belongs to one application, so multiple applications cannot share it directly, and in an application cluster the problem becomes more obvious. Some cache components do let cluster nodes notify each other of cache updates, but the notification works by broadcast or by looping over the nodes; when the cache is updated frequently, the network I/O overhead becomes very high and, in severe cases, interferes with the application's normal operation. Moreover, if the cached data set is large, a local cache means every application instance holds its own full copy, which is plainly a waste of memory.
In such cases we usually choose a remote cache such as memcached. In a clustered or distributed environment every application can share the data held in memcached: the applications connect to it directly over TCP/IP sockets using the memcached protocol, so when one application updates a value in memcached, all the others immediately see the latest value. There is more network overhead, but this approach usually performs better than having local-cache nodes broadcast or loop their updates, and because only one copy of the data is kept, memory utilization improves as well.
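As a rough sketch of how an application talks to such a shared cache, the example below uses the spymemcached client; the host, port, key, and value are placeholders, and the client jar is assumed to be on the classpath.

```java
import java.net.InetSocketAddress;

import net.spy.memcached.MemcachedClient;

public class RemoteCacheDemo {
    public static void main(String[] args) throws Exception {
        // Connect to a memcached instance over TCP (host and port are placeholders)
        MemcachedClient client = new MemcachedClient(new InetSocketAddress("cache-host", 11211));

        // A value written by any application instance ...
        client.set("product:42", 3600, "Red Widget"); // key, expiry in seconds, value

        // ... is visible to every other instance that shares the same memcached server
        Object cached = client.get("product:42");
        System.out.println(cached);

        client.shutdown();
    }
}
```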
2. iBATIS Cache Introduction
2.1 What the iBATIS Cache Focuses On
The iBATIS cache focuses only on how the persistence layer caches query results.
2.2 How iBATIS Helps with Cache Management
The advantage of iBATIS is that the cache is managed through configuration files, which spares you most of the tedious work of manually managing cached results and their dependencies.
2.3 Differences Between the iBATIS Cache and Traditional O/RM Caches
iBATIS is built around mapping SQL statements to objects, rather than mapping database tables to objects; traditional O/RM tools focus on the table-to-object mapping.
A traditional O/RM cache maintains an OID (object identifier) for every object it manages, just as a database guarantees the uniqueness of each record in a table. This means that if two different results return the same object, that object is cached only once.
iBATIS works differently: it caches the execution results of SQL statements. The iBATIS cache does not deduplicate based on object identity; it caches the entire result of a statement regardless of whether identical objects already exist in the cache.
3. Configuring the iBATIS Cache
3.1 The cacheModel Tag
The <cacheModel> tag is used to configure the iBATIS cache. It has four attributes.
3.1.1 Attributes of the cacheModel Tag
- id [required]: gives the cache model a unique ID so that mapped query statements can reference it.
- type [required]: specifies the cache implementation. Valid values are MEMORY, LRU, FIFO, and OSCACHE; the attribute can also be set to the fully qualified class name of a custom CacheController implementation.
- readOnly [optional]: if true, the cache is used as a read-only cache, and the property values of objects read from it must not be modified.
- serialize [optional]: specifies whether a "deep copy" of the cached content is returned on each read.
The readOnly and serialize attributes are usually considered together.
3.1.2 Using readOnly and serialize Together
| readOnly | serialize | Result | Explanation |
| --- | --- | --- | --- |
| true | false | Good | Cached objects are retrieved as fast as possible; a shared instance of the cached object is returned, so modifying it by mistake can cause problems. |
| false | true | Good | Cached objects are still retrieved quickly; a deep copy of the cached object is returned. |
| false | false | Warning | With this combination the cache is tied to the session life cycle of the calling thread and cannot be used by other threads. |
| true | true | Bad | Semantically this combination behaves like readOnly=false & serialize=true and is otherwise meaningless. |
Table 1: Using the readOnly and serialize attributes together.
[Note] The default combination is readOnly=true and serialize=false.
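Putting the four attributes together, below is a minimal sketch of a cache model and a mapped statement that uses it; the ids and class names (productCache, getProductById, Product) are placeholders for this illustration.

```xml
<!-- Read-only cache returning shared instances: the default and fastest combination -->
<cacheModel id="productCache" type="LRU" readOnly="true" serialize="false"/>

<!-- A mapped statement opts in to caching by naming the cacheModel id -->
<select id="getProductById" parameterClass="int" resultClass="Product" cacheModel="productCache">
  select * from PRODUCT where PRODUCT_ID = #value#
</select>
```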
3.2 iBATIS Cache Model Types
3.2.1 MEMORY
The MEMORY cache is a reference-based cache (see java.lang.ref). Each object in the cache is held through a chosen reference type, which gives the garbage collector a hint about how to treat that object.
The MEMORY cache model is ideal for applications that care more about memory management than about object access patterns. Using STRONG, SOFT, and WEAK references, you decide which results should be retained longer than others.
| Value | Meaning |
| --- | --- |
| WEAK | The weak reference type discards cached objects quickly. It does not prevent objects from being reclaimed by the garbage collector; it merely provides a way to reach the cached object, which is removed at the first garbage collection pass. This is the default reference type for the MEMORY cache, and it is a good fit when all cached objects are accessed in a fairly uniform way. Because cached objects are discarded quickly, the cache is guaranteed not to exceed memory limits, but the database is accessed very frequently. |
| SOFT | The soft reference type also respects memory constraints and discards cached objects when necessary, but it keeps them as long as it can: the garbage collector reclaims them only when it determines that more memory is needed. Soft references likewise keep the cache within memory limits, and the database access frequency is lower than with WEAK. |
| STRONG | Cached objects are never discarded for memory reasons; they are cleared only when the configured flush interval is reached. STRONG caches should hold small, static objects that are accessed regularly. They cut database access the most, but risk exhausting memory as the amount of cached data grows. |
Table 2: Values of the reference-type property of the MEMORY cache.
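A sketch of a MEMORY model that chooses soft references; the id is a placeholder, and reference-type is the property name read by the iBATIS 2.x MEMORY controller (verify against your version).

```xml
<!-- Entries are held through soft references: kept while memory allows, reclaimed under pressure -->
<cacheModel id="lookupCache" type="MEMORY">
  <!-- STRONG, SOFT, or WEAK; WEAK is the default -->
  <property name="reference-type" value="SOFT"/>
</cacheModel>
```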
3.2.2 LRU
The LRU cache model manages the cache with a least-recently-used policy. Its internal mechanism keeps track, in the background, of which objects have been used least recently and discards them once the cache's size limit is exceeded. The size limit specifies the number of objects the cache may hold, so avoid putting objects that occupy a lot of memory into this kind of cache, or memory will run out quickly.
The LRU cache is a good choice whenever what to keep should be decided by how recently particular objects were accessed. It is commonly used for caching paged query results or keyword search results.
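A sketch of an LRU model with a size limit, for example for paged results; the id is a placeholder, and cache-size is the property name used in the iBATIS 2.x documentation (some versions also accept size).

```xml
<!-- Keep at most 500 result objects; the least recently used entries are evicted first -->
<cacheModel id="searchResultCache" type="LRU">
  <property name="cache-size" value="500"/>
</cacheModel>
```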
3.2.3 FIFO
The FIFO cache model uses a first-in, first-out management policy. It is a time-based strategy, suited to objects that are referenced heavily right after they are published and less and less often as time goes on, for example reports or stock quotes.
3.2.4 OSCACHE
To use the OSCACHE model you need the OSCache component: add the corresponding jar and its configuration file (oscache.properties) to the classpath.
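A minimal sketch, assuming the OSCache jar and oscache.properties are already on the classpath; the id is a placeholder.

```xml
<!-- Caching is delegated to the OSCache library -->
<cacheModel id="reportCache" type="OSCACHE"/>
```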
3.2.5 Custom Cache Models
You only need to implement the com.ibatis.sqlmap.engine.cache.CacheController interface and, in the configuration, set the type attribute to the implementation's fully qualified class name or a registered alias.
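A skeleton of such an implementation is sketched below. The method signatures follow the iBATIS 2.x CacheController interface as best I recall them (check the javadoc of your version), and the class name com.example.MapCacheController is a placeholder.

```java
package com.example;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import com.ibatis.sqlmap.engine.cache.CacheController;
import com.ibatis.sqlmap.engine.cache.CacheModel;

/** Trivial CacheController backed by a synchronized in-memory map (illustrative only). */
public class MapCacheController implements CacheController {

  private final Map<Object, Object> cache =
      Collections.synchronizedMap(new HashMap<Object, Object>());

  public void putObject(CacheModel cacheModel, Object key, Object object) {
    cache.put(key, object);
  }

  public Object getObject(CacheModel cacheModel, Object key) {
    return cache.get(key);
  }

  public Object removeObject(CacheModel cacheModel, Object key) {
    return cache.remove(key);
  }

  public void flush(CacheModel cacheModel) {
    cache.clear();
  }

  public void setProperties(Properties props) {
    // Receives the values declared with <property> inside the <cacheModel> tag
  }
}
```

The model is then wired in by setting type="com.example.MapCacheController" (see the <property> example in section 3.4).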
3.3 Cache Flushing
The <flushOnExecute> and <flushInterval> tags define the conditions that trigger a cache flush.
| Tag | Description |
| --- | --- |
| <flushOnExecute> | Names a mapped statement; every execution of that statement flushes the associated cache. |
| <flushInterval> | Defines a time interval at which the cache is flushed periodically. |
Table 3: The <flushOnExecute> and <flushInterval> tags.
| Attribute | Description |
| --- | --- |
| hours (optional) | Number of hours between cache flushes |
| minutes (optional) | Number of minutes between cache flushes |
| seconds (optional) | Number of seconds between cache flushes |
| milliseconds (optional) | Number of milliseconds between cache flushes |
Table 4: <flushInterval> attributes.
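A sketch that combines both flush triggers; the statement ids insertProduct and updateProduct are placeholders.

```xml
<cacheModel id="productCache" type="LRU">
  <!-- Flush everything every 12 hours -->
  <flushInterval hours="12"/>
  <!-- Also flush whenever one of these mapped statements runs -->
  <flushOnExecute statement="insertProduct"/>
  <flushOnExecute statement="updateProduct"/>
</cacheModel>
```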
3.4 Setting Properties of a Cache Model
Because a cache model is just a component plugged into the iBATIS framework (and may even be user-defined), there must be a way to supply arbitrary values to these components. The <property> tag does exactly that.
| Attribute | Description |
| --- | --- |
| name (required) | Name of the property being set |
| value (required) | Value of the property |
Table 5: <property> tag attributes.
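For example, arbitrary values can be handed to the hypothetical custom controller from section 3.2.5; the property names max-entries and log-evictions are invented for this sketch and arrive in the controller's setProperties() method.

```xml
<cacheModel id="customCache" type="com.example.MapCacheController">
  <!-- These name/value pairs are passed to the controller's setProperties() -->
  <property name="max-entries" value="500"/>
  <property name="log-evictions" value="true"/>
</cacheModel>
```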