Apache HTTP Server (IV) - Cache

Author: yourtommy

The three-state RFC 2616 HTTP cache
The HTTP protocol includes built-in support for an in-line caching mechanism, and the mod_cache module can be used to take advantage of it. Unlike a simple two-state key/value cache, where content simply disappears when it expires, an HTTP cache includes a mechanism to revalidate stale content. An item in an HTTP cache can be in one of three states:
Fresh: content that is newer than its freshness lifetime is served directly from the cache, without contacting the origin server.

Stale: content that has outlived its freshness lifetime must be checked against the origin server before it can be served to the client. If the content has changed, the updated content is fetched from the origin server; if the origin server confirms it is unchanged, the cached content is marked fresh again and served. In some special cases the server may serve stale content to the client anyway, for example when the origin server returns a 5xx error, or when another request is already in the process of refreshing the same content. In these cases a Warning header is added to the response.

Non Existent: when the cache runs out of space it may delete content, fresh or stale, to free room, and this can happen at any time.
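For reference, the two special cases above can be controlled with the CacheStaleOnError and CacheLock directives of mod_cache. A minimal sketch, assuming mod_cache is already loaded; the values shown are illustrative (CacheLockPath and CacheLockMaxAge are simply the documented defaults):

    # serve stale content (with a Warning header) instead of forwarding
    # a 5xx error from the origin server
    CacheStaleOnError On

    # allow only one request at a time to refresh a stale entry; other
    # requests are served the stale copy while the refresh is in flight
    CacheLock On
    CacheLockPath "/tmp/mod_cache-lock"
    CacheLockMaxAge 5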
Depending on the value of the CacheQuickHandler directive (On or Off), mod_cache can hook into the server at one of two points:

Quick handler phase: a very early point in request processing, immediately after the request has been parsed. If the content is found in the cache it is returned at once, so that almost all of the remaining request processing is bypassed. This mode is the fastest, because most of the server's work is skipped, but it also bypasses authentication and authorization.

Normal handler phase: a late point in request processing, after all the request phases have completed. This mode offers maximum flexibility, because caching happens at a precisely controlled point in the filter chain, and the cached content can still be filtered or personalized before it is returned to the client.

If the URL is not found in the cache, mod_cache adds a filter to the filter stack and then steps aside to let normal request processing continue. If the content turns out to be cacheable, it is saved for future requests; otherwise it is simply passed through. If stale content is found in the cache, mod_cache converts the request into a conditional request. If the origin server returns a normal response, that response is cached in place of the stale content. If the origin server returns 304 Not Modified, the cached content is marked fresh again and served by the filter without being saved anew.

If a virtual host has many aliases, setting UseCanonicalName to On can significantly improve the cache hit rate. Because the host name is part of the cache key, this setting prevents the different aliases from producing separate cache entries.

Content that is meant to be cached should declare an explicit freshness lifetime, either with the max-age or s-maxage fields of the Cache-Control header or with an Expires header. A freshness lifetime defined by the server can also be overridden by a Cache-Control header in the client's request; in that case the shorter of the two lifetimes is used.
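A minimal configuration sketch tying these pieces together; the module paths, content type, and lifetime are assumptions for illustration rather than values taken from the text above:

    # enable disk-backed caching for the whole site
    LoadModule cache_module modules/mod_cache.so
    LoadModule cache_disk_module modules/mod_cache_disk.so
    LoadModule expires_module modules/mod_expires.so
    CacheEnable disk /

    # On (the default) serves hits from the quick handler phase and bypasses
    # authentication and authorization; Off uses the normal handler phase
    CacheQuickHandler On

    # use a single canonical host name in the cache key
    UseCanonicalName On

    # declare an explicit freshness lifetime for one content type
    ExpiresActive On
    ExpiresByType image/png "access plus 1 day"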
When neither the request nor the response specifies a freshness lifetime, a default lifetime is used; it is one hour by default and can be changed with the CacheDefaultExpire directive. If the response provides a Last-Modified header but no Expires header, mod_cache deduces a lifetime from it, controlled by the CacheLastModifiedFactor directive. For local content, or for remote content that does not define an Expires header, mod_expires can be used to tune the lifetime by adding max-age and Expires. The maximum lifetime can be capped with the CacheMaxExpire directive.

If the server is designed to return different responses depending on request headers, for example serving the same URL in several languages, the HTTP caching mechanism can cache multiple variants of the same page under the same URL. The origin server does this by adding a Vary header that names the request headers used to distinguish the variants, for example Vary: negotiate,accept-language,accept-charset. In that case mod_cache serves cached content only when the accept-language and accept-charset headers match those of the original request. Several variants of the same content can exist side by side, and mod_cache uses the values of the request headers listed in the Vary header to decide which variant to return to the client.

mod_cache relies on a backend storage implementation to manage the cache; mod_cache_disk provides caching to disk. It is usually configured like this:

    CacheRoot "/var/cache/apache/"
    CacheEnable disk /
    CacheDirLevels 2
    CacheDirLength 1

It is worth noting that because the cache lives on local disk, the operating system's own memory cache applies as well: although the entries are stored on disk, frequently accessed entries are likely to be served from memory.

The two-state key/value shared object cache

Apache HTTP Server also provides a low-level shared object cache for information such as SSL sessions and authentication credentials, behind the socache interface. Each backend is provided by an additional module: mod_socache_dbm, a DBM-based shared object cache; mod_socache_dc, a shared object cache based on the distcache distributed session caching library; mod_socache_memcache, a shared object cache based on the memcached distributed memory cache; and mod_socache_shmcb, a shared object cache based on shared memory. The mod_authn_socache module allows authentication results to be cached, relieving the load on authentication backends. The mod_ssl module uses the socache interface to provide a session cache and a stapling cache.
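As a concrete illustration of the socache interface, here is a hedged sketch that uses the shared-memory backend for the SSL session cache and caches file-based authentication results with mod_authn_socache. The paths, timeouts, and password file are assumptions, and mod_ssl, mod_socache_shmcb, mod_auth_basic, mod_authn_file, and mod_authn_socache are assumed to be loaded:

    # SSL session cache stored in a shared-memory cyclic buffer (shmcb)
    SSLSessionCache "shmcb:/var/run/apache2/ssl_scache(512000)"
    SSLSessionCacheTimeout 300

    # cache successful lookups from the file provider for five minutes,
    # relieving the authentication backend
    <Directory "/usr/local/apache2/htdocs/private">
        AuthType Basic
        AuthName "Private area"
        AuthUserFile "/usr/local/apache2/conf/passwords"
        AuthBasicProvider socache file
        AuthnCacheProvideFor file
        AuthnCacheTimeout 300
        Require valid-user
    </Directory>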
Specialized file caching

On platforms where the file system is slow, or where opening a file is expensive, there is the option of pre-loading files into memory at server startup. On operating systems where opening files is slow, there is the option of opening the files at startup and caching the file handles in memory. These options help on systems where access to static files is slow.

Opening a file can introduce latency, especially on network file systems. httpd can avoid this delay by caching open file descriptors for commonly served files. httpd currently provides one implementation of file-handle caching, in mod_file_cache, which maintains a table of open file descriptors rather than caching file contents. This kind of caching is configured with the CacheFile directive, which tells httpd to open the file at startup and reuse the file handle for all subsequent accesses to that file, for example:

    CacheFile /usr/local/apache2/htdocs/index.html

Although CacheFile does not cache the file contents themselves, it does mean that changes to or deletion of the file while httpd is running will not be noticed; the server keeps using the file as it was opened at startup.

The fastest responses are served from system memory; reading files from disk, let alone from a remote network, is orders of magnitude slower. The operating system's file cache is maintained automatically by the operating system. In addition, mod_file_cache provides the MMapFile directive to map the contents of static files into memory at startup, for example:

    MMapFile /usr/local/apache2/htdocs/index.html

As with CacheFile, modifications made to the file after httpd has started are not noticed. Because memory is limited, do not overuse MMapFile: each httpd child process copies this memory, so make sure the mapped files are not so large that the system starts swapping.

Security considerations

CacheQuickHandler is On by default, and mod_cache has no way of knowing whether cached content is authorized; as long as a cached entry has not expired, it will be returned. Caching can be avoided for sensitive areas with the CacheDisable directive or with mod_expires. With CacheQuickHandler set to Off, the full request processing is performed and the usual security model remains intact.

Because responses delivered to end users can come from the cache, the cache itself becomes a target for anyone who wants to deface or interfere with the content. It is important to remember that the cache must always be writable by the user that httpd runs as, which is the opposite of the usual advice of keeping content unwritable by the Apache user. If the Apache user is compromised, for example through a flaw in a CGI process, the cache may become a target; with mod_cache_disk it is relatively easy to insert or modify cache entries, which makes this a more likely risk than other kinds of attack. When using mod_cache_disk, always keep httpd up to date with security fixes and, where possible, use suEXEC to run CGI processes as a user other than the Apache user.

When httpd runs as a caching proxy server there is an additional potential danger known as cache poisoning, in which an attacker causes the proxy to cache the wrong content for the origin server. For example, if the DNS server used by the machine running httpd is vulnerable to attack, an attacker may be able to control where httpd connects when it requests content from the origin server. Another example is the so-called request smuggling attack.

The Vary mechanism, which lets multiple variants of the same URL exist side by side, can also become a problem. When a Vary header names a header such as User-Agent, which in practice takes a very wide range of values, the same URL may end up with tens of thousands of variants or more. In other cases the URL of a particular resource is modified on every request, typically by appending a "cachebusting" string to it; if such content is declared cacheable, these entries crowd out useful cache entries. The CacheIgnoreURLSessionIdentifiers directive can help avoid this kind of problem.
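Finally, a short sketch of the mitigations mentioned above; the location and the session identifier names are examples only:

    # keep an authenticated area out of the cache while the quick handler is on
    CacheDisable /private

    # ignore these session identifiers in the URL when building the cache key,
    # so per-session URLs do not crowd out useful entries
    CacheIgnoreURLSessionIdentifiers jsessionid sid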
