Nginx Content Cache

Source: Internet
Author: User

Original address: http://nginx.com/resources/admin-guide/caching/


This chapter describes how to enable and configure caching of responses received from proxied servers. When caching is enabled, NGINX saves responses on disk and uses them to respond to clients without proxying the same request to the upstream server again.

Enabling Caching of Responses

To enable caching, configure the path to the cache and other parameters with the proxy_cache_path directive. Then place the proxy_cache directive in the context where caching should be enabled:

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m;
    server {
        proxy_cache one;
        location / {
            proxy_pass http://localhost:8000;
        }
    }
}

Note that the proxy_cache_path directive can be specified only at the http level. It has two mandatory parameters: the path on the file system where cached responses are stored, and the name and size of the shared memory zone defined by the keys_zone parameter. The name used in the proxy_cache directive must match the zone name defined by proxy_cache_path.

The shared memory zone is used to store metadata about cached items. However, its size does not limit the total size of the cached responses. The cached responses themselves are stored, together with a copy of the metadata, in specific files on the file system. You can limit the size of this file storage with the max_size parameter. However, the actual size of the file storage can temporarily exceed this limit until a process called the cache manager checks the cache size and removes the least recently used responses and their metadata.
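As a sketch, a max_size limit can be added to the earlier proxy_cache_path example (the 10g value here is purely illustrative, not from the original article):

```nginx
# keys_zone bounds the in-memory metadata zone;
# max_size bounds the on-disk cache file storage.
proxy_cache_path /data/nginx/cache keys_zone=one:10m max_size=10g;
```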

Caching Processes

There are two additional NGINX processes involved in caching: the cache loader and the cache manager.

The cache manager is activated periodically to check the state of the cache file storage. In particular, it removes the least recently used data when the size of the file storage exceeds the max_size parameter.

The cache loader is activated only once, right after NGINX starts. It loads the metadata about previously cached data into the shared memory zone. Loading the whole cache at once could consume considerable resources and slow NGINX's performance during the first minutes after startup. To avoid this, the cache loader works in iterations configured with parameters of the proxy_cache_path directive.

Each iteration lasts no longer than the loader_threshold value, specified in milliseconds (by default, 200). During one iteration the cache loader loads no more than loader_files items (by default, 100). The pause between iterations is set with loader_sleep, in milliseconds (by default, 50). For example, these parameters can be modified to speed up loading of the cache metadata:

proxy_cache_path /data/nginx/cache keys_zone=one:10m
                 loader_threshold=300 loader_files=200;

Specifying Which Requests to Cache

By default, NGINX caches all responses to requests made with the GET and HEAD methods, the first time such a response is received from a proxied server. NGINX uses the request string as a request's key (identifier). Whenever requests have the same key, they are considered equal and the same cached response is sent to the client. The proxy_cache_key directive defines how the key is calculated for a request and can be changed at the location, server, or http level:

proxy_cache_key "$host$request_uri$cookie_user";

It is possible to define the minimum number of times a request with the same key must be made before the response is cached, by using the proxy_cache_min_uses directive.
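A minimal sketch of this directive (the value 5 is illustrative, not taken from the original article):

```nginx
# Cache a response only after the same request has been received 5 times.
proxy_cache_min_uses 5;
```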

It is also possible to specify additional HTTP methods for which responses should be cached:

proxy_cache_methods GET HEAD POST;

This setting enables caching of responses to requests made with the GET, HEAD, or POST method.

Limiting or Bypassing Caching

By default, the time for which a response is cached is not limited. When the cache file storage exceeds its maximum size, a cached response is removed only if it has been used less recently than other cached items. Otherwise, the response can be kept in the cache indefinitely.

You can limit how long responses with specific status codes are considered valid by using the proxy_cache_valid directive:

proxy_cache_valid 200 302 10m;
proxy_cache_valid 404      1m;

In this example, responses with the 200 or 302 status code are valid in the cache for 10 minutes, and responses with the 404 code are valid for 1 minute. To set the caching time for all status codes, specify any as the first parameter:

proxy_cache_valid any 5m;

To define conditions under which the response is not taken from the cache (even if it exists in the cache), use the proxy_cache_bypass directive. Keep in mind that no conditions are specified by default. The directive may have one or several parameters, each of which may consist of a number of variables. If at least one parameter is not empty and does not equal "0", NGINX does not look up the response in the cache. For example:

proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;

To define conditions under which the response is not saved to the cache at all, use the proxy_no_cache directive. The conditions are specified by the same rules as for proxy_cache_bypass:

proxy_no_cache $http_pragma $http_authorization;

Combined Configuration Example

The configuration sample below combines some of the different caching options described above:

http {
    ...
    proxy_cache_path /data/nginx/cache keys_zone=one:10m
                     loader_threshold=300 loader_files=200
                     max_size=200m;
    server {
        listen 8080;
        proxy_cache one;
        location / {
            proxy_pass http://backend1;
        }
        location /some/path {
            proxy_cache_valid any 1m;
            proxy_cache_min_uses 3;
            proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
            proxy_pass http://backend2;
        }
    }
}

This example defines a virtual server with two locations that use the same cache but with different settings.

It is assumed that responses from the backend1 server rarely change, so a response can be cached the first time the request is received and held for as long as possible.

By contrast, responses from the backend2 server are highly volatile, so they are cached only after three occurrences of the same request and held for only one minute. Moreover, if a request satisfies the conditions of the proxy_cache_bypass directive, the cache is not searched for the response at all and NGINX immediately passes the request to the backend.
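As a sketch not found in the original article, caching behavior can be verified by exposing the built-in $upstream_cache_status variable in a response header; the header name X-Cache-Status is an arbitrary choice:

```nginx
server {
    listen 8080;
    proxy_cache one;
    # Hypothetical debugging aid: report the cache status
    # (MISS, HIT, BYPASS, EXPIRED, ...) to the client.
    add_header X-Cache-Status $upstream_cache_status;

    location / {
        proxy_pass http://backend1;
    }
}
```

Repeated requests for the same URL should then show the header change from MISS to HIT once the response has been cached.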




