Nginx: Optimizing Buffering and Caching


One problem with reverse proxying is that proxying a large number of users adds overhead to the server process. In most cases, this can largely be mitigated by using Nginx's buffering and caching capabilities.

When proxying to another server, the speed of two different connections affects the client's experience:

The connection from the client to the Nginx proxy.

The connection from the Nginx proxy to the back-end server.

Nginx can adjust its behavior to optimize either of these connections.

Without buffering, data from the proxied server is relayed to the client immediately as it arrives. If the clients are assumed to be fast, buffering can be turned off so the data reaches the client as soon as possible. With buffering, the Nginx proxy temporarily stores the backend's response and then feeds the data to the client as needed. If the client is slow, this allows the Nginx server to close the connection to the backend early; it can then deliver the data to the client at whatever pace the client can handle.
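For instance, a minimal sketch of turning buffering off for a path where clients are known to be fast (the /stream path and the "backend" upstream name are hypothetical):

# server context; /stream and "backend" are hypothetical names
location /stream {
    proxy_buffering off;    # relay bytes to the client as they arrive from the backend
    proxy_pass http://backend;
}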

Nginx buffers by default, because clients often have widely varying connection speeds. We can adjust the buffering behavior with the directives below, which can be set in the http, server, or location context. Keep in mind that the size directives are configured per request, so raising them beyond what you need can hurt performance when there are many concurrent client requests:

proxy_buffering: This directive controls whether buffering is enabled. By default, its value is "on".

proxy_buffers: This directive controls the number (the first argument) and size (the second argument) of buffers for proxied responses. The default is eight buffers, each the size of one memory page (4k or 8k). Increasing the number of buffers lets you buffer more information.

proxy_buffer_size: The headers of the backend response are buffered separately from the rest of the response. This directive sets the size of the buffer for that portion. By default, it is the same size as a single buffer from proxy_buffers, but since it holds only header information, it can usually be set to a lower value.

proxy_busy_buffers_size: This directive sets the maximum size of buffers that can be marked "client-ready", and therefore busy. While a client can read data from only one buffer at a time, buffers are placed in a queue to be sent to the client in batches. This directive controls the total size of buffer space allowed to be in this state.

proxy_max_temp_file_size: The maximum size, per request, of a temporary file on disk. Temporary files are created when the upstream response is too large to fit in the buffers.

proxy_temp_file_write_size: The amount of data Nginx writes to the temporary file at one time when the proxied server's response is too large for the configured buffers.

proxy_temp_path: The path on disk where Nginx stores temporary files when the upstream server's response does not fit in the configured buffers.

As you can see, Nginx provides quite a few directives for adjusting buffering behavior. Most of the time you will not have to worry about them, but adjusting some of these values can be useful. Probably the most useful to adjust are proxy_buffers and proxy_buffer_size.

An example (server context; the proxy_buffer_size and proxy_buffers values shown are illustrative, not defaults):

# server context
proxy_buffering on;
proxy_buffer_size 1k;
proxy_buffers 24 4k;
proxy_busy_buffers_size 8k;
proxy_max_temp_file_size 2048m;
proxy_temp_file_write_size 32k;
proxy_pass http://example.com;

Configure the proxy cache to reduce response times

While buffering can help free up the backend server to handle more requests, Nginx also provides a way to cache content from the backend server, eliminating the need to connect to the upstream at all for many requests.

Configuring the proxy cache

To set up a cache for proxied content, we can use the proxy_cache_path directive. This creates an area where data returned from the proxied servers can be kept. The proxy_cache_path directive must be set in the http context.

In the example below, we will configure this and a few related directives to set up our caching system.

# http context
proxy_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=backcache:8m max_size=50m;
proxy_cache_key "$scheme$request_method$host$request_uri$is_args$args";
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;

With the proxy_cache_path directive, we first defined the directory on the filesystem where the cache should be stored. In this example, we chose the /var/lib/nginx/cache directory. If this directory does not exist yet, you can create it with the correct permissions and ownership:

sudo mkdir -p /var/lib/nginx/cache
sudo chown www-data /var/lib/nginx/cache
sudo chmod 700 /var/lib/nginx/cache

The levels= parameter specifies how the cache will be organized. Nginx creates a cache key by hashing the key value (configured below). The levels we selected above dictate that a single-character directory (the last character of the hashed value) will be created, with a two-character subdirectory inside it (taken from the next two characters from the end of the hash). You usually will not have to concern yourself with this detail, but it helps Nginx quickly find the relevant values.
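As an illustration (the hash below is purely an example value):

# for a key whose MD5 hash is b7f54b2df7773722d382f4809d65029c,
# levels=1:2 places the cached file at:
#   /var/lib/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c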

The keys_zone= parameter defines the name of this cache zone, which we have called backcache. This is also where we define how much metadata to store. In this case, we are storing 8 MB of keys; Nginx can store about 8,000 entries per megabyte. The max_size parameter sets the maximum size of the actual cached data.
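Putting rough numbers on those parameters (a sizing sketch based on the figures above):

# keys_zone=backcache:8m -> 8 MB of shared memory for keys;
#   at ~8,000 entries per MB, metadata for roughly 64,000 cached responses
# max_size=50m           -> at most 50 MB of cached response data on disk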

The other directive we used above is proxy_cache_key. This sets the key that will be used to store cached values; the same key is used to check whether a request can be served from the cache. We set it to a combination of the scheme (http or https), the HTTP request method, and the requested host and URI.
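For example, a request with no query string, such as GET http://example.com/blog, would produce the key below (note that $request_uri already contains any query string, so $is_args$args repeats it when one is present):

# key for "GET http://example.com/blog" under the proxy_cache_key above:
#   "httpGETexample.com/blog"
# the MD5 of this string determines the on-disk path described earlier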

The proxy_cache_valid directive can be specified multiple times. It allows us to configure how long to store values depending on the status code. In our example, we store successful (200) and redirect (302) responses from the backend for ten minutes, and expire 404 responses every minute.
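If you also want a catch-all, proxy_cache_valid accepts the special any parameter for status codes not matched elsewhere; a sketch, with the one-minute duration chosen arbitrarily:

# http context; "any" matches all remaining status codes (duration is illustrative)
proxy_cache_valid any 1m;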

Now that we have configured the cache zone, we still need to tell Nginx when to use it.

In the locations where we proxy to a backend, we can configure the use of this cache:

# server context
location /proxy-me {
    proxy_cache backcache;
    proxy_cache_bypass $http_cache_control;
    add_header X-Proxy-Cache $upstream_cache_status;
    proxy_pass http://backend;
}

With the proxy_cache directive, we specify that the backcache zone should be used for this location. Nginx will check this zone for a valid entry before going to the backend.

The proxy_cache_bypass directive above is set to the $http_cache_control variable. This variable indicates whether the client is explicitly requesting a fresh, non-cached version of the resource. Setting this directive allows Nginx to handle these types of client requests correctly; no further configuration is required.

We also added an extra header called X-Proxy-Cache, set to the value of the $upstream_cache_status variable. This lets us see whether a request resulted in a cache hit, a cache miss, or an explicit bypass. This is especially valuable for debugging, and is useful information for clients as well.
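A quick way to verify the behavior (assuming the proxy is reachable at localhost): the first request should report a miss, a repeat should report a hit, and a forced refresh should report a bypass:

curl -s -D- -o /dev/null http://localhost/proxy-me | grep -i x-proxy-cache   # MISS
curl -s -D- -o /dev/null http://localhost/proxy-me | grep -i x-proxy-cache   # HIT
curl -s -D- -o /dev/null -H "Cache-Control: no-cache" http://localhost/proxy-me | grep -i x-proxy-cache   # BYPASS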

Considerations for caching results

Caching can greatly improve the performance of your proxy server. However, there are some considerations to keep in mind when configuring a cache.

First, any user-related data should not be cached, as this could result in one user's data being presented to another user. If your site is completely static, this is probably not a concern.

If your site has some dynamic elements, you will have to account for this. How you do so depends on the backend application or server that handles the processing. For private content, you should set the Cache-Control header to "no-cache", "no-store", or "private", depending on the nature of the data:

no-cache:

In a request: tells caches that the request must be forwarded to the origin server, and any cache holding a copy must revalidate it with the origin first (end-to-end revalidation).

In a response: allows caches to store a copy, but forces them to check that copy's freshness with the origin every time before serving it; once freshness is confirmed, the cached copy may be used as the response. no-cache can also name specific header fields, as in no-cache=Set-Cookie, which tells caches not to reuse the cached Set-Cookie header and to obtain a fresh value instead; the rest of the response may still be served from cache.

no-store: Indicates that the received data should never be cached. This is the safest option for private data, as it means the data must be retrieved from the server every time.

private: Indicates that shared cache spaces must not cache this data. This can be useful to allow the user's browser to cache the data while telling intermediate proxy servers not to consider the data valid for subsequent requests.

public: Indicates that the response is public data that can be cached at any point in the connection.

A related setting that controls this behavior is the max-age directive of Cache-Control, which indicates the number of seconds any resource should be cached for.

Setting these headers correctly, depending on the sensitivity of your content, will help you take advantage of caching while keeping your private data safe and your dynamic data fresh.

If your backend also uses Nginx, you can set the max-age portion of Cache-Control using the expires directive:

location / {
    expires 60m;
}

location /check-me {
    expires -1;
}

In the example above, the first block allows content to be cached for an hour. The second block sets the Cache-Control header to "no-cache". To set other values, you can use the add_header directive, like this:

location /private {
    expires -1;
    add_header Cache-Control "no-store";
}

