Nginx cache configuration scenarios and resolving the related memory footprint problem


Five scenarios for Nginx caching
1. Traditional cache, method one (404)
This approach directs Nginx 404 errors to the back end, then uses proxy_store to save the page the back end returns.
Configuration:

location / {
    root /home/html/;                        # document root
    expires 1d;                              # page expiry time
    error_page 404 =200 /fetch$request_uri;  # send 404s to the /fetch location
    location /fetch/ {                       # 404s are redirected here
        internal;                            # this location cannot be reached directly from outside
        expires 1d;                          # page expiry time
        alias /home/html/;                   # proxy_store will save files under this directory
        proxy_pass http://www.jb51.net/;     # back-end upstream address; /fetch is proxied as well
        proxy_set_header Accept-Encoding ''; # keep the back end from returning compressed (gzip or deflate) content; saving compressed content causes trouble
        proxy_store on;                      # tell nginx to save the proxied response to disk
        proxy_temp_path /home/tmp;           # temp dir; should be on the same disk partition as /home/html
    }
}

Note that Nginx must have permission to write files under both /home/tmp and /home/html. On Linux, Nginx is usually configured to run as the nobody user, so chown both directories to nobody. You could also chmod them to 777, but every experienced system administrator will advise against using 777 casually.
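As a sketch of that setup (using /tmp/demo as a stand-in so the commands are safe to try anywhere; on a real host you would run the chown as root against /home/html and /home/tmp):

```shell
# Stand-in directories for /home/html and /home/tmp
mkdir -p /tmp/demo/home/html /tmp/demo/home/tmp
# On the real server, as root: chown -R nobody /home/html /home/tmp
# 755 gives the owner (nobody, after the chown) write access without resorting to 777:
chmod 755 /tmp/demo/home/html /tmp/demo/home/tmp
stat -c '%a' /tmp/demo/home/html    # shows the resulting mode
```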
2. Traditional cache, method two (!-f)
The principle is basically the same as the 404 redirect, but the configuration is somewhat more concise:

location / {
    root /home/html/;
    proxy_store on;
    proxy_set_header Accept-Encoding '';
    proxy_temp_path /home/tmp;
    if (!-f $request_filename)
    {
        proxy_pass http://www.jb51.net/;
    }
}

As you can see, this configuration saves quite a few lines compared with the 404 version. It uses !-f to test whether the requested file exists on the file system; if it does not, the request is passed to the back end via proxy_pass, and the response is again saved with proxy_store.
Both traditional caches share essentially the same advantages and disadvantages:
Disadvantage 1: dynamic links with parameters, such as read.php?id=1, are not supported. Nginx saves only the file name, so this link is stored on the file system simply as read.php, and a user requesting read.php?id=2 then gets the wrong result. Likewise, the site root in the form http://www.jb51.net/ and second-level directories such as http://www.jb51.net/download/ are not supported, because Nginx quite literally writes the requested path to the file system, and such a path is clearly a directory, so the save fails. These cases need hand-written rewrite rules to be saved correctly.
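Such a rewrite can be sketched roughly as follows; the read_N.html naming scheme is an invented example, not something from the original setup:

```nginx
# Hypothetical rewrite: give read.php?id=N a distinct flat file name so that
# proxy_store saves each variant separately (read_$1.html is an invented scheme;
# the proxied back end would need a matching reverse mapping).
if ($args ~ "^id=([0-9]+)$") {
    rewrite ^/read\.php$ /read_$1.html last;
}
```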
Disadvantage 2: Nginx has no internal mechanism for cache expiry or cleanup, so these cached files stay on the machine permanently; if there is a lot to cache, they will fill the entire disk. To deal with this, you can use a shell script to clean up periodically, and write dynamic programs such as PHP to do real-time updates.
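A minimal sketch of such a periodic cleanup script, meant to be run from cron; the cache path is an assumption, and the one-day threshold mirrors the expires 1d used above:

```shell
#!/bin/sh
# Sweep proxy_store's cache: delete files older than one day,
# then prune any directories left empty.
# CACHE_DIR is an assumed path; point it at your proxy_store root.
CACHE_DIR=${1:-/home/html/fetch}
find "$CACHE_DIR" -type f -mmin +1440 -delete
find "$CACHE_DIR" -mindepth 1 -type d -empty -delete
```

Run it hourly from cron, e.g. `0 * * * * /usr/local/bin/cache-sweep.sh /home/html/fetch`.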
Disadvantage 3: only the 200 status code can be cached, so 301/302/404 and other status codes returned by the back end are not cached. If a heavily accessed pseudo-static link is deleted, requests for it keep penetrating through to the back end, putting considerable pressure on it.
Disadvantage 4: Nginx does not automatically choose between memory and disk as the storage medium; everything is decided by configuration. Of course, current operating systems have OS-level file caching, so there is no need to worry about I/O performance under heavy concurrent reads.
These quirks of Nginx's traditional caching differ from dedicated caching software such as Squid, so they can also be considered strengths. In production it is often used as Squid's partner: links with query strings are hard for Squid to stop, whereas Nginx can keep them off the back end. For example, http://jb51.net/? and http://jb51.net/ are treated by Squid as two different links, causing two penetrations, while Nginx saves only once; and whether the link becomes http://jb51.net/?1 or http://jb51.net/?123, it cannot pass through the Nginx cache, effectively protecting the back-end host.
Nginx stores the link path quite literally on the file system, so for any given link you can easily inspect its cache state and content on the machine, and it cooperates easily with file tools such as rsync; it is a plain file-system structure.
Both traditional caches can store their files under /dev/shm on Linux, which I usually do, so the system's memory serves as the cache; with memory, cleaning up expired content is much faster. When using /dev/shm/, besides pointing the temp directory at the /dev/shm partition, if there are large numbers of small files and directories you also need to adjust the inode count and maximum capacity of the memory partition:
 

mount -o size=2500m -o nr_inodes=480000 -o noatime,nodiratime -o remount /dev/shm

The above command is for a machine with 3G of memory. Because /dev/shm defaults to half of system memory, i.e. 1500M, this command raises it to 2500M. The shm inode count may also be insufficient by default, but interestingly it can be adjusted arbitrarily; 480000 here is conservative, but it is basically enough.
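To make the enlarged /dev/shm survive a reboot, a matching /etc/fstab entry can be used; this line is a sketch using the same sizes as the command above:

```
tmpfs  /dev/shm  tmpfs  size=2500m,nr_inodes=480000,noatime,nodiratime  0 0
```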
3. Cache based on memcached
Nginx has support for memcached; the feature set is not especially strong, but the performance is still very good.
 

location /mem/ {
    if ($uri ~ "^/mem/([0-9a-zA-Z_]*)$")
    {
        set $memcached_key "$1";
        memcached_pass 192.168.1.2:11211;
    }
    expires 86400;
}

This configuration sends a request for http://jb51.net/mem/abc to memcached, fetching the data stored under the key abc.
Nginx currently has no mechanism for writing to memcached, so writing data into memcached has to be done by a dynamic language on the back end; you can use error_page 404 to direct cache misses to a back end that writes the data.
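A common sketch of that miss handling: let a memcached miss (which Nginx reports as 404) fall through to a back-end application that both renders the page and stores it in memcached. The @write_back name and the upstream are assumptions, not part of the original configuration:

```nginx
location /mem/ {
    set $memcached_key $uri;
    memcached_pass 192.168.1.2:11211;
    # memcached returns 404 on a miss; hand those (and errors) to the back end
    error_page 404 502 504 = @write_back;
}
location @write_back {
    # hypothetical app that generates the page and writes it into memcached
    # under the same key, so the next request is a cache hit
    proxy_pass http://www.jb51.net;
}
```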
4. Cache based on the third-party plug-in ncache
ncache is a good project developed by colleagues at Sina; using Nginx and memcached, it implements part of the caching functionality found in Squid. I have no experience using this plug-in myself; you can refer to:
http://code.google.com/p/ncache/
5. Nginx's newly developed proxy_cache feature
Starting with nginx-0.7.44, Nginx supports a more formal, Squid-like cache feature. It is still in the development stage and support is quite limited. This cache stores each link under a hash of its MD5-encoded URL, so it can support any link, and it also supports non-200 statuses such as 404/301/302.
Configuration:
First configure a cache space:


proxy_cache_path /path/to/cache levels=1:2 keys_zone=NAME:10m inactive=5m max_size=2m clean_time=1m;


Note that this directive goes outside the server block. levels specifies two levels of hash directories for the cache space: the first level is one character, the second two, so a saved file name will look like /path/to/cache/c/29/b7f54b2df7773722d382f4809d65029c. keys_zone gives this space a name, and the 10m means the zone is 10MB in size. inactive=5m means cached entries live for 5 minutes by default. max_size=2m means a single file larger than 2MB is not cached. clean_time=1m specifies cleaning the cache once a minute.

location / {
    proxy_pass http://www.jb51.net/;
    proxy_cache NAME;              # use the keys_zone named NAME
    proxy_cache_valid 200 302 1h;  # cache 200 and 302 responses for 1 hour
    proxy_cache_valid 301 1d;      # cache 301 responses for one day
    proxy_cache_valid any 1m;      # cache everything else for one minute
}

PS: the versions from 0.7.44 through 0.7.51 that support this cache all have stability problems, and accessing some links produces errors, so none of these versions should be used in production. The most stable version currently known in the nginx-0.7 line is 0.7.39. The stable 0.6.36 release is also a recent update; if your configuration uses none of 0.7's new tags and features, you can use 0.6.36 instead.

A general solution for the memory footprint of the Nginx cache
1. A few days ago one of our services was hammered with up to millions of requests per minute. We dealt with it using an Nginx cache, but because this service cannot be cached for long we set the expiry to 5s; the consequence is a huge number of small files being created and then quickly deleted.

2. Run

free -m

and you will find that used is 27G, yet the per-process memory visible in top does not add up to nearly that much.

Where did the memory go?

3. Consulting the numbers (cat /proc/meminfo), you will find:
Slab:         22464312 kB
SReclaimable: 16474128 kB (cached inodes and dentries that the kernel maintains but can release)
SUnreclaim:    5990184 kB
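A small sketch of pulling these fields out of /proc/meminfo programmatically; it is shown here against a saved sample so the numbers match the ones above, and on a live host you would point it at /proc/meminfo directly:

```shell
# meminfo_kb: print the kB value of one /proc/meminfo field from a given file.
meminfo_kb() { awk -v key="$1:" '$1 == key {print $2}' "$2"; }

# Demo against a saved sample (the live file is /proc/meminfo):
cat > /tmp/meminfo.sample <<'EOF'
Slab:           22464312 kB
SReclaimable:   16474128 kB
SUnreclaim:      5990184 kB
EOF
meminfo_kb SReclaimable /tmp/meminfo.sample   # prints 16474128
```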

4. Why is this memory not cleaned up automatically?
Kernel in one datacenter: Linux 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux (normal; memory does not climb toward 100%)
Kernel in another datacenter: Linux 2.6.32-279.el6.x86_64 #1 SMP Fri June 12:19:21 UTC x86_64 x86_64 x86_64 (memory not released)

5. Set a memory threshold via the following parameters:

sysctl -w vm.extra_free_kbytes=6436787
sysctl -w vm.vfs_cache_pressure=10000
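For a one-off cleanup instead of retuning, the reclaimable slab above (the cached dentries and inodes) can also be dropped directly through a standard kernel interface; this is a sketch of an alternative, not part of the original fix, and must be run as root (note that dropping caches on a busy box costs a burst of disk re-reads afterwards):

```
sync
echo 2 > /proc/sys/vm/drop_caches   # 2 = free reclaimable slab objects (dentries and inodes)
```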
