Nginx's built-in proxy cache
Nginx runs a separate cache loader process that scans the cache files on disk and builds a cache index in shared memory, and a cache manager process that expires stale entries, enforces the size limit, and performs other maintenance.
Definition (the proxy_cache_path directive can only appear in the http context):
proxy_cache_path /dev/shm/nginx/cache levels=1:2 keys_zone=one:10m inactive=5m loader_sleep=1m max_size=200m;
/dev/shm        # a memory-backed filesystem (tmpfs), so the cache is faster
levels          # at most three levels; 1:2 means two levels: first-level directory names are one character long, second-level names are two characters
keys_zone       # name and size of the shared-memory zone holding the cache metadata (one megabyte holds roughly 8,000 keys)
max_size        # maximum size of the content stored under /dev/shm, i.e. the size of the cached data
inactive        # cached items not accessed within this time are removed
loader_sleep    # pause between iterations of the cache loader while it rebuilds the in-memory index
Usage: the cache is typically enabled on the front-end proxy, with the back-end servers grouped into an upstream, so the cache works to best effect.
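The configs in this post reference an upstream named wxl. A minimal sketch of such a block, defined in the http context (the server addresses here are made-up placeholders, not from the original setup):

```nginx
# "wxl" matches the name used in the proxy_pass lines;
# the back-end addresses below are hypothetical examples.
upstream wxl {
    server 192.168.1.11:80;
    server 192.168.1.12:80;
}
```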
location / {
    root html;
    index index.html index.htm;
    proxy_pass http://wxl;
    proxy_cache one;          # use the keys_zone defined above
    proxy_cache_valid 1m;     # cache successful responses for 1 minute
}
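To check whether a given request was actually served from the cache, the $upstream_cache_status variable can be exposed as a response header. A small sketch extending the location above:

```nginx
location / {
    root html;
    index index.html index.htm;
    proxy_pass http://wxl;
    proxy_cache one;
    proxy_cache_valid 1m;
    # reports MISS, HIT, EXPIRED, etc. for each request
    add_header X-Cache-Status $upstream_cache_status;
}
```

Requesting the same URL twice with curl -I should then show X-Cache-Status: MISS on the first response and HIT on the second, as long as the entry has not expired.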
The cached file looks roughly like this:
# cat /dev/shm/nginx/cache/8/c5/8f800960e4ca2d295469ee9efa440c58
KEY: http://wxl/
HTTP/1.1 200 OK
Date: Sat, 02:54:16 GMT
Server: Apache/2.2.15 (Red Hat)
Last-Modified: Sat, 02:49:51 GMT
ETag: "68cc-14-5296a92e464c3"
Accept-Ranges: bytes
Content-Length: 20
Connection: close
Content-Type: text/html; charset=utf-8

server3.example.com
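The file name is the MD5 hash of the cache key, and with levels=1:2 the directory names are taken from the end of that hash, right to left (one character, then two), which is why the file lands under 8/c5/. A minimal Python sketch of the mapping (cache_path is a hypothetical helper written for illustration, not part of nginx):

```python
import hashlib
import os

def cache_path(root, key, levels=(1, 2)):
    """Map a cache key to its on-disk location the way nginx does:
    the file name is md5(key) in hex, and each directory level is a
    slice taken from the end of the hash, moving right to left."""
    name = hashlib.md5(key.encode()).hexdigest()
    dirs, pos = [], len(name)
    for n in levels:
        dirs.append(name[pos - n:pos])
        pos -= n
    return os.path.join(root, *dirs, name)
```

For example, a key hashing to ...440c58 is stored under 8/c5/, matching the path shown above.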
Caching with memcached
Frequently used data can also be cached in memcached. Its performance is good, and for common scenarios it is a solid choice.
Install the memcached service: yum install memcached
I use Python to talk to memcached, so install the client library as well: yum install python-memcached.noarch
server {
    listen 80;
    server_name www.wxl-dede.com;

    location / {
        root html;
        set $memcached_key "$uri";
        memcached_pass 127.0.0.1:11211;
        memcached_connect_timeout 5s;
        memcached_read_timeout 5s;
        memcached_send_timeout 5s;
        memcached_buffer_size 32k;
        error_page 404 502 504 = @fallback;
    }

    location @fallback {
        proxy_pass http://wxl;
    }
}
Explanation of some directives:
memcached_pass address[:port] | upstream;   # connect to memcached
memcached_connect_timeout time;             # connection timeout
memcached_send_timeout 5s;                  # timeout between two successive write operations from nginx to memcached; if no data is transmitted within this time, the connection is closed
memcached_read_timeout 5s;                  # the same, but between two successive read operations
memcached_buffer_size 32k;                  # size of the buffer nginx uses for the data received from memcached
Here we use a picture for the test:
>>> import memcache
>>> mc = memcache.Client(['127.0.0.1:11211'])
>>> f = open('/root/p.jpg', 'rb')   # open in binary mode
>>> data = f.read()
>>> mc.add('/pic', data)            # key must match the request URI
Visit http://www.wxl-dede.com/pic to fetch the picture from memcached.
Requests for other URIs miss the cache and go straight to the @fallback location.
It is important to note that in this setup Nginx only reads data; writing data is done by a back-end program. Other modules let Nginx write to memcached as well, for example the memc-nginx and srcache-nginx solutions, which are not discussed here.
Nginx caching [Proxy cache, Memcache]