Nginx Static Cache + tmpfs High-Performance CDN Scheme


Matching different URLs to different backends

Suppose we want Nginx to route requests for different URL categories to different backends. First, a description of the requirement:

The domain name is www.xxx.com (the local server IP is 192.168.12.63).
The site has three directories, product, cart, and goods, each serving a different business. We want requests for each directory to be routed to a different backend group; for example, a request for www.xxx.com/product/xxx.html should go to the product group. The three routing rules are:

When accessing resources under /product, route to 127.0.0.1:8081 and 127.0.0.1:8082.
When accessing resources under /cart, route to 127.0.0.1:8083 and 127.0.0.1:8084.
When accessing resources under /goods, route to 192.168.12.34:8081 and 192.168.12.34:8082.
To meet these requirements we use the Nginx upstream module together with the rewrite and if directives.

First we configure three upstream groups, one for each business:



# backend for the product business
upstream product_tomcats {
    server 127.0.0.1:8081 weight=10;
    server 127.0.0.1:8082 weight=10;
}
# backend for the cart business
upstream cart_tomcats {
    server 127.0.0.1:8083 weight=10;
    server 127.0.0.1:8084 weight=10;
}
# backend for the goods business
upstream goods_tomcats {
    server 192.168.12.34:8081 weight=10;
    server 192.168.12.34:8082 weight=10;
}


With the upstreams configured, we match different directories to different backends. Each if block tests the request against a regular expression and, on a match, forwards the request to the corresponding upstream. The configuration is as follows:

Note: the $request variable holds the full request line: the HTTP method (GET, POST), the URL the browser requested, and the HTTP protocol version, for example:


GET /goods/2222.html HTTP/1.1

location / {
    proxy_cache cache_one;
    proxy_cache_key "$host:$server_port$request_uri";
    proxy_cache_valid 200 304 20m;
    rewrite /product/([0-9]+)\.html /index.jsp?id=$1 last;
    rewrite /cart/([0-9]+)\.html /index1.jsp?id=$1 last;
    rewrite /goods/([0-9]+)\.html /index2.jsp?id=$1 last;
    # if the request matches /product/, forward it to the product_tomcats cluster
    if ($request ~* .*/product/.*) {
        proxy_pass http://product_tomcats;
    }
    # if the request matches /cart/, forward it to the cart_tomcats cluster
    if ($request ~* .*/cart/.*) {
        proxy_pass http://cart_tomcats;
    }
    # if the request matches /goods/, forward it to the goods_tomcats cluster
    if ($request ~* .*/goods/.*) {
        proxy_pass http://goods_tomcats;
    }
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    add_header X-cache '$upstream_cache_status from $host';
}

In this way, upstream defines the backend cluster groups and the if regex matching assigns URLs to them, so different kinds of URLs are handled by different backends.
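As a quick sanity check of the matching logic, the routing can be mimicked in shell: the case patterns below play the role of the if regexes, and the upstream names are the ones defined earlier (a sketch of the decision logic only, not how nginx evaluates requests).

```shell
#!/bin/sh
# Map a request line to the upstream it would be routed to, mirroring
# the if ($request ~* ...) blocks in the config above.
route() {
  case "$1" in
    */product/*) echo "product_tomcats" ;;
    */cart/*)    echo "cart_tomcats"    ;;
    */goods/*)   echo "goods_tomcats"   ;;
    *)           echo "no_match"        ;;
  esac
}

route "GET /product/123.html HTTP/1.1"   # product_tomcats
route "GET /goods/2222.html HTTP/1.1"    # goods_tomcats
```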

Nginx Cache

Mount Memory File System

We use the tmpfs memory file system for the cache, so cache reads and writes avoid disk I/O and efficiency improves.
Mount tmpfs:
mount -t tmpfs -o size=<size> <custom name> <mount point>

Example:


mkdir /tmpfs
mount -t tmpfs -o size=500m my_tmpfs /tmpfs


Dynamically resize a mounted tmpfs:
mount -o remount,size=<new size> <mount point>
Example:
mount -o remount,size=1024m /tmpfs

Unmount a mounted tmpfs:

umount /tmpfs
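Note that a tmpfs mount disappears on reboot, so the cache directory must be recreated and remounted at boot. One way to do this (a sketch; the path and size follow the example above) is an /etc/fstab entry:

```conf
tmpfs  /tmpfs  tmpfs  defaults,size=500m  0  0
```

After adding the line, `mount -a` applies it without rebooting.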

Configure Nginx Cache

Nginx Cache Status:


MISS - not in the cache. The request is passed to the backend.
EXPIRED - expired. The request is passed to the backend.
UPDATING - expired. The stale response is served because the entry is currently being refreshed (proxy_cache_use_stale updating).
STALE - expired. The stale response is served from the cache (proxy_cache_use_stale).
HIT - served from the cache.
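The cache status ends up in the X-cache response header added by the add_header directive later in this article, so it can be checked with `curl -sI <url>`. The snippet below just demonstrates extracting it from a sample header block (the sample response is made up):

```shell
#!/bin/sh
# Extract the X-cache header value from response headers on stdin.
extract_cache_status() {
  awk -F': ' 'tolower($1) == "x-cache" { print $2 }'
}

# Sample headers as curl -sI might print them (hypothetical response):
printf 'HTTP/1.1 200 OK\nX-cache: HIT from www.xxx.com\n' | extract_cache_status
# HIT from www.xxx.com
```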

proxy_cache_key can be defined in two common ways; choose according to your requirements:

1. Use the URL the browser requested as the key

Domain name : port + the full browser address (including dynamic parameters, e.g. www.xxx.com:80/goods/1.html)

proxy_cache_key "$host:$server_port$request_uri";

2. Use the real internal URL as the key
Domain name : port + the real request address (the URI after rewrite) + ?parameters
(e.g. www.xxx.com:80/index.jsp?id=111)

proxy_cache_key "$host:$server_port$uri$is_args$args";

The first style is recommended.
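To make the difference concrete, here is how the two keys come out for one request from the earlier routing example (browser URL /goods/1.html, rewritten internally to /index2.jsp?id=1; the values are illustrative):

```shell
#!/bin/sh
# Simulate the nginx variables for one request (illustrative values).
host="www.xxx.com"; server_port="80"
request_uri="/goods/1.html"        # $request_uri: URI as the browser sent it
uri="/index2.jsp"; args="id=1"     # $uri/$args: URI and args after the rewrite

echo "style 1: $host:$server_port$request_uri"   # style 1: www.xxx.com:80/goods/1.html
echo "style 2: $host:$server_port$uri?$args"     # style 2: www.xxx.com:80/index2.jsp?id=1
```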


The following is an example configuration of the cache:


# log format
# $upstream_addr          address of the backend that handled the request
# $upstream_status        backend response status
# $upstream_cache_status  cache status
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent '
                '"$http_referer" "$http_user_agent" '
                '$gzip_ratio '
                'addr:$upstream_addr - status:$upstream_status - cachestatus:$upstream_cache_status';
# buffer directory for proxied requests; proxy_temp_path should be on the same partition as proxy_cache_path
proxy_temp_path /tmpfs_cache/proxy_temp_path;
# cache path is /tmpfs_cache/proxy_cache_path with a 2-level directory hierarchy;
# the cache zone is named cache_one with 500M of shared memory; entries not accessed
# for more than 1 day are purged automatically; the on-disk cache is capped at 15G
proxy_cache_path /tmpfs_cache/proxy_cache_path levels=1:2 keys_zone=cache_one:500m inactive=1d max_size=15g;
server {
    listen 80;
    server_name www.xxx.com;
    charset utf-8;
    access_log logs/cache_test.access.log main;
    error_log logs/cache_test.error.log warn;
    autoindex on;
    index index.html;
    location / {
        # use the zone cache_one defined by keys_zone in proxy_cache_path above
        proxy_cache cache_one;
        # cache key: domain name : port + the full URL the browser requested
        proxy_cache_key "$host:$server_port$request_uri";
        # cache responses with HTTP status 200 and 304 for 20 minutes
        proxy_cache_valid 200 304 20m;
        # pass the real host name to the proxied backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        # expose the cache status of the page as a response header
        add_header X-cache '$upstream_cache_status from $host';
    }
}


Proxy cache configuration Details

proxy_cache_key

Syntax: proxy_cache_key line;
Default: $scheme$proxy_host$request_uri;
Context: http, server, location

This directive specifies the key used to look up and store entries in the cache.

proxy_cache_key "$host$request_uri$cookie_user";

Note that the server's host name is not part of the cache key by default. If your site uses second-level domains in different location blocks, you may need to include the host name in the cache key:

proxy_cache_key "$scheme$host$request_uri";

proxy_cache_methods
Syntax: proxy_cache_methods [GET HEAD POST];
Default: proxy_cache_methods GET HEAD;
Context: http, server, location

GET and HEAD are always added implicitly; you cannot disable caching of GET/HEAD even if you only list POST:

proxy_cache_methods POST;


proxy_cache_min_uses
Syntax: proxy_cache_min_uses the_number;
Default: proxy_cache_min_uses 1;
Context: http, server, location
The number of times a response must be requested before it is cached; the default is 1.
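For example, to keep one-off URLs out of the cache, a response can be required to have been requested three times before it is stored (a minimal fragment):

```conf
proxy_cache_min_uses 3;
```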

proxy_cache_path
Syntax: proxy_cache_path path [levels=number] keys_zone=zone_name:zone_size [inactive=time] [max_size=size];
Default: none
Context: http
This directive specifies the cache path and related parameters. Cached data is stored in files, and the MD5 hash of the cache key is used as the file name. The levels parameter sets the number and width of the cache subdirectory levels, for example:

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m;


The resulting file name looks like:
/data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c


Each level can be one or two hex characters wide, written as x, x:x, or x:x:x, e.g. "2", "2:2", "1:1:2"; at most three levels are allowed.
All active keys and their metadata are stored in a shared memory zone, specified with the keys_zone parameter.
Note that each defined zone must use a distinct path, for example:
proxy_cache_path /data/nginx/cache/one   levels=1     keys_zone=one:10m;
proxy_cache_path /data/nginx/cache/two   levels=2:2   keys_zone=two:100m;
proxy_cache_path /data/nginx/cache/three levels=1:1:2 keys_zone=three:1000m;


Cached data that is not requested within the time specified by the inactive parameter is deleted; the default inactive time is 10 minutes.
A process named cache manager controls the on-disk cache size. It removes inactive entries and enforces the max_size parameter: when the cache grows beyond max_size, the least recently used data (LRU) is deleted.
The size of the memory zone should be set in proportion to the number of cached pages. The metadata size for one page (file) depends on the operating system: 64 bytes on FreeBSD/i386, 128 bytes on FreeBSD/amd64.
proxy_cache_path and proxy_temp_path should be on the same file system.
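As a rough back-of-the-envelope check (assuming the 128-bytes-per-entry figure above; the real per-entry size is OS-dependent), the 500M keys_zone from the earlier example could hold about four million keys:

```shell
#!/bin/sh
# keys_zone=cache_one:500m, ~128 bytes of metadata per cached file
zone_bytes=$((500 * 1024 * 1024))
entry_bytes=128
echo $((zone_bytes / entry_bytes))   # 4096000
```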


proxy_cache_valid
Syntax: proxy_cache_valid reply_code [reply_code ...] time;

Default: none
Context: http, server, location
Sets different cache times for different response codes, for example:

proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;

caches responses with code 200 and 302 for 10 minutes and 404 responses for 1 minute.
If only a time is given:
proxy_cache_valid 5m;

then only responses with code 200, 301, and 302 are cached.

The special argument "any" matches any response code:

proxy_cache_valid 200 302 10m;
proxy_cache_valid 301 1h;
proxy_cache_valid any 1m;
