Using nginx proxy_cache for website caching

Why do we need a web cache? The most important reason is to relieve traffic pressure. As website traffic grows, a single machine that serves both static files and dynamic scripts becomes hard to scale and eventually cannot handle the load. At the same time, the page content of many sites does not change frequently. We can therefore organize a website in two layers: a front-end web cache plus a back-end web server.

 

The front-end web cache can be implemented in several ways, but the principle is the same: the result page of a request is stored as a static copy with a timeout. Once the cached page expires, the next request is passed back to the back-end web server so the content can be refreshed. Squid was the popular choice before nginx, but Squid cannot make full use of multi-core processors, so more and more websites now use nginx as the front-end web cache.
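This timeout idea maps onto the proxy_cache_valid directive used later in this article; as a quick illustration (the times here are arbitrary examples, not the values used in the final configuration below):

    proxy_cache_valid 200 302 10m;   # successful responses stay fresh for 10 minutes
    proxy_cache_valid 404 1m;        # cache "not found" responses only briefly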

 

To use the nginx cache function, make sure the proxy module is compiled into nginx. You can check the compile-time parameters with the -V option (uppercase V prints the version plus the configure arguments; lowercase v prints only the version number). I compiled with the default parameters, as shown below:

 

root@SNDA-172-17-12-117:/usr/local/nginx# ./nginx -V
nginx version: nginx/1.2.3
built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1)
TLS SNI support enabled
configure arguments: --sbin-path=/usr/local/nginx --conf-path=/usr/local/nginx.conf --pid-path=/usr/local/nginx.pid --with-http_ssl_module --with-pcre=/usr/local/src/pcre-8.21 --with-zlib=/usr/local/src/zlib-1.2.7
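A quick way to confirm that the proxy module was not disabled is to look for the --without-http_proxy_module flag in the configure arguments (nginx -V writes to stderr, hence the redirect); this is just a convenience check run from the same directory as above:

    ./nginx -V 2>&1 | grep -- '--without-http_proxy_module' \
        || echo "proxy module was not disabled, proxy_cache is available"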

 

All nginx modules must be added at compile time; they cannot be loaded dynamically at runtime. Modules that are included by default but not needed can be disabled with the corresponding --without option when configuring.
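For example, a rebuild that keeps the configure parameters shown above but leaves out two modules that are often unneeded on a pure cache/proxy box might look like this (the choice of modules to drop is only an illustration):

    ./configure --sbin-path=/usr/local/nginx --conf-path=/usr/local/nginx.conf --pid-path=/usr/local/nginx.pid \
        --with-http_ssl_module --with-pcre=/usr/local/src/pcre-8.21 --with-zlib=/usr/local/src/zlib-1.2.7 \
        --without-http_autoindex_module --without-http_ssi_module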

The default modules installed by nginx are as follows:

Module Name | Description | Version | How to disable
Core | Controls ports, locations, error pages, aliases, and other essentials. | | --without-http
Access | Allow/deny based on IP address. | | --without-http_access_module
Auth Basic | Basic HTTP authentication. | | --without-http_auth_basic_module
Auto Index | Generates automatic directory listings. | | --without-http_autoindex_module
Browser | Interprets the "User-Agent" string. | 0.4.3 | --without-http_browser_module
Charset | Recodes web pages. | | --without-http_charset_module
Empty GIF | Serves a 1x1 image from memory. | 0.3.10 | --without-http_empty_gif_module
FastCGI | FastCGI support. | | --without-http_fastcgi_module
Geo | Sets config variables using key/value pairs of IP addresses. | 0.1.17 | --without-http_geo_module
Gzip | Gzips responses. | | --without-http_gzip_module
Headers | Sets arbitrary HTTP response headers. | |
Index | Controls which files are used as the index. | |
Limit Requests | Limits the frequency of connections from a client. | 0.7.20 | --without-http_limit_req_module
Limit Zone | Limits simultaneous connections from a client. Deprecated in 1.1.8; use Limit Conn instead. | 0.5.6 | --without-http_limit_zone_module
Limit Conn | Limits concurrent connections based on a variable. | | --without-http_limit_conn_module
Log | Customizes access logs. | |
Map | Sets config variables using arbitrary key/value pairs. | 0.3.16 | --without-http_map_module
Memcached | Memcached support. | | --without-http_memcached_module
Proxy | Proxies to upstream servers. | | --without-http_proxy_module
Referer | Filters requests based on the Referer header. | | --without-http_referer_module
Rewrite | Request rewriting using regular expressions. | | --without-http_rewrite_module
SCGI | SCGI protocol support. | 0.8.42 | --without-http_scgi_module
Split Clients | Splits clients based on some conditions. | 0.8.37 | --without-http_split_clients_module
SSI | Server Side Includes. | | --without-http_ssi_module
Upstream | For load balancing. | | --without-http_upstream_ip_hash_module (ip_hash directive only)
User ID | Issues identifying cookies. | | --without-http_userid_module
UWSGI | UWSGI protocol support. | 0.8.40 | --without-http_uwsgi_module
X-Accel | X-Sendfile-like module. | |

proxy_pass and proxy_cache are the most commonly used directives in the proxy module.

The web cache function of nginx is implemented mainly by the proxy_cache and fastcgi_cache directive sets and their related directives. proxy_cache is used to cache content that nginx reverse-proxies from back-end servers (typically static content), while fastcgi_cache is used to cache responses produced by FastCGI processes (dynamic content). (I am not entirely clear about the difference between the two; they seem to overlap in function.)
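For contrast, here is a minimal sketch of how fastcgi_cache is typically used; the cache path, zone name, backend address, and key below are assumptions for illustration and are not part of the setup described in this article:

    fastcgi_cache_path /data/fcgi-cache keys_zone=fcgi-cache:8m max_size=1000m inactive=600m;

    server {
        listen 80;
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;                  # assumed PHP FastCGI backend
            include fastcgi_params;                       # standard FastCGI parameter file shipped with nginx
            fastcgi_cache fcgi-cache;                     # use the cache zone declared above
            fastcgi_cache_key $scheme$host$request_uri;   # fastcgi_cache_key has no default and must be set
            fastcgi_cache_valid 200 10m;                  # cache successful responses for 10 minutes
        }
    }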

With the proxy module compiled in, edit the nginx configuration file. The key part is the cache-related settings: the block between the "# Cache begin" and "# Cache end" comments, and the proxy_cache directives inside the location block.

This is my nginx.conf configuration file:

user www-data;
worker_processes 1;

# error_log logs/error.log;
# error_log logs/error.log notice;
# error_log logs/error.log info;

# pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" "$host"';

    # access_log logs/access.log main;

    sendfile on;
    # tcp_nopush on;

    # keepalive_timeout 0;
    keepalive_timeout 65;

    # Compression Settings
    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_min_length 1100;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml+rss text/javascript;
    # Some versions of IE 6 don't handle compression well on some mime-types,
    # so just disable it for them
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    # Set a Vary header so downstream proxies don't send cached gzipped
    # content to IE6
    gzip_vary on;
    # End gzip

    # Cache begin
    proxy_buffering on;
    proxy_cache_valid any 10m;
    proxy_cache_path /data/cache keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /data/temp;
    proxy_buffer_size 4k;
    proxy_buffers 100 8k;
    # Cache end

    ## Basic reverse proxy server ##
    ## Apache (vm02) backend for www.example.com ##
    upstream apachephp {
        server www.quancha.cn:8080; # Apache1
    }

    ## Start www.quancha.cn ##
    server {
        listen 80;
        server_name *.quancha.cn;

        access_log logs/quancha.access.log main;
        error_log logs/quancha.error.log;
        root html;
        index index.html index.htm index.php;

        ## Send request back to apache1 ##
        location / {
            proxy_pass http://apachephp;
            proxy_cache my-cache;
            proxy_cache_valid 200;

            # Proxy Settings
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_max_temp_file_size 0;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            # End Proxy Settings
        }
    }
    ## End www.quancha.cn ##

}

 

The directives starting with proxy_ in the configuration file can be understood more or less literally. Note that the directories set by proxy_cache_path and proxy_temp_path should be on the same partition, so that nginx can move responses from the temporary path into the cache with a cheap rename instead of copying them.
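To verify that the cache is actually serving hits, one simple addition of mine (not part of the configuration above) is to expose the $upstream_cache_status variable as a response header inside the "location /" block:

    # inside the "location /" block above
    add_header X-Cache-Status $upstream_cache_status;   # reports MISS, HIT, EXPIRED, etc. for each response

A repeated request for the same URL should then switch from MISS to HIT once the page has been cached.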

Finally, start nginx and welcome the exciting moment. If anything in this article is wrong or gives you trouble, leave a message to let me know.
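A minimal way to check the configuration and start the server, assuming the install layout used earlier in this article:

    cd /usr/local/nginx
    ./nginx -t    # test the configuration file for syntax errors
    ./nginx       # start nginx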
