Nginx as a Web Cache Server


Lab environment: CentOS 6.4, IP: 192.168.56.120

If you increase the worker_connections parameter in the Nginx configuration, you must also raise the operating system's file descriptor limit; otherwise Nginx may report the following error:

worker_connections exceed open file resource limit: 1024
(1) Modify /etc/security/limits.conf and add the following lines, which raise the maximum number of processes and open files for all users to 65535:

* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535

Here "*" applies the limit to every user; "soft" or "hard" selects whether the soft or hard limit is modified; 65535 is the new limit value, that is, the maximum number of open files (note that the soft limit must be less than or equal to the hard limit). Save the file.
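The soft-versus-hard constraint above can be sketched as a small check. This is a hypothetical helper, not part of the original setup; it parses limits.conf-style lines and verifies the soft nofile limit does not exceed the hard one:

```shell
# Sample lines mirroring what was added to /etc/security/limits.conf
limits='* soft nofile 65535
* hard nofile 65535'

# Extract the soft and hard nofile values (fields: domain type item value)
soft=$(printf '%s\n' "$limits" | awk '$2=="soft" && $3=="nofile" {print $4}')
hard=$(printf '%s\n' "$limits" | awk '$2=="hard" && $3=="nofile" {print $4}')

# The soft limit must be <= the hard limit, or the entry is invalid
if [ "$soft" -le "$hard" ]; then
  echo "ok: soft ($soft) <= hard ($hard)"
else
  echo "error: soft limit exceeds hard limit" >&2
fi
```

On a live system the current values can be inspected with `ulimit -Sn` (soft) and `ulimit -Hn` (hard).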


(2) Modify /etc/pam.d/login and add the following line to the file:

session required /lib/security/pam_limits.so

This tells Linux to call the pam_limits.so module after a user logs in, to set the maximum resources that user may consume (including the maximum number of open files); pam_limits.so reads these limits from /etc/security/limits.conf. Save the file.


(3) Modify /etc/rc.local and add the following line to the script:

echo "65535" > /proc/sys/fs/file-max

After completing the three steps above, reboot the server and use the ulimit -n command to check that the system's maximum number of file descriptors is the value just set:

[root@localhost ~]# ulimit -n

65535


Experiment configuration:

1. Install Nginx

[root@localhost ~]# yum install pcre-devel openssl-devel perl-ExtUtils-Embed gcc-c++ make wget
[root@localhost src]# wget http://nginx.org/download/nginx-0.8.32.tar.gz
[root@localhost src]# tar xf ngx_cache_purge-1.0.tar.gz
[root@localhost src]# tar xf nginx-0.8.32.tar.gz
[root@localhost src]# useradd -s /sbin/nologin -M www
[root@localhost nginx-0.8.32]# ./configure --user=www --group=www --add-module=../ngx_cache_purge-1.0 --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module
[root@localhost nginx-0.8.32]# make && make install

Note: use an older Nginx version here. Compiling ngx_cache_purge-1.0 against the newer 1.5.x releases fails with:

make[1]: *** [objs/addon/ngx_cache_purge-1.0/ngx_cache_purge_module.o] Error 1
make[1]: Leaving directory '/usr/src/nginx-1.5.3'
make: *** [build] Error 2


2. Configure Nginx

[root@localhost ~]# mkdir -p /data/proxy_temp_dir
[root@localhost ~]# mkdir -p /data/web/www
[root@localhost ~]# mkdir -p /data/proxy_cache_dir

[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf

user www;
worker_processes 8;
error_log logs/error.log crit;
pid logs/nginx.pid;
worker_rlimit_nofile 65535;

events {
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 300m;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    client_body_buffer_size 512k;
    proxy_connect_timeout 5;
    proxy_read_timeout 60;
    proxy_send_timeout 5;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    # Note: the paths given to proxy_temp_path and proxy_cache_path must be on the same partition.
    # Cache zone "cache_one": 200 MB of shared memory for keys, entries not accessed for 1 day are
    # removed, and the on-disk cache may grow to 30 GB.
    proxy_temp_path /data/proxy_temp_dir;
    proxy_cache_path /data/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;

    upstream web_proxy_cache {
        server 192.168.56.113:80 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.56.114:80 weight=1 max_fails=2 fail_timeout=30s;
    }

    server {
        listen 80;
        server_name 192.168.56.120;
        root /data/web/www;
        index index.html index.htm;

        location / {
            # If a backend server returns 502 or 504, or the request errors or times
            # out, forward the request to another server in the upstream pool (failover).
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache cache_one;
            # Set different cache times for different HTTP status codes.
            proxy_cache_valid 200 12h;
            # Combine the host, URI and arguments into the cache key; Nginx hashes this
            # key to place the cached content in the two-level cache directory.
            proxy_cache_key $host$uri$is_args$args;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://web_proxy_cache;
            expires 1d;
        }

        # Used to clear the cache: if a URL is http://192.168.56.120/test.txt, its cache
        # entry can be cleared by visiting http://192.168.56.120/purge/test.txt.
        location ~ /purge(/.*) {
            # Only the specified IP addresses or segments may clear URL caches.
            allow 127.0.0.1;
            allow 192.168.56.0/24;
            deny all;
            proxy_cache_purge cache_one $host$1$is_args$args;
        }

        # Dynamic requests ending in .php, .jsp or .cgi are not cached.
        location ~ .*\.(php|jsp|cgi)?$ {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_pass http://web_proxy_cache;
        }

        access_log off;
    }
}
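The key-to-file mapping used by the cache can be sketched as follows. This is an illustration, not part of the setup; it assumes a proxy_cache_key of $host$uri$is_args$args with no query string, and a proxy_cache_path of /data/proxy_cache_dir with levels=1:2 as configured above. Nginx names each cache file after the MD5 of the key, using the last hex character and the two characters before it as the two directory levels:

```shell
# Cache key for a request to http://192.168.56.120/123.jpg (no arguments)
key='192.168.56.120/123.jpg'

# MD5 of the key names the cache file
md5=$(printf '%s' "$key" | md5sum | awk '{print $1}')

# levels=1:2 -> first level: last hex char; second level: the two chars before it
l1=$(printf '%s' "$md5" | tail -c 1)
l2=$(printf '%s' "$md5" | tail -c 3 | head -c 2)

echo "/data/proxy_cache_dir/$l1/$l2/$md5"
```

This is why the comment in the configuration refers to a "second-level cache directory": the hash, not the URL, determines where the object lands on disk.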


Check whether the configuration is correct:

[root@localhost ~]# /usr/local/nginx/sbin/nginx -t
the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
configuration file /usr/local/nginx/conf/nginx.conf test is successful

Start Nginx:

[root@localhost ~]# /usr/local/nginx/sbin/nginx
3. Test

Create different pages on the web1 and web2 web servers:

[root@web1 ~]# vim /data/web/www/index.html
<h1>welcome to web1</h1>

Upload an image named 123.jpg to the /data/web/www/ directory, then start both web servers:

[root@web1 ~]# service httpd start
[root@web2 ~]# service httpd start

Access:



Test whether the proxy works normally and serves as the web cache:

[root@localhost ~]# curl -I http://192.168.56.120

4. View the cache hit status through logs

1. Enable logging

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" '
                '"$upstream_cache_status"';
access_log logs/access.log main;


2. The $upstream_cache_status variable shows the cache status. You can also add an HTTP header in the configuration to expose it to clients:

location / {
    proxy_next_upstream http_502 http_504 error timeout invalid_header;
    proxy_cache cache_one;
    proxy_cache_valid 200 304 12h;
    proxy_cache_key $host$uri$is_args$args;
    add_header Nginx-Cache "$upstream_cache_status";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://web_proxy_cache;
    expires 1d;
}
3. Check for cache hits in the log. The first access is not cached; the status is MISS:

192.168.56.1 - - [09/Aug/2013:12:18:02 +0800] "GET /123.jpg HTTP/1.1" 200 4496 "http://192.168.56.120/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36" "-" "MISS"

The second access is served from the cache; the status is HIT:

192.168.56.1 - - [09/Aug/2013:12:18:02 +0800] "GET /123.jpg HTTP/1.1" 304 0 "http://192.168.56.120/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36" "-" "HIT"


4. Calculate the cache hit ratio

[root@localhost logs]# awk '{if ($NF == "\"HIT\"") hit++} END {printf "%.2f%%\n", hit/NR*100}' access.log
51.85%
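The same calculation can be checked offline on sample data. A sketch: the two lines below are shortened versions of the log entries shown earlier, with "$upstream_cache_status" as the final quoted field, exactly as the log_format above produces it:

```shell
# Two sample access-log lines: one MISS, one HIT
log='1.1.1.1 - - [09/Aug/2013:12:18:02 +0800] "GET /123.jpg HTTP/1.1" 200 4496 "-" "curl" "-" "MISS"
1.1.1.1 - - [09/Aug/2013:12:18:05 +0800] "GET /123.jpg HTTP/1.1" 304 0 "-" "curl" "-" "HIT"'

# Count lines whose last field is "HIT" (quotes included) and divide by the total
ratio=$(printf '%s\n' "$log" | awk '{if ($NF == "\"HIT\"") hit++} END {printf "%.2f%%", hit/NR*100}')
echo "$ratio"
```

With one HIT out of two requests, this prints 50.00%. Note that the denominator is all logged requests, so responses that bypass the cache (for example the uncached .php/.jsp/.cgi location) lower the reported ratio.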


This article is from the "pmghong" blog; please keep this source: http://pmghong.blog.51cto.com/3221425/1301412
