Nginx reverse proxy and cache

The topology is LVS + Keepalived in front of two Nginx real servers, with Tomcat as the backend application servers. The Nginx configuration file nginx.conf is as follows:

user  nobody nobody;
worker_processes  12;

error_log  /var/log/nginx/error.log  crit;    # log only critical-level errors
#error_log  /var/log/nginx/debug.log  debug_http;
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

pid  /var/run/nginx.pid;

worker_rlimit_nofile  65535;    # maximum number of file descriptors a worker process may open

events {
    use epoll;
    worker_connections  65535;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    gzip              on;       # enable gzip compression
    gzip_buffers      4 8k;     # number and size of buffers used for the compressed response
    gzip_comp_level   6;        # compression level, 1-9; 1 is the fastest with the least compression
    gzip_min_length   1k;       # minimum response size (bytes) to compress; smaller responses are sent uncompressed
    gzip_http_version 1.1;      # minimum HTTP request version for which gzip is enabled
    #gzip_proxied     expired no-cache no-store private auth any off;
    gzip_proxied      any;      # compress responses for all proxied requests
    gzip_types        text/plain application/x-javascript text/css application/xml;
                                # additional MIME types to compress; "text/html" is always compressed
    gzip_vary         off;      # do not emit the "Vary: Accept-Encoding" response header

    sendfile          on;       # sendfile() copies data between file descriptors entirely in kernel space
    tcp_nodelay       on;       # use the TCP_NODELAY socket option; only affects keep-alive connections
    keepalive_timeout 60;       # keep-alive timeout between client and server; the server closes the connection after this time
    tcp_nopush        on;       # use the TCP_NOPUSH (FreeBSD) / TCP_CORK (Linux) socket option; only effective when sendfile is used

    server_names_hash_bucket_size 128;    # server name hash bucket size; the default depends on the CPU cache line size

    client_header_buffer_size   32k;      # buffer size for the client request header
    large_client_header_buffers 8 32k;    # number and size of buffers for large client request headers
    client_max_body_size        8m;       # maximum request body size accepted from a client (Content-Length header)
    client_body_buffer_size     128k;     # buffer size for the client request body
    client_body_temp_path       /tmp/client_temp 1 2;    # temporary file path for client request bodies

    open_file_cache max=65535 inactive=30s;
                                # enable the open file cache and record file metadata
                                # max: maximum number of cached entries; least recently used entries are evicted on overflow
                                # inactive: an entry not accessed within this time is removed (default 60s)
    open_file_cache_valid 60s;  # how often to re-validate open_file_cache entries
    open_file_cache_min_uses 1; # minimum number of accesses within the "inactive" period for a descriptor to stay cached;
                                # a larger value keeps descriptors in the cache longer

    server_name_in_redirect on; # use the primary server_name as the host in redirects; when off, nginx uses the host from the request Host header
    server_tokens off;          # do not expose the nginx version on error pages and in the Server header

    index index.html index.htm index.jsp index.php;

    ####### begin FastCGI #######
    fastcgi_connect_timeout 30;        # timeout for connecting to a FastCGI server; must not exceed 75 seconds
    fastcgi_send_timeout 30;           # timeout for sending a request to the FastCGI server
    fastcgi_read_timeout 30;           # timeout for reading the response from the FastCGI server; raise it if long-running
                                       # FastCGI processes cause upstream timeout errors in the error log
    fastcgi_buffer_size 64k;           # buffer size for reading the first part (headers) of the FastCGI response
    fastcgi_buffers 4 64k;             # number and size of buffers for reading the FastCGI response
    fastcgi_busy_buffers_size 128k;    # usually set to twice fastcgi_buffer_size
    fastcgi_temp_file_write_size 128k;
    ####### end FastCGI #######

    ####### begin proxy #######
    proxy_redirect off;
    proxy_connect_timeout 20;      # timeout for establishing a connection to the proxied server, in seconds; must not exceed 75 seconds
    proxy_send_timeout 30;         # timeout for transmitting a request to the proxied server, in seconds
    proxy_read_timeout 30;         # timeout for reading the response from the backend, in seconds; how long nginx waits for a response
    proxy_buffer_size 32k;         # buffer size for the first part of the response read from the proxied server
    proxy_buffers 32 64k;          # number and size of buffers for reading the response from the proxied server
    proxy_busy_buffers_size 64k;   # usually twice proxy_buffer_size
    proxy_pass_header Set-Cookie;  # pass this otherwise-hidden response header field on to the client
    fastcgi_pass_header Set-Cookie;
    proxy_temp_path   /www/Cache/proxy_temp;     # like client_body_temp_path in the http core module: where large proxied responses are buffered
    fastcgi_temp_path /www/Cache/fastcgi_temp;   # path for temporary files received from the FastCGI server;
                                                 # a hashed subdirectory hierarchy of up to three levels can also be specified

    fastcgi_cache_path /www/Cache/cache_t2       levels= keys_zone=cache_t2:200m inactive=1d max_size=3g;
    proxy_cache_path   /www/Cache/cache_t1       levels= keys_zone=cache_t1:200m inactive=1d max_size=3g;
    proxy_cache_path   /www/Cache/www.test2.com  levels= keys_zone=test2:200m    inactive=7d max_size=3g;
    proxy_cache_path   /www/Cache/www.test1.com  levels= keys_zone=test1:200m    inactive=7d max_size=3g;
    proxy_cache_path   /www/Cache/www.test.com   levels= keys_zone=test:200m     inactive=7d max_size=3g;
    ####### end proxy #######

    log_format access_sinoicity '$remote_addr | $remote_user | [$time_local] | $request | '
                                '$status | $body_bytes_sent | $http_referer | '
                                '$http_user_agent | $http_x_forwarded_for | $sent_http_content_range | $request_time | $host'
                                ' $request_body | $upstream_addr';
    log_format forcdn           '$remote_addr | $remote_user | [$time_local] | $request | $status | $response | '
                                '$http_referer | $http_user_agent | $response |'
                                ' $http_cdn_src_ip | $http_via';

    # the default server
    server {
        listen       80;
        server_name  localhost;
        access_log   off;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    include conf.d/*.conf;
}
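Before reloading with this configuration, the temporary and cache directories it references should exist and be writable by the worker user, and the syntax should be checked. The following is only a minimal sketch, assuming the nginx binary is on the PATH and the workers run as nobody (use nogroup instead of the nobody group on distributions that have it):

# create the temp/cache directories referenced in nginx.conf
mkdir -p /tmp/client_temp \
         /www/Cache/proxy_temp /www/Cache/fastcgi_temp \
         /www/Cache/cache_t1 /www/Cache/cache_t2 \
         /www/Cache/www.test.com /www/Cache/www.test1.com /www/Cache/www.test2.com
chown -R nobody:nobody /www/Cache /tmp/client_temp   # match the "user nobody nobody;" directive

# verify the syntax, then reload the running instance
nginx -t
nginx -s reload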

Virtual Host Configuration File

conf.d/www.test1.com.conf

server {
    listen       80;
    server_name  www.test.com;    # www.test1.com
    #charset utf-8;
    access_log   /var/log/nginx/www.test.com.log combined;

    location / {
        proxy_set_header Host www.test.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://test;    # test1, test2
        proxy_redirect http://www.test.com /;
    }

    #location ~ .*\.(gif|jpg|jpeg|png|bmp|ico|rar|css|js|zip|xml|txt|flv|swf|mid|doc|cur|xls|pdf|txt|mp3|wma)$ {
    location ~ .*\.(css|js|gif|jpg|jpeg|png|bmp|ico|xml|txt|swf|doc|cur|xls)$ {
        #proxy_set_header Host $http_host;
        proxy_set_header Host www.test.com;    # www.test1.com  www.test2.com
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_pass_header Set-Cookie;
        proxy_pass http://test;    # test1, test2
        #add_header Cache-Control "max-age=604800";
        add_header X-Cache "Cached by nginx - sinoicity-01";
        proxy_cache_valid 200 304 7d;
        proxy_cache test;    # test1, test2
        #proxy_cache_key $host$uri$is_args$args;
        proxy_cache_key $host$uri$is_args;
        expires 7d;
    }
}
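A quick way to check that the static-asset location really serves from the cache is to request the same URL twice on one of the nginx real servers and inspect the headers added above. This is only a sketch: 127.0.0.1 stands in for the nginx server's own address and style.css is a made-up asset name. Note that the X-Cache header in this configuration is a fixed string, so it only shows the request passed through that location; to distinguish hits from misses you could additionally emit the $upstream_cache_status variable with add_header.

# the first request fills the cache, the second should be answered from it
curl -sI -H "Host: www.test.com" http://127.0.0.1/style.css | grep -iE 'x-cache|cache-control|expires'
curl -sI -H "Host: www.test.com" http://127.0.0.1/style.css | grep -iE 'x-cache|cache-control|expires'

# cached objects land under the matching proxy_cache_path directory
ls /www/Cache/www.test.com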

conf.d/upstream.conf

upstream test {
    server 192.168.100.10:80;
}

upstream test1 {
    server 192.168.100.20:80;
}

upstream test2 {
    server 192.168.100.30:80;
}
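Before the proxy starts sending traffic to these pools, it is worth confirming that each backend answers on its own. A small sketch, run from one of the nginx machines; the Host header www.test.com is just an example and may differ for the backends behind test1 and test2:

# print the HTTP status code returned by each backend listed in upstream.conf
for backend in 192.168.100.10 192.168.100.20 192.168.100.30; do
    printf '%s: ' "$backend"
    curl -s -o /dev/null -w '%{http_code}\n' -H "Host: www.test.com" "http://$backend/"
done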


This article is from the "Bremen band" blog; please do not repost it.
