Nginx + Tomcat: Configuring Static/Dynamic Separation and Load Balancing
Since my company uses Nginx, I got a little curious about it, so I decided to study it.
The version I use on Windows is nginx-1.8.1.
1. Starting Nginx
Double-click nginx.exe inside the nginx-1.8.1 folder. If two nginx processes show up in Task Manager, the startup succeeded.
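If you prefer the command line, nginx can also be started from a Command Prompt opened in the nginx folder. A minimal sketch; the install path below is an assumption based on the version mentioned above:

    :: run from inside the nginx-1.8.1 folder (assumed path)
    cd C:\nginx-1.8.1
    :: launches the master process and a worker process
    start nginx
    :: verify that the two nginx processes are running
    tasklist /fi "imagename eq nginx.exe"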
2. Common Nginx Commands
nginx -s stop    forcibly shuts nginx down
nginx -s quit    shuts nginx down gracefully
nginx -s reload  after the configuration file has been changed, restarts the worker processes so the new configuration takes effect
nginx -s reopen  reopens the log files
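Before running nginx -s reload, it is worth checking that the edited configuration file still parses; nginx -t performs a syntax check without touching the running processes. A small sketch:

    :: check the configuration file for syntax errors
    nginx -t
    :: if the test passes, apply the new configuration without downtime
    nginx -s reload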
3. Nginx Configuration
The following configuration pulls together information I found online; I'm writing it down here so I don't forget it.
# user and group that Nginx runs as
#user  nobody;
# number of worker processes (usually equal to the number of CPUs, or twice that)
worker_processes  1;

# error log path
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

# file that stores the nginx pid
#pid        logs/nginx.pid;


events {
    # network I/O model: epoll is recommended on Linux, kqueue on FreeBSD
    # use epoll improves performance, but it is not available on Windows
    #use epoll;
    # maximum number of connections allowed per worker
    worker_connections  1024;
}


http {
    # file type mapping table
    include       mime.types;
    # default type
    default_type  application/octet-stream;

    # log format definition
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    # enable kernel copy mode (sendfile) for the best I/O efficiency
    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    # HTTP/1.1 supports keep-alive connections; reducing the per-connection keep-alive time
    # can, to a degree, increase the number of connections served, so this value can be lowered
    keepalive_timeout  65;

    # enable gzip compression to effectively reduce network traffic
    gzip  on;
    gzip_min_length   1k;       # minimum size to compress: 1K
    gzip_buffers      4 16k;
    gzip_http_version 1.0;
    gzip_comp_level   2;
    gzip_types        text/plain application/x-javascript text/css application/xml;
    gzip_vary         on;

    # static file cache
    # maximum number of cached entries; unused entries are dropped after the inactive time
    open_file_cache max=655350 inactive=20s;
    # how often to check cache validity
    open_file_cache_valid 30s;
    # minimum number of uses within the inactive period for an entry to stay cached
    open_file_cache_min_uses 2;

    # xbq add
    # upstream is used for load balancing; requests are distributed round-robin to the server
    # addresses and ports configured here. max_fails is the number of failed requests allowed (default 1);
    # weight is the round-robin weight, which lets servers with different capacities take different shares of the load.
    upstream hostname {
        server 127.0.0.1:9000 max_fails=0 weight=2;
        server 127.0.0.1:9001 max_fails=0 weight=2;
    }

    server {
        listen       8181;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        root /img;   # create an img folder inside the nginx-1.8.1 folder to hold static resources

        location / {
            #root   html;
            #index  index.html index.htm;
            # xbq add
            proxy_pass http://hostname;
            # the following directives redefine or add request headers that are passed on to the proxied server
            # Host header of the request
            proxy_set_header Host $host;
            # real client IP
            proxy_set_header X-Real-IP $remote_addr;
            # proxy routing information; exposing IP addresses here carries some security risk
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # the protocol the user actually used
            proxy_set_header X-Forwarded-Proto $scheme;

            # default value: default.
            # When the backend returns a 302, the host in Tomcat's Location header is http://192.168.1.62:8080,
            # because the request Tomcat received was sent by nginx, and the URL nginx requested has host http://192.168.1.62:8080.
            # With "default", nginx automatically rewrites the host in the Location header to the host of the current user request.
            # Many online tutorials set this to "off", which disables the rewrite; the user's browser then follows the 302
            # to http://192.168.1.62:8080, exposing the backend server to the browser directly.
            # Unless you have a special requirement, do not change this setting.
            proxy_redirect default;
            client_max_body_size    10m;    # maximum size of a single file the client is allowed to upload
            client_body_buffer_size 128k;   # maximum number of bytes the proxy buffers for the client request body
            proxy_connect_timeout   90;     # timeout for nginx connecting to the backend server
            proxy_read_timeout      90;     # timeout waiting for the backend response after the connection succeeds
            proxy_buffer_size       4k;     # buffer size on the proxy (nginx) side for the backend response headers
            proxy_buffers           6 32k;  # proxy buffers; if the average page is under 32k, set it like this
            proxy_busy_buffers_size 64k;    # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k; # responses larger than the buffers are written to temp files in chunks of this size

        }

        # xbq add
        # static/dynamic separation: requests for the static file types defined here are served
        # directly from the local directory below instead of being forwarded to the upstream servers.
        location ~ \.(gif|jpg|jpeg|png|css|js|php)$ {
            root E:/staticResource;
            # expires sets the browser cache time to 7 days; if the static files are not updated often,
            # a longer value saves bandwidth and relieves pressure on the server.
            expires 7d;
        }

        # xbq add
        # enable the nginx status monitoring page
        location /nginxstatus {
            stub_status on;
            access_log  on;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}
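To check that the configuration above actually works, one option is to request the proxy port a few times and watch which Tomcat instance answers, then request a static file and the status page. A rough sketch using curl; the test.png file and the two Tomcat instances listening on 9000/9001 are assumptions based on the configuration above:

    :: dynamic requests go through the upstream block and are balanced between 127.0.0.1:9000 and 127.0.0.1:9001
    curl http://localhost:8181/

    :: static request: matched by the regex location and served from E:/staticResource, never reaching Tomcat
    curl -I http://localhost:8181/test.png

    :: status page enabled by the /nginxstatus location (stub_status)
    curl http://localhost:8181/nginxstatus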