My company uses Nginx, and since I had only just come into contact with it I was curious, so I spent some time studying it.

The version I use on Windows is nginx-1.8.1.
1. Starting Nginx

Double-click nginx.exe in the nginx-1.8.1 folder. Startup has succeeded when two nginx processes appear in Task Manager (the master process and a worker).
2. Common Nginx Commands

- `nginx -s stop` — forced (fast) shutdown
- `nginx -s quit` — graceful shutdown
- `nginx -s reload` — after changing the configuration file, restart the worker processes so the new configuration takes effect
- `nginx -s reopen` — reopen the log files
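A reload only helps if the new file actually parses; with a broken file the workers keep running on the old configuration. A minimal sketch of a safer reload step, assuming `nginx` is on the PATH (`reload_nginx` is just an illustrative helper name, not an nginx command):

```shell
# Validate the configuration first; only signal a reload when it parses cleanly.
reload_nginx() {
  nginx -t && nginx -s reload
}
```

`nginx -t` parses nginx.conf and reports either "syntax is ok" or the offending line, so a typo never reaches the running workers.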
3. Nginx Configuration

The configuration below pulls together information from around the web; I am jotting it down so I don't forget it.
```nginx
# user and group nginx runs as
#user  nobody;

# number of worker processes (usually the CPU count, or twice it)
worker_processes  1;

# error log path
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

# file that stores the master process id
#pid        logs/nginx.pid;

events {
    # Network IO model: epoll is recommended on Linux, kqueue on FreeBSD;
    # epoll improves performance, but Windows does not need it.
    #use epoll;

    # maximum number of connections per worker
    worker_connections  1024;
}

http {
    # map of file extensions to MIME types
    include       mime.types;
    # default type
    default_type  application/octet-stream;

    # log format definition
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    # kernel copy mode; keep it on for the fastest IO efficiency
    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    # HTTP/1.1 supports persistent (keep-alive) connections. Reducing this value
    # shortens each connection's lifetime somewhat and increases the number of
    # connections the server can answer, so it is generally sensible to lower it.
    keepalive_timeout  65;

    # gzip compression settings; effectively reduces network traffic
    gzip  on;
    gzip_min_length  1k;        # minimum 1K before compressing
    gzip_buffers     4 16k;
    gzip_http_version  1.0;
    gzip_comp_level  2;
    gzip_types  text/plain application/x-javascript text/css application/xml;
    gzip_vary  on;

    # static file cache: maximum number of entries, and idle lifetime
    open_file_cache  max=655350 inactive=20s;
    # interval between cache validity checks
    open_file_cache_valid     30s;
    # minimum number of uses before a file stays cached
    open_file_cache_min_uses  2;

    # xbq add
    # upstream does the load balancing: list the server addresses and ports to
    # poll. max_fails is how many failed requests are allowed (default 1);
    # weight is the polling weight — distributing by weight balances the
    # access rate across the servers.
    upstream hostname {
        server 127.0.0.1:9000 max_fails=0 weight=2;
        server 127.0.0.1:9001 max_fails=0 weight=2;
    }

    server {
        listen       8181;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        # create an img folder in the nginx-1.8.1 folder for static resources
        root  img;

        location / {
            #root   html;
            #index  index.html index.htm;

            # xbq add
            proxy_pass http://hostname;

            # The next three directives redefine and add request headers that
            # are forwarded to the proxied server.
            # host from the request header
            proxy_set_header Host $host;
            # real client IP
            proxy_set_header X-Real-IP $remote_addr;
            # proxy routing information; exposing IPs here carries some risk
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # protocol the user actually used
            proxy_set_header X-Forwarded-Proto $scheme;

            # The default value is "default". When the backend (Tomcat) answers
            # with a 302, the Location host in its header is
            # http://192.168.1.62:8080, because the request Tomcat received was
            # sent by nginx, and the URL nginx used had host
            # http://192.168.1.62:8080. Set to "default", nginx automatically
            # replaces the Location host in the response header with the host of
            # the current user request. Many online tutorials set this to "off",
            # disabling the replacement, so the user's browser receives the 302
            # and jumps to http://192.168.1.62:8080, exposing the backend server
            # directly to the browser. Unless you have a special need, do not
            # add that superfluous configuration.
            proxy_redirect default;

            client_max_body_size       10m;  # largest single file a client may send
            client_body_buffer_size    128k; # buffer for client request bodies
            proxy_connect_timeout      90;   # timeout connecting nginx to the backend
            proxy_read_timeout         90;   # backend response timeout after connect
            proxy_buffer_size          4k;   # buffer for the backend response headers
            proxy_buffers              6 32k; # the average web page is under 32k, hence this setting
            proxy_busy_buffers_size    64k;  # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;  # temp cache size; above this, data is passed from upstream
        }

        # xbq add
        # Static/dynamic separation: the static pages defined here are read
        # directly from the nginx publishing directory rather than proxied.
        location ~ \.(gif|jpg|jpeg|png|css|js|php)$ {
            root E:/staticresource;
            # expires sets the browser cache lifetime to 7 days; if the static
            # pages are rarely updated it can be set longer, which saves
            # bandwidth and relieves server pressure.
            expires 7d;
        }

        # xbq add
        # enable the nginx status page
        location /nginxstatus {
            stub_status on;
            access_log  on;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using a mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
```
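With equal weights, the `upstream` block above simply alternates requests between the two backends. A toy sketch of that round-robin polling, just to illustrate the idea (plain shell; the address list mirrors the config, and this is not how nginx is implemented internally):

```shell
# Two backends, both weight=2, as in the upstream block: with equal weights
# nginx's round robin sends successive requests to them alternately.
backends="127.0.0.1:9000 127.0.0.1:9001"
picks=""
for req in 1 2 3 4; do
  set -- $backends            # word-split the list into $1, $2
  [ $(( (req - 1) % 2 )) -eq 1 ] && shift
  picks="$picks$1 "
done
echo "$picks"
# → 127.0.0.1:9000 127.0.0.1:9001 127.0.0.1:9000 127.0.0.1:9001
```

Unequal weights would skew this distribution proportionally, which is how the access rate between servers is balanced.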
That is the entire content of this article. I hope it helps you in your studies, and I hope you will continue to support us.