Installing nginx-1.10.1 on Windows as a reverse proxy for IIS sites: step-by-step example



First, download the package from the official website and decompress it. The extraction path should not contain Chinese characters.

Nginx configuration path

On Windows, nginx accepts "/", "\\" (an escaped backslash), or a single "\" as the path separator in its configuration. The single "\" is the form most likely to cause problems, so avoid it whenever possible.
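To illustrate, here are the three separator styles applied to the log path used later in this article (the drive-letter prefix is illustrative; only one error_log directive should actually be active at a time):

```nginx
# forward slashes (recommended)
error_log  E:/WorkSoftWare/nginx-1.10.1/logs/error.log;

# escaped backslashes (also safe)
# error_log  E:\\WorkSoftWare\\nginx-1.10.1\\logs\\error.log;

# single backslashes often work but are the most fragile form
# error_log  E:\WorkSoftWare\nginx-1.10.1\logs\error.log;
```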

Do not start nginx.exe from some other working directory by prefixing the command with a path; run it from its own folder. Otherwise an error is thrown saying the config file path cannot be found.

For example, suppose the package is decompressed to drive E:.

Open a command prompt and change to the folder where nginx.exe is located: cd E:\WorkSoftWare\nginx-1.10.1

Then run the command, first making sure the nginx.conf file configuration is correct.
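For reference, a minimal command sequence for this step (run from the nginx folder; nginx -t validates the configuration before anything is started):

```bat
:: change to the folder containing nginx.exe
cd /d E:\WorkSoftWare\nginx-1.10.1

:: test that nginx.conf parses cleanly
nginx -t

:: launch nginx in the background
start nginx

:: reload after editing nginx.conf
nginx -s reload

:: fast shutdown
nginx -s stop
```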

In fact, the configuration file is where almost all of the important work with nginx happens. Unless you want to modify the underlying source code, there is little else that requires an application developer's attention.

The nginx.conf configuration is as follows:

#user  nobody;
worker_processes  1;    # number of worker processes; more can be configured

# global error log and PID file
error_log  /WorkSoftWare/nginx-1.10.1/logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

pid        /WorkSoftWare/nginx-1.10.1/logs/nginx.pid;

events {
    worker_connections  1024;   # maximum connections per worker process
                                # (max connections = worker_connections * worker_processes)
}

# HTTP server: reverse proxying and load balancing
http {
    include       mime.types;   # relative to the directory containing nginx.conf;
                                # an absolute path can also be used
    default_type  application/octet-stream;   # default type: octet stream

    # log format
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    # access log
    #access_log  /WorkSoftWare/nginx-1.10.1/logs/access.log  main;

    sendfile        on;    # use the sendfile() system call; more efficient than the default
    #tcp_nopush     on;    # send response headers in one packet; only used when sendfile is on

    # connection timeout
    #keepalive_timeout  0;
    keepalive_timeout  65;

    gzip  on;    # enable gzip compression

    # server cluster
    # An upstream defines a load-balanced server group; multiple upstreams can be
    # configured to serve different virtual hosts.
    # nginx supports several allocation methods:
    #   1) round robin (default): requests are distributed across the backends one by
    #      one in order; a backend that goes down is removed automatically.
    #   2) weight: sets the polling probability; the weight is proportional to the
    #      share of requests and is useful when backend performance is uneven.
    #   3) ip_hash: each request is assigned by a hash of the client IP, so each
    #      visitor sticks to one backend; this can solve session problems.
    #   4) fair (third-party)
    #   5) url_hash (third-party)
    #
    #upstream imicrosoft.net {    # cluster name
    #    # weight: the higher the weight, the higher the chance of being selected
    #    server 192.98.12.60:1985 weight=3 max_fails=2 fail_timeout=30s;
    #    server 192.98.12.42:8086 weight=3 max_fails=2 fail_timeout=30s;
    #    # parameter notes:
    #    #   down   - this server does not take part in the load balancing
    #    #   weight - defaults to 1; the larger the weight, the larger the share
    #    #   backup - receives requests only when all non-backup servers are down
    #    #            or busy, so it carries the least load
    #    # multiple entries may point at different ports on the same machine
    #    #server 127.0.0.1:8055 weight=4 down;
    #    #server 127.0.0.1:8010 weight=5 backup;
    #}

    upstream localhost {
        server 127.0.0.1:9000 weight=3 max_fails=2 fail_timeout=200s;
        server 127.0.0.1:8086 weight=5 max_fails=2 fail_timeout=200s;
    }

    # the virtual host for this nginx installation: listening address and port
    server {
        listen       9090;        # listen on port 9090
        server_name  localhost;   # domain of this server; multiple server_name
                                  # entries can be configured
        charset      utf8;
        #charset koi8-r;

        # access log for this virtual host
        #access_log  logs/host.access.log  main;

        # /images/*, /js/*, /css/* can be served directly from local files without
        # forwarding, although with many files the effect is limited.
        #location ~ .*\.(jpg|jpeg|gif|css|png|ico|html)$ {
        #    expires 30d;
        #    root    /nginx-1.10.1;
        #    break;
        #}

        # enable load balancing for "/"
        location / {
            root   html;    # html subdirectory of the nginx install directory
            index  index.html index.htm index.aspx;
            #proxy_pass http://www.imicrosoft.net;   # forward to the upstream of the
                                                     # load-balancing servers above
            #autoindex on;   # list files and subdirectories when there is no index page

            # preserve the real client information
            proxy_redirect     off;
            proxy_set_header   Host              $host;
            proxy_set_header   X-Real-IP         $remote_addr;
            proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;

            # handshake timeout with the backend server
            #proxy_connect_timeout 12;
            # after connecting, how long to wait for the backend response
            #proxy_read_timeout 90;
            #proxy_send_timeout 90;
            # proxy request buffering: the buffer holds the user's header information
            # for nginx's rule processing; usually only the headers need to fit
            #proxy_buffer_size 4k;
            #proxy_buffers 4 32k;            # number and size of buffers per connection
            #proxy_busy_buffers_size 64k;    # commonly recommended: proxy_buffers size * 2
            #proxy_temp_file_write_size 64k; # size limit for proxy cache temp files
            #proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
            #proxy_max_temp_file_size 128m;

            proxy_pass http://localhost;     # start proxying to the upstream above
            client_max_body_size 10m;        # maximum request body size allowed
                                             # from a client
        }

        # example 1
        #location / {
        #    proxy_pass       http://imicrosoft.net;
        #    proxy_redirect   default;
        #    proxy_set_header Host            $host;
        #    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #}

        # example 2
        #location /tileservice {
        #    proxy_pass       http://cluster/inclutileservice/tileService;
        #    proxy_set_header Host            $host;
        #    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #}

        #error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy PHP scripts to Apache listening on 127.0.0.1:80
        # (i.e., enable load balancing for "/*.php")
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        #location /baidu {
        #    proxy_pass       http://www.google.com;
        #    proxy_set_header Host            $host;
        #    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #}

        # pass PHP scripts to a FastCGI server listening on 127.0.0.1:9000
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # another virtual host using a mix of IP-, name-, and port-based configuration
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename alias another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
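Stripped of the commented-out examples, the working core of the file above reduces to the sketch below (the ports, weights, and upstream name are the ones used in this article; point the server entries at your own IIS site bindings):

```nginx
# the two IIS sites being balanced across
upstream localhost {
    server 127.0.0.1:9000 weight=3 max_fails=2 fail_timeout=200s;
    server 127.0.0.1:8086 weight=5 max_fails=2 fail_timeout=200s;
}

server {
    listen       9090;
    server_name  localhost;

    location / {
        proxy_pass http://localhost;    # forward to the upstream above
        proxy_redirect off;
        # preserve the real client information for IIS
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
    }
}
```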

Result: browsing to the nginx address (port 9090) now returns the IIS site through the proxy.
