How to Configure Nginx as a Load Balancer on Windows Server

Source: Internet
Author: User
Tags: nginx, server, what is nginx, nginx load balancing

1. Download Nginx:
http://nginx.org/download/nginx-1.2.5.zip
Unzip the package to the C:\nginx directory.
2. Create a website on each of the two servers:
S1: 192.168.16.35:8054
S2: 192.168.16.16:8089
3. Open the configuration file
C:\nginx\conf\nginx.conf
and edit it as follows:

# user and group; not used on Windows
#user  nobody;
# number of worker processes (typically the number of CPU cores, or twice that;
# e.g. two quad-core CPUs would give 8)
worker_processes  1;
# error log path; the log level can be set to debug | info | notice | warn | error | crit
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
error_log  logs/error.log  info;
# pid file path
#pid  logs/nginx.pid;

# working mode and connection limits
events {
    # event model: epoll is recommended on Linux, kqueue on FreeBSD
    #use epoll;
    # maximum connections per worker
    worker_connections  1024;
}

# the http server; its reverse proxy features provide the load balancing
http {
    # mime types
    include       mime.types;
    default_type  application/octet-stream;
    # log formats
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;
    log_format main     '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $bytes_sent '
                        '"$http_referer" "$http_user_agent" "$http_x_forwarded_for" '
                        '"$gzip_ratio"';
    log_format download '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $bytes_sent '
                        '"$http_referer" "$http_user_agent" '
                        '"$http_range" "$sent_http_content_range"';

    # request buffers
    client_header_buffer_size    1k;
    large_client_header_buffers  4 4k;

    # access log
    access_log  logs/access.log  main;
    client_header_timeout  3m;
    client_body_timeout    3m;
    send_timeout           3m;

    sendfile     on;
    tcp_nopush   on;
    tcp_nodelay  on;
    #keepalive_timeout  0;
    keepalive_timeout  65;

    # enable the gzip module
    gzip              on;
    gzip_min_length   1100;
    gzip_buffers      4 8k;
    gzip_types        text/plain application/x-javascript text/css application/xml;

    output_buffers   1 32k;
    postpone_output  1460;

    server_names_hash_bucket_size  128;
    client_max_body_size           8m;

    fastcgi_connect_timeout       300;
    fastcgi_send_timeout          300;
    fastcgi_read_timeout          300;
    fastcgi_buffer_size           64k;
    fastcgi_buffers               4 64k;
    fastcgi_busy_buffers_size     128k;
    fastcgi_temp_file_write_size  128k;
    gzip_http_version  1.1;
    gzip_comp_level    2;
    gzip_vary          on;

    # list of backend servers for load balancing
    upstream localhost {
        # distribute requests to the backend servers by client IP hash,
        # so each client sticks to one server (avoids session problems)
        ip_hash;
        # if the same machine has several network interfaces, its route and IP may differ
        # the weight parameter sets the weight: a higher weight means a higher share of requests
        #server localhost:8080 weight=1;
        #server localhost:9080 weight=1;
        server 192.168.16.35:8054 max_fails=2 fail_timeout=600s;
        server 192.168.16.16:8089 max_fails=2 fail_timeout=600s;
    }

    # virtual host
    server {
        listen       80;
        server_name  192.168.16.16;

        #charset koi8-r;
        charset UTF-8;
        # access log for this virtual host
        access_log  logs/host.access.log  main;
        # serve /img/*, /js/*, /css/* directly from local files instead of going through squid;
        # with many documents this is not recommended, because the squid cache performs better
        #location ~ ^/(img|js|css)/ {
        #    root    /data3/html;
        #    expires 24h;
        #}
        # enable load balancing for "/"
        location / {
            root   html;
            index  index.html index.htm index.aspx;

            proxy_redirect off;
            # pass the real client information to the backend
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # maximum request body size allowed per client request
            client_max_body_size 10m;
            # maximum buffer size for client request bodies:
            # the request is buffered locally before being passed on
            client_body_buffer_size 128k;
            # timeout for the handshake with the backend server
            proxy_connect_timeout 12;
            # after the connection succeeds, how long to wait for the backend's response
            proxy_read_timeout 90;
            # how long the backend has to send all of its data back
            proxy_send_timeout 90;
            # buffer for the first part of the backend response;
            # it usually only needs to hold the response headers
            proxy_buffer_size 4k;
            # number and size of buffers for a single connection
            proxy_buffers 4 32k;
            # buffer size under load; commonly set to proxy_buffers * 2
            proxy_busy_buffers_size 64k;
            # write size for temporary proxy cache files
            proxy_temp_file_write_size 64k;
            proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
            proxy_max_temp_file_size 128m;

            proxy_pass http://localhost;
        }

        #error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root  html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443;
    #    server_name  localhost;

    #    ssl                  on;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_timeout  5m;

    #    ssl_protocols  SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers    HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

4. Double-click C:\nginx\nginx.exe to start nginx.
5. Open a browser and visit http://192.168.16.16
Test: stop the website on S1 and refresh the browser; then stop the website on S2, start the website on S1 again, and refresh the browser. The page should keep loading from whichever server is still up.
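What this failover test exercises is nginx's passive health checking, driven by the max_fails and fail_timeout parameters in the upstream block. Here is a simplified Python model of that behavior — an illustration of the idea only, not nginx's actual implementation (which, among other things, also resets the failure counter per fail_timeout window):

```python
import time

# Simplified model of nginx passive health checking: after max_fails
# failures a backend is skipped for fail_timeout seconds, then retried.
class Backend:
    def __init__(self, addr, max_fails=2, fail_timeout=600):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.down_until = 0.0

    def available(self, now=None):
        now = time.time() if now is None else now
        # skip the backend while it is inside its fail_timeout window
        if self.fails >= self.max_fails and now < self.down_until:
            return False
        return True

    def report_failure(self, now=None):
        now = time.time() if now is None else now
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout

    def report_success(self):
        # a successful response clears the failure state
        self.fails = 0
        self.down_until = 0.0

s1 = Backend("192.168.16.35:8054")
s1.report_failure(now=0)
s1.report_failure(now=1)      # second failure -> marked down
print(s1.available(now=2))    # False: still inside the 600s window
print(s1.available(now=700))  # True: fail_timeout has elapsed
```

With max_fails=2 fail_timeout=600s as in the configuration above, two failed requests take a backend out of rotation for ten minutes.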

Core code 1: add the following inside http { }:

# list of backend servers for load balancing
upstream localhost {
    # distribute requests to the backend servers by client IP hash,
    # so each client sticks to one server (avoids session problems)
    ip_hash;
    # if the same machine has several network interfaces, its route and IP may differ
    # the weight parameter sets the weight: a higher weight means a higher share of requests
    #server localhost:8080 weight=1;
    #server localhost:9080 weight=1;
    server 192.168.1.98:8081 max_fails=2 fail_timeout=600s;
    server 192.168.1.98:8082 max_fails=2 fail_timeout=600s;
}
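The ip_hash directive above pins each client to one backend so sessions survive across requests. A toy Python stand-in for the idea — classic nginx ip_hash keys on the first three octets of the client's IPv4 address; the hash function here is illustrative, not nginx's:

```python
import zlib

# Toy version of ip_hash upstream selection: hash the client's /24
# network prefix so all requests from one client hit the same backend.
def pick_backend(client_ip, backends):
    prefix = ".".join(client_ip.split(".")[:3])  # first three octets
    return backends[zlib.crc32(prefix.encode()) % len(backends)]

backends = ["192.168.1.98:8081", "192.168.1.98:8082"]
a = pick_backend("10.0.0.7", backends)
b = pick_backend("10.0.0.99", backends)
print(a == b)  # True: same /24 network -> same backend
```

The trade-off is that ip_hash ignores weights and can distribute load unevenly when many clients share a network prefix; it is used here purely for session stickiness.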

Core code 2: add the following inside server { }:

# enable load balancing for "/"
location / {
    root   html;
    index  index.html index.htm index.aspx;

    proxy_redirect off;
    # pass the real client information to the backend
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # maximum request body size allowed per client request
    client_max_body_size 10m;
    # maximum buffer size for client request bodies:
    # the request is buffered locally before being passed on
    client_body_buffer_size 128k;
    # timeout for the handshake with the backend server
    proxy_connect_timeout 12;
    # after the connection succeeds, how long to wait for the backend's response
    proxy_read_timeout 90;
    # how long the backend has to send all of its data back
    proxy_send_timeout 90;
    # buffer for the first part of the backend response;
    # it usually only needs to hold the response headers
    proxy_buffer_size 4k;
    # number and size of buffers for a single connection
    proxy_buffers 4 32k;
    # buffer size under load; commonly set to proxy_buffers * 2
    proxy_busy_buffers_size 64k;
    # write size for temporary proxy cache files
    proxy_temp_file_write_size 64k;
    proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;
    proxy_max_temp_file_size 128m;

    proxy_pass http://localhost;
}
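The proxy_set_header lines above forward the real client address to the backend. The way nginx's built-in $proxy_add_x_forwarded_for variable builds its value can be sketched in Python (an illustrative stand-in, not nginx source):

```python
# $proxy_add_x_forwarded_for appends the connecting client's address
# to any X-Forwarded-For header already present on the request, so
# the backend sees the whole proxy chain.
def proxy_add_x_forwarded_for(existing_header, remote_addr):
    if existing_header:
        return existing_header + ", " + remote_addr
    return remote_addr

# first proxy in the chain: no existing header, just the client IP
print(proxy_add_x_forwarded_for(None, "203.0.113.9"))
# -> 203.0.113.9

# second hop: the previous value is kept and the new hop is appended
print(proxy_add_x_forwarded_for("203.0.113.9", "192.168.16.16"))
# -> 203.0.113.9, 192.168.16.16
```

Without these headers the backend would only ever see the load balancer's own IP as the client address.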

What follows is some supplementary material:

Nginx load balancing is a powerful technique that many people find hard to master, so this section walks through the common questions around it. I tried Nginx load balancing today. First: what is Nginx?

Nginx ("engine x") is a high-performance HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. It was developed by Igor Sysoev for Rambler.ru, the most visited site in Russia, where it ran in production for more than two and a half years. Igor releases the source code under a BSD-like license. Although still labeled a beta at the time, Nginx was already known for its stability, rich feature set, simple configuration, and low resource consumption.

First of all, the configuration is simple yet powerful. Seeing is believing, so let's look at how the configuration file is written.

worker_processes  1;
events {
    worker_connections  1024;
}
http {
    upstream myproject {
        # several origin servers as ip:port; port 80 may be omitted
        server 192.168.43.158:80;
        server 192.168.41.167;
    }
    server {
        listen 8080;
        location / {
            proxy_pass http://myproject;
        }
    }
}

So what does Nginx load balancing actually do for you?

If one of the backend servers goes down, Nginx detects it automatically and stops sending it traffic. Even better, it can take response times into account: if server A responds in 3 seconds and server B in 1 second, B can be given roughly three times as much traffic as A (in stock Nginx this response-time-aware behavior comes from add-ons such as the third-party upstream fair module; the default is round-robin). One installation note: during make I got an error complaining about the HTTP rewrite module, so I rebuilt without it.

./configure --without-http_rewrite_module

Then run make and make install.

After installation, create a new configuration file, copy in the contents of the configuration above (changing the IP addresses to your own), save it as load_balance.conf, and start Nginx with it:

/usr/local/nginx/sbin/nginx -c load_balance.conf

Since Nginx was written by a Russian developer, the English documentation is not very complete. For me, Nginx's biggest advantages are its simple configuration and powerful features. I used to configure apache-jk, which is really not for ordinary users: it is too complicated, and it can only load-balance Tomcat.

Nginx has no such restriction: what kind of server sits behind it is completely transparent to it. One complaint: at the time that was written, it could not yet run on Windows (newer versions, like the one used above, can). That's all I have; corrections are welcome if I got anything wrong.
