I used to think nginx could only be set up on Linux, and only recently did I come across material showing that nginx also runs on Windows. When your website's traffic keeps growing, a single server eventually cannot withstand the load. What can you do? Add more servers and balance the load across them. But dedicated load-balancing hardware such as F5 is expensive, so the free nginx becomes a good choice. nginx is already used as the HTTP server by many portals and high-traffic websites, so it is well proven.
Lab environment: (2 servers)
First:
System: win2003
Nginx: nginx/Windows-0.8.32
IP: 192.168.0.51
Environment: Local
Second:
System: win2003
IP: 192.168.0.52
Environment: Remote
Note:
In this test, nginx runs locally (192.168.0.51), that is, on the server the domain name is bound to. IIS on this server cannot use port 80, because nginx needs port 80. (For convenience, I added an entry to the local hosts file mapping the test domain: 192.168.0.51 www.g.cn)
The nginx download address is as follows:
Nginx download: http://nginx.net/
Download it, decompress the package to C:\, and rename the directory to nginx.
Everything is ready. Start the experiment:
No. 1:
Create a website in IIS on the local server (192.168.0.51) and bind it to port 808, for example:
IIS website binding settings
No. 2:
Create a website in IIS on the remote server (192.168.0.52) and bind it to port 80, for example:
No. 3:
Now that IIS is set up on both servers, configure nginx to balance the load between the two websites. Open the following file:
C:\nginx\conf\nginx.conf
1. Find the server { block and add the following above it (upstream must sit directly in the http { } context):
upstream www.g.cn {
    server 192.168.0.51:808;
    server 192.168.0.52:80;
}
(These are the addresses of the two IIS websites that share the load.)
2. Locate:
location / {
    root html;
    index index.html index.htm;
}
and change it to:
location / {
    proxy_pass http://www.g.cn;
    proxy_redirect default;
}
3. Find:
server {
    listen 80;
    server_name localhost;
and change it to:
server {
    listen 80;
    server_name 192.168.0.51;
(This makes nginx listen on port 80 for requests addressed to the server the domain is bound to.)
The configuration really is that simple. The figure below shows the result of the three steps above:
Load configuration Diagram
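In case the figure does not render, the result of steps 1-3 boils down to the following sketch of nginx.conf (only the changed parts are shown; everything else stays as shipped in the stock file):

```nginx
http {
    # Step 1: the two IIS websites that share the load
    upstream www.g.cn {
        server 192.168.0.51:808;
        server 192.168.0.52:80;
    }

    server {
        # Step 3: listen on port 80 for the bound address
        listen      80;
        server_name 192.168.0.51;

        # Step 2: forward every request to the upstream group
        location / {
            proxy_pass     http://www.g.cn;
            proxy_redirect default;
        }
    }
}
```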
No. 4:
Everything is configured; now start nginx.
Open a command prompt (cmd), change to the C:\nginx directory, and run the nginx command, for example:
Start nginx
At this point there are two nginx.exe processes in the system process list, for example:
System nginx Process
(To stop nginx, run nginx -s stop.)
No. 5:
With the configuration above in place, let's check the load-balancing effect:
On the local server (192.168.0.51), open IE and go to http://192.168.0.51.
Result of opening the site the first time:
First visit website Diagram
Refresh the page and the result is as follows:
View the website again
OK. The test is successful.
This test shows that load balancing a website is not difficult and needs no extra hardware. Also, nginx performs better on Linux than on Windows, so you can run nginx on Linux while keeping the .NET website in IIS on Windows servers.
If traffic is very high, dedicate one server to nginx and run the website program (the same code on every server) on the others, so no single machine is overloaded. If that is still not enough, split some sections of the site onto second-level subdomains and load-balance those as well.
nginx session stickiness (load balancing based on ip_hash)
nginx can balance load based on the client IP address: setting ip_hash in the upstream block sends clients in the same class-C address range to the same backend server, which only changes if that backend goes down.
Currently, nginx upstream supports five allocation methods.
1. Round Robin (default)
Requests are distributed to the backend servers one by one in order; if a backend goes down, it is removed automatically.
upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
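As a rough sketch of the idea (in Python, not nginx internals), round robin simply walks the server list in a loop:

```python
from itertools import cycle

# Hypothetical backend list mirroring the upstream block above.
backends = ["192.168.0.14", "192.168.0.15"]

# Round robin: each request goes to the next server in order,
# wrapping around when the list is exhausted.
picker = cycle(backends)

assignments = [next(picker) for _ in range(4)]
# The four requests alternate between the two backends.
```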
2. weight
Specifies the polling weight: weight is proportional to the share of requests a server receives. Use it when the backend servers' performance is uneven.
upstream backserver {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}
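A simplified sketch of weighted selection (nginx's smooth weighted round robin spreads requests more evenly over time, but the long-run ratio is the same): a backend with weight w gets w out of every sum-of-weights requests.

```python
# Hypothetical weights mirroring the upstream block above.
servers = {"192.168.0.14": 10, "192.168.0.15": 10}

# Naive expansion: a server with weight w appears w times in the rotation.
rotation = [addr for addr, w in servers.items() for _ in range(w)]

# With equal weights, each backend receives half of the requests.
share = rotation.count("192.168.0.14") / len(rotation)
```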
3. ip_hash
Each request is assigned according to a hash of the client IP address, so a given visitor always reaches the same backend server, which solves the session problem.
upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
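A sketch of the ip_hash idea (this is an illustration, not nginx's actual hash function): the key is the client's class-C network prefix, so all clients in the same /24 land on the same backend.

```python
import zlib

# Hypothetical backends mirroring the upstream block above.
backends = ["192.168.0.14:88", "192.168.0.15:80"]

def pick_backend(client_ip: str) -> str:
    # Key on the first three octets (the class-C network),
    # then map the hash onto the backend list.
    key = ".".join(client_ip.split(".")[:3])
    return backends[zlib.crc32(key.encode()) % len(backends)]
```

Two clients in the same /24 always get the same backend, which is exactly the session-stickiness property described above.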
4. fair (third party)
Requests are allocated based on the response time of the backend servers; servers with shorter response times are preferred.
upstream backserver {
    server server1;
    server server2;
    fair;
}
5. url_hash (third-party)
Requests are assigned by a hash of the requested URL, so each URL is always directed to the same backend server. This is effective when the backend servers are caches.
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
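The same sketch applied to url_hash (again an illustration, not the module's real implementation): the hash key is the request URI instead of the client address, so every request for a given URL hits the same cache and each cache ends up holding a disjoint set of URLs.

```python
import zlib

# Hypothetical cache backends mirroring the upstream block above.
caches = ["squid1:3128", "squid2:3128"]

def pick_cache(request_uri: str) -> str:
    # Hash the URI and map it onto the cache list; the mapping is
    # deterministic, so the same URL always reaches the same cache.
    return caches[zlib.crc32(request_uri.encode()) % len(caches)]
```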
In the location block that forwards requests, add:
proxy_pass http://backserver/;
upstream backserver {
    ip_hash;
    server 127.0.0.1:9090 down;     (down marks this server as temporarily out of the rotation)
    server 127.0.0.1:8080 weight=2; (weight defaults to 1; the larger the value, the larger the share of requests)
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;   (backup receives requests only when all non-backup servers are down or busy)
}
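A sketch of the down/backup semantics (weights ignored for brevity; the names are illustrative, not nginx API): backups are candidates only when no primary server is available.

```python
# Hypothetical server table mirroring the upstream block above.
servers = [
    {"addr": "127.0.0.1:9090", "down": True},
    {"addr": "127.0.0.1:8080", "weight": 2},
    {"addr": "127.0.0.1:6060"},
    {"addr": "127.0.0.1:7070", "backup": True},
]

def candidates(alive: set) -> list:
    # Primaries: not marked down, not a backup, and currently reachable.
    primaries = [s["addr"] for s in servers
                 if not s.get("down") and not s.get("backup")
                 and s["addr"] in alive]
    if primaries:
        return primaries
    # Fall back to backups only when every primary is unavailable.
    return [s["addr"] for s in servers
            if s.get("backup") and s["addr"] in alive]
```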
max_fails: the number of allowed failed requests, default 1. When it is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: how long the server is paused after max_fails failures.
Generally, load balancing requires session sharing among the backend web servers; otherwise a user who logs in on one server may be sent to another server on a later request and lose the login state.
Reading the nginx documentation today, I found that nginx can balance load based on the client IP address: set ip_hash in the upstream block, and clients in the same class-C address range are sent to the same backend server, which changes only when that backend goes down.
The original article is as follows:
The key for the hash is the class-C network address of the client. This method guarantees that the client request will always be forwarded to the same server. But if this server is considered inoperative, then the request of this client will be transferred to another server. This gives a high probability that clients will always connect to the same server.
In other words, I can run two forums on two servers that share one backend database without worrying about session sharing, as long as ip_hash is enabled. Clients on the Internet come from widely distributed IP addresses, so logins and posts stick to one fixed backend server; the wider the distribution of client addresses, the more even the balancing. Of course, if all users come from the same class-C network, it is useless.
An environment was set up on the Virtual Machine for testing:
Three nginx instances are installed, listening on ports 80, 81, and 82.
The nginx on port 80 acts as the load-balancing front end, configured with the other two as backends:
upstream test {
    ip_hash;
    server 127.0.0.1:81;
    server 127.0.0.1:82;
}
On the port-81 nginx, write a simple HTML file containing 1; on the port-82 nginx, write a file with the same name containing 2.
With ip_hash enabled, refreshing http://192.168.1.33/index.html always shows 1. Without ip_hash, 1 and 2 alternate.