Load balancing is one of the tasks we need to handle for a high-traffic website. Next, I will introduce how to configure load balancing on an Nginx server; I hope it helps anyone who needs it.
Server load balancer
First, let's take a brief look at what load balancing is. Taken literally, it means N servers share the load equally, so that no single server goes down from overload while another sits idle. The premise of load balancing is that multiple servers are required: two or more.
Test Environment
Because no real servers are available, this test specifies the host domain name directly and installs three CentOS systems in VMware.
Test domain name: a.com
Server A IP address: 192.168.5.149 (master)
Server B IP address: 192.168.5.27
Server C IP address: 192.168.5.126
Deployment ideas
Server A serves as the master server; the domain name resolves directly to server A (192.168.5.149), and server A distributes the load to server B (192.168.5.27) and server C (192.168.5.126).
Domain name resolution
Because this is not a real environment, the domain a.com is only used for testing, so a.com's resolution must be set in the hosts file.
Open: C:\Windows\System32\drivers\etc\hosts
Add at the end
192.168.5.149 a.com
Save and exit, then open a command prompt and ping a.com to check whether the setting worked.
As shown above, a.com successfully resolves to 192.168.5.149.
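For reference, on Linux or macOS the equivalent mapping goes into /etc/hosts, and the entry format is the same:

```
# /etc/hosts (Linux/macOS) — same format as the Windows hosts file
192.168.5.149 a.com
```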
Master server nginx.conf settings
Open nginx.conf; the file is located in the conf directory of the Nginx installation directory.
Add the following code to the http segment:
upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}

server {
    listen 80;
    server_name a.com;
    location / {
        proxy_pass http://a.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
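If servers B and C have different capacities, the upstream block can also assign weights using Nginx's standard weight parameter; the 2:1 ratio below is only an illustration:

```nginx
upstream a.com {
    # weight controls the share of requests each backend receives
    server 192.168.5.126:80 weight=2;  # receives roughly two of every three requests
    server 192.168.5.27:80  weight=1;
}
```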
Save and restart nginx
Configure nginx.conf on servers B and C
Open nginx.conf and add the following code to the http segment:
server {
    listen 80;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}
Save and restart nginx
Test
For the test, I placed a different index.html file on servers B and C to tell apart which server handled each request.
Open a browser, visit a.com, and refresh repeatedly: all requests are distributed by the master server (192.168.5.149) to server B (192.168.5.27) and server C (192.168.5.126). Load balancing is achieved.
Server B processing page
Server C processing page
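Note that by default Nginx alternates requests round-robin, which is why refreshing switches between the B and C pages. If an application needs the same visitor to stay on one backend (session stickiness), Nginx's ip_hash directive can be added to the upstream block; a sketch:

```nginx
upstream a.com {
    ip_hash;                  # requests from the same client IP go to the same backend
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}
```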
What if one of the servers goes down?
When a server goes down, will access be affected?
Let's look at an example. Based on the setup above, assume server C (192.168.5.126) goes down (since I cannot simulate a real crash, I simply shut down server C), then visit the site again.
Access results:
We find that although server C (192.168.5.126) is down, website access is unaffected. In load-balancing mode, a single downed server will not drag down the whole site.
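This failover behavior can be tuned with the standard max_fails and fail_timeout server parameters; the values below are illustrative:

```nginx
upstream a.com {
    # after 3 failed attempts, skip this backend for 30 seconds
    server 192.168.5.126:80 max_fails=3 fail_timeout=30s;
    server 192.168.5.27:80  max_fails=3 fail_timeout=30s;
}
```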
What if I want to configure load balancing for b.com as well?
It's easy, just like the a.com setup. As follows:
Assume the master server IP for b.com is 192.168.5.149, and the load is distributed to machines 192.168.5.150 and 192.168.5.151.
Resolve the domain name b.com to 192.168.5.149.
Add the following code to nginx.conf on the master server (192.168.5.149):
upstream b.com {
    server 192.168.5.150:80;
    server 192.168.5.151:80;
}

server {
    listen 80;
    server_name b.com;
    location / {
        proxy_pass http://b.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Save and restart nginx
On machines 192.168.5.150 and 192.168.5.151, open nginx.conf and add the following code at the end:
server {
    listen 80;
    server_name b.com;
    index index.html;
    root /data0/htdocs/www;
}
Save and restart nginx
After completing these steps, the b.com load-balancing configuration is done.
Does the master server provide services?
In the preceding examples, load balancing is always applied to other servers. Can the master server itself be added to the server list, so that it is not used merely as a forwarder but also takes part in serving the site?
For example, in the above case, three servers:
Server a ip address: 192.168.5.149 (master)
Server B IP address: 192.168.5.27
C server IP address: 192.168.5.126
We resolve the domain name to server A, which forwards requests to servers B and C; server A thus only performs forwarding. Now let server A provide site services too.
Let's analyze this first. If you add the master server to upstream, there are two possible situations:
1. The master server forwards a request to another IP, which handles it normally;
2. The master server forwards the request to itself, which then distributes it again; if the request keeps being assigned to the local server, an endless loop occurs.
How can this problem be solved? Because port 80 is already used for the load-balancing listener, the master server cannot handle a.com requests on port 80; a new port is needed. So we add the following code to the master server's nginx.conf:
server {
    listen 8080;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}
Restart nginx and enter a.com:8080 in the browser to see whether it is accessible. The result: it works normally.
Since it can be accessed normally, we can add the master server to upstream, but the port must be changed, as follows:
upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
    server 127.0.0.1:8080;
}
Here you can add either the master server's IP 192.168.5.149 or 127.0.0.1; both refer to the local machine.
Restart Nginx, and then visit a.com to see if it will be allocated to the master server.
The master server can now participate in serving the site as well.
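Putting the pieces together, the relevant part of the master server's nginx.conf now looks roughly like this (paths and IPs as in the example above):

```nginx
http {
    upstream a.com {
        server 192.168.5.126:80;
        server 192.168.5.27:80;
        server 127.0.0.1:8080;   # the master itself, on the alternate port
    }

    # load-balancing front end on port 80
    server {
        listen 80;
        server_name a.com;
        location / {
            proxy_pass http://a.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    # the master's own site content, on port 8080
    server {
        listen 8080;
        server_name a.com;
        index index.html;
        root /data0/htdocs/www;
    }
}
```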
Final notes
1. Load balancing is not unique to Nginx; Apache also offers it, though its performance may not match Nginx's.
2. Multiple servers provide the service, but the domain name resolves only to the master server, and the real servers' IP addresses cannot be obtained by ping, which improves security.
3. The IPs in upstream need not be intranet addresses; public IPs work too. In the classic setup, however, only one IP on the LAN is exposed, the domain name resolves to it, and the master server forwards to the intranet servers' IPs.
4. When a server goes down, the website keeps running normally, and Nginx will not forward requests to the downed IP.
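Related to point 4, Nginx can also mark a server as a hot standby with the backup parameter, so it only receives traffic when the primary servers are unavailable; a sketch:

```nginx
upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80 backup;  # used only if 192.168.5.126 is unavailable
}
```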
Address: http://www.php100.com/html/program/nginx/2013/0905/5525.html