Nginx load balancing configuration explained with an example

Source: Internet
Author: User
Tags: nginx, server

Server load balancer

First, let's take a brief look at what load balancing is. Taken literally, it means that N servers share the load equally, so that no single server goes down under heavy load while others sit idle. The premise of load balancing is that multiple servers are required, that is, two or more.

Test Environment
Since no real servers are available, this test specifies the host domain name locally and uses three CentOS virtual machines installed in VMware.

Test domain name: a.com

Server A IP address: 192.168.5.149 (master)

Server B IP address: 192.168.5.27

Server C IP address: 192.168.5.126

Deployment ideas
Server A acts as the master server: the domain name resolves directly to server A (192.168.5.149), and server A distributes requests to server B (192.168.5.27) and server C (192.168.5.126).


Domain name resolution

Because this is not a real environment, we simply use a.com as the test domain, so its resolution has to be set in the hosts file.

Open: C:\Windows\System32\drivers\etc\hosts

Add at the end

192.168.5.149 a.com

Save and exit, then open a command prompt and ping the domain to check whether the setting took effect:

 

As the screenshot shows, a.com successfully resolves to 192.168.5.149.

nginx.conf settings of the master server
Open nginx.conf, which is located in the conf directory of the nginx installation directory.

Add the following code to the http block:

upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}

server {
    listen 80;
    server_name a.com;
    location / {
        proxy_pass http://a.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
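The three proxy_set_header lines are worth a note: they pass the original hostname and client address through to the backend, which would otherwise only see the proxy's own address. The sketch below (plain Python, purely illustrative; the function name is made up) mimics how those header values are assembled:

```python
# Illustrative sketch of the headers a reverse proxy sets before
# forwarding, mirroring the three proxy_set_header lines above.
# This is NOT nginx source code; build_proxy_headers is a made-up helper.

def build_proxy_headers(client_ip, host, existing_xff=None):
    """Return the headers the proxy adds for the backend request."""
    # $proxy_add_x_forwarded_for appends the client IP to any
    # X-Forwarded-For header already present on the incoming request.
    xff = f"{existing_xff}, {client_ip}" if existing_xff else client_ip
    return {
        "Host": host,            # $host - originally requested hostname
        "X-Real-IP": client_ip,  # $remote_addr - direct client address
        "X-Forwarded-For": xff,  # $proxy_add_x_forwarded_for
    }

headers = build_proxy_headers("203.0.113.9", "a.com")
print(headers["X-Forwarded-For"])  # 203.0.113.9
```

With these headers in place, the backend can log the real client IP instead of the master server's address.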

Save and restart nginx

Configure nginx.conf on servers B and C
Open nginx.conf and add the following code to the http block:

server {
    listen 80;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}

Save and restart nginx

Test
For the test, I placed a different index.html file on servers B and C, so it is easy to tell which server handles each request.

Open a browser, visit a.com, and refresh repeatedly. All requests are distributed by the master server (192.168.5.149) to server B (192.168.5.27) and server C (192.168.5.126), so load balancing is achieved.
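By default nginx distributes requests across the upstream peers in round-robin order with equal weights. A minimal Python sketch of that scheduling (an illustration only, not nginx internals):

```python
from itertools import cycle

# Minimal sketch of nginx's default round-robin scheduling across the
# two backends defined in the upstream block (equal weights assumed).
backends = ["192.168.5.27:80", "192.168.5.126:80"]
scheduler = cycle(backends)

# Six consecutive requests alternate between server B and server C.
assignments = [next(scheduler) for _ in range(6)]
print(assignments)
```

This alternating pattern is exactly what the browser refreshes show: B, C, B, C, and so on.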

Server B processing page

 

Server C processing page

 

What if one of the servers goes down?
When a server goes down, will access be affected?

Let's look at an example. Based on the setup above, assume server C (192.168.5.126) goes down (since an actual crash is hard to simulate, I simply shut server C off), then visit the site again.

Access results:

 

Although server C (192.168.5.126) is down, website access is unaffected. With load balancing, the whole site is not dragged down by the failure of a single server.
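The behaviour just observed can be sketched as a simple selection rule: a peer marked as down is skipped and the next live peer serves the request. (nginx tracks peer health with its max_fails/fail_timeout mechanism; the helper below is a hypothetical illustration, not nginx's actual algorithm.)

```python
# Hypothetical sketch of failover: when a backend is considered down,
# the balancer skips it and tries the next peer in round-robin order.

def pick_backend(backends, down, start=0):
    """Return the first live backend at or after index `start`."""
    n = len(backends)
    for i in range(n):
        candidate = backends[(start + i) % n]
        if candidate not in down:
            return candidate
    raise RuntimeError("no live backends")

backends = ["192.168.5.27:80", "192.168.5.126:80"]
down = {"192.168.5.126:80"}  # simulate server C being powered off
# C is skipped even when it is next in rotation; B serves the request.
print(pick_backend(backends, down, start=1))  # 192.168.5.27:80
```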

What if I want to configure load balancing for b.com as well?
It's easy: set it up just like a.com. As follows:

Assume that the master server for b.com is also 192.168.5.149, and requests are load-balanced to machines 192.168.5.150 and 192.168.5.151.

Resolve the domain name b.com to 192.168.5.149.

Add the following code to nginx.conf on the master server (192.168.5.149):

upstream b.com {
    server 192.168.5.150:80;
    server 192.168.5.151:80;
}

server {
    listen 80;
    server_name b.com;
    location / {
        proxy_pass http://b.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Save and restart nginx

On machines 192.168.5.150 and 192.168.5.151, open nginx.conf and add the following code to the http block:

server {
    listen 80;
    server_name b.com;
    index index.html;
    root /data0/htdocs/www;
}

Save and restart nginx

After completing these steps, load balancing for b.com is configured.

Does the master server provide services?
In the examples above, the master server only forwards requests to the other servers. Can the master server itself be added to the server list, so that it is not used purely for forwarding but also takes part in serving requests?

For example, in the above case, three servers:

Server A IP address: 192.168.5.149 (master)

Server B IP address: 192.168.5.27

Server C IP address: 192.168.5.126

We resolve the domain name to server A, which forwards requests to servers B and C, so server A only performs forwarding. Now let's have server A provide site services as well.

Let's analyze it first. Adding the master server to the upstream creates two possible situations:

1. The master server forwards the request to another server, which processes it normally;

2. The master server forwards the request to itself, and the request re-enters the balancer to be allocated again. If it keeps being allocated to the local server, an endless loop occurs.
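Situation 2 can be sketched in a few lines: if the balancer forwards a request to its own listening address, the request just re-enters the balancer. The code below is a deliberately simplified worst case (all names are illustrative) where the local peer is picked every time:

```python
# Simplified sketch of situation 2: forwarding to the balancer's own
# port 80 sends the request back into the balancer itself.
# Worst case assumed: the local peer is selected every time.

MASTER = "127.0.0.1:80"

def forward(target, depth=0, max_depth=5):
    """Follow a request; forwarding to the balancer's own address recurses."""
    if depth >= max_depth:
        return "loop detected"
    if target == MASTER:
        # The balancer receives its own forwarded request and balances again.
        return forward(MASTER, depth + 1, max_depth)
    return f"served by {target}"

print(forward(MASTER))             # loop detected
print(forward("192.168.5.27:80"))  # served by 192.168.5.27:80
```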

How can this problem be solved? Since port 80 is already used by the load balancer, the master server cannot use port 80 to serve a.com content itself; a new port is needed. So we add the following code to the master server's nginx.conf:

server {
    listen 8080;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}
 
Restart nginx and enter a.com:8080 in the browser to see whether it can be accessed. The result: it works normally.

 

Since it is accessible, we can now add the master server to the upstream, but with the port changed, as follows:

upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
    server 127.0.0.1:8080;
}
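Why this works can be sketched by distinguishing the two ports: 80 belongs to the balancer, 8080 to the master's content vhost, so a forward to 127.0.0.1:8080 terminates at static content instead of re-entering the balancer (illustrative code, not nginx internals):

```python
# Illustrative sketch: the loop from situation 2 is broken because the
# master now serves content on a different port than the balancer.

BALANCER_PORT = 80   # the load-balancing server block listens here
CONTENT_PORT = 8080  # the master's own site vhost listens here

def handle(target_port):
    """Classify what a forwarded request on the master machine reaches."""
    if target_port == BALANCER_PORT:
        return "re-enters balancer"  # would loop endlessly
    if target_port == CONTENT_PORT:
        return "served by master's site vhost"
    return "served by remote backend"

print(handle(8080))  # served by master's site vhost
```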

Here you can use either the master server's IP 192.168.5.149 or 127.0.0.1; both refer to the local machine.

Restart nginx, then visit a.com to see whether requests are also allocated to the master server.

 

 

The master server now joins in serving requests normally as well.

Final notes
1. Load balancing is not unique to nginx; Apache offers it as well, though its performance may be inferior to nginx's.

2. Multiple servers provide the service, but the domain name resolves only to the master server, so the real backend server IPs cannot be discovered by pinging the domain, which improves security.

 

3. The IP addresses in upstream need not be intranet addresses; public Internet IPs work too. In the classic setup, however, only one LAN IP is exposed: the domain resolves directly to that address, and the master server then forwards requests to intranet server IPs.

4. If a server goes down, the website keeps running normally, because nginx does not forward requests to an IP that is down.
