Load Balancing configuration and usage

Source: Internet
Author: User
Tags: nginx, server, load balancing

Understanding Nginx Load Balancing

Nginx is a lightweight, high-performance web server. It is mainly used for two things:

As an HTTP server (like Apache)
As a reverse proxy server providing load balancing

Nginx can now be seen everywhere; outage pages often display the word "nginx", which shows that more and more users are adopting it thanks to its high performance, simple configuration, and open-source nature.

First, as an HTTP server combined with the PHP-FPM process: nginx itself does not parse PHP. It simply acts as a server, accepting requests from clients; if a request is for PHP, it hands it to the PHP process and sends the result of the PHP processing back to the client. This is very simple: install nginx and PHP-FPM, configure their respective configuration files, and start them. The principle of operation is explained in the following paragraph:

Nginx does not support directly invoking or parsing external programs; all external programs (including PHP) must be called through the FastCGI interface. The FastCGI interface under Linux is a socket (either a file socket or an IP socket). To invoke a CGI program, you also need a FastCGI wrapper (a wrapper can be understood as a program used to start another program), which is bound to a fixed socket, such as a port or a file socket. When nginx sends a CGI request to that socket through the FastCGI interface, the wrapper receives the request and then spawns a new thread, which invokes the interpreter or external program to process the script and reads the returned data. The wrapper then passes the returned data back through the FastCGI interface, along the fixed socket, to nginx; finally, nginx sends the returned data to the client. This is the whole process of nginx+FastCGI, as shown.
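The nginx+PHP-FPM setup described above can be sketched with a minimal server block. This is an illustrative configuration, not taken from the article: it assumes PHP-FPM is listening on its common default TCP socket 127.0.0.1:9000, and the domain name and document root are placeholders:

```nginx
server {
    listen 80;
    server_name example.com;                # placeholder domain
    root /data0/htdocs/www;
    index index.php index.html;

    location ~ \.php$ {
        # Pass PHP requests to the PHP-FPM process via the FastCGI interface
        fastcgi_pass 127.0.0.1:9000;        # assumes PHP-FPM's default TCP socket
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

Nginx serves static files itself; only requests matching `\.php$` are handed over the socket to the FastCGI process, exactly as described above.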

A reverse proxy is the opposite of a forward proxy. With a forward proxy, you deliberately go through proxy A to reach resource B more conveniently, accessing B indirectly via A; the characteristic is that the user knows which site they ultimately want to visit. With a reverse proxy, the user does not know what sits behind the proxy server. The servers that actually handle requests live on the intranet, and the external network can only reach the reverse proxy server, which greatly improves security.


Load balancing is something every high-traffic site has to do. Below I introduce the Nginx load balancing configuration method; I hope it helps those who need it.

Load Balancing

Let's start with a quick look at what load balancing is. Interpreted literally, it means sharing the load evenly across N servers, rather than one server bearing a high load while another sits idle. The premise of load balancing, then, is that it requires more than one server: two or more machines.

Test environment
Because there are no real servers, this test maps the domain name directly on the host machine and installs three CentOS systems in VMware.

Test domain name: a.com

A server IP: 192.168.5.149 (primary)

B server IP: 192.168.5.27

C server IP: 192.168.5.126

Deployment Ideas
The A server acts as the primary server. The domain name resolves directly to the A server (192.168.5.149), which load-balances requests to the B server (192.168.5.27) and the C server (192.168.5.126).


Domain Name resolution

Because this is not a real environment, the domain name a.com is used just for testing, so a.com's resolution can only be set in the hosts file.

Open: C:\Windows\System32\drivers\etc\hosts

Add at the end

192.168.5.149 a.com

Save and exit, then open a command prompt and ping a.com to see whether the setting succeeded.

From the above, we have successfully resolved a.com to the IP 192.168.5.149.

A server nginx.conf settings
Open nginx.conf; the file is located in the conf directory of the Nginx installation directory.

Add the following code to the http section:

upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}

server {
    listen 80;
    server_name a.com;
    location / {
        proxy_pass http://a.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Save and restart Nginx.
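The upstream block above uses Nginx's default round-robin distribution. As a side note, the standard upstream module also supports weighting and client-IP affinity; a sketch of those options (the weights below are arbitrary examples, not part of the original setup):

```nginx
upstream a.com {
    # ip_hash;                           # uncomment to pin each client IP to one backend
    server 192.168.5.126:80 weight=2;    # receives roughly twice as many requests
    server 192.168.5.27:80  weight=1;
}
```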

B, c server nginx.conf settings
Open nginx.conf and add the following code to the http section:

server {
    listen 80;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}

Save and restart Nginx.

Test
To tell which server handles each request when accessing a.com, I put different content in the index.html files on the B and C servers to distinguish them.

Open a browser and visit a.com. Refreshing repeatedly, you will find that the requests arriving at the primary server (192.168.5.149) are all distributed to the B server (192.168.5.27) and the C server (192.168.5.126), achieving the load balancing effect.

B Server Processing page

C Server Processing page

What if one of the servers goes down?
When a server goes down, does it affect access?

Based on the example above, suppose the C server (192.168.5.126) goes down (since I cannot simulate a real outage, I simply shut down the C server), and then visit again.

Access results:

We find that although the C server (192.168.5.126) is down, website access is unaffected. In load-balancing mode, there is no fear of one machine's outage dragging down the entire site.
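This failover behavior comes from Nginx's passive health checks, which can be tuned per backend with the standard max_fails and fail_timeout parameters. For example (the values below are illustrative, not from the original setup):

```nginx
upstream a.com {
    # after 3 failed attempts, consider the server down for 30 seconds
    server 192.168.5.126:80 max_fails=3 fail_timeout=30s;
    server 192.168.5.27:80  max_fails=3 fail_timeout=30s;
    # a backup server is used only when all regular servers are unavailable
    # server 192.168.5.100:80 backup;   # hypothetical spare machine
}
```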

What if b.com also needs load balancing?
Very simple: set it up the same way as a.com. As follows:

Assume b.com's primary server IP is 192.168.5.149, load balanced across the 192.168.5.150 and 192.168.5.151 machines.

The domain name b.com is resolved to the IP 192.168.5.149.

Add the following code to the nginx.conf of the primary server (192.168.5.149):

upstream b.com {
    server 192.168.5.150:80;
    server 192.168.5.151:80;
}

server {
    listen 80;
    server_name b.com;
    location / {
        proxy_pass http://b.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Save and restart Nginx.

Set up Nginx on the 192.168.5.150 and 192.168.5.151 machines: open nginx.conf and add the following code at the end:

server {
    listen 80;
    server_name b.com;
    index index.html;
    root /data0/htdocs/www;
}

Save and restart Nginx.

After completing these steps, the b.com load balancing configuration is in place.

Should the primary server only forward?
In the examples above, the primary server always load-balances to the other servers. But the primary server itself can also be added to the server list, so that a machine is not wasted purely on forwarding and can also take part in serving requests.

If the above case three servers:

A server IP: 192.168.5.149 (primary)

B server IP: 192.168.5.27

C server IP: 192.168.5.126

We resolve the domain name to the A server, which forwards to the B and C servers; so the A server only performs forwarding. Now let's make the A server provide site services as well.

Let's first analyze the two scenarios that can occur if the primary server is added to upstream:

1. The primary server forwards the request to another IP, and that server processes it normally;

2. The primary server forwards the request to its own IP, which re-enters the primary server's distribution; if it keeps being assigned to itself, this causes an infinite loop.

How do we solve this? Since port 80 is already used to listen for load-balanced traffic, the server can no longer use port 80 to process a.com requests itself; a new port is needed. So we add the following code to the primary server's nginx.conf:

server {
    listen 8080;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}

Restart Nginx, then enter a.com:8080 in the browser to see whether it is accessible. It can be accessed normally.

Since it is accessible, we can add the primary server to upstream, but with the port changed, as in the following code:

upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
    server 127.0.0.1:8080;
}

For the primary server's entry here, either the IP 192.168.5.149 or 127.0.0.1 can be used; both refer to the machine itself.

Restart Nginx, then visit a.com to see whether requests get assigned to the primary server.

The primary server can also join in serving requests normally.

Finally

1. Load balancing is not unique to Nginx; the famous Apache has it too, though its performance may not match Nginx's.

2. Multiple servers provide the service, but the domain name resolves only to the primary server, and the real servers' IPs cannot be obtained by ping, which adds some security.

3. The IPs in upstream do not have to be intranet addresses; external IPs also work. The classic setup, however, exposes a single LAN IP to the internet, resolves the domain name directly to that IP, and has the primary server forward requests to intranet server IPs.

4. If one server goes down, the site keeps running normally, because Nginx will not forward requests to a downed IP.

