PHP Interview Question 7: How to Configure Load Balancing in Nginx

Source: Internet
Author: User

This article covers PHP interview question 7: how to configure load balancing in Nginx. It has some reference value and is shared here for anyone who needs it.

Load Balancing


Nginx supports five load-balancing methods:

1) Round robin (default)
Each request is assigned to a different back-end server in order; if a back-end server goes down, it is automatically removed from rotation.
2) weight
Specifies the polling probability, proportional to each server's weight; useful when back-end server performance is uneven.
3) ip_hash
Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same back-end server, which solves the session-persistence problem.
4) fair (third party)
Requests are assigned according to back-end response time; servers with shorter response times are preferred.
5) url_hash (third party)
Requests are assigned according to a hash of the requested URL, so the same URL always reaches the same back-end server; useful when the back ends cache content.
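As a sketch of how the two third-party methods look in configuration: fair requires the third-party nginx-upstream-fair module to be compiled in, while the behavior of the old url_hash module is available natively in Nginx 1.7.2+ via the hash directive. The pool names and addresses below are examples only:

```nginx
upstream fair_pool {
    fair;                     # requires the third-party nginx-upstream-fair module
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
}

upstream urlhash_pool {
    hash $request_uri;        # built into Nginx 1.7.2+; replaces the third-party url_hash module
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
}
```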

Configuration method:

Open the nginx.conf file.

Add an upstream block inside the http block:

upstream webname {
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
}

Here webname is a name you choose yourself; you later use it in the proxy_pass URL to reach the group. With nothing else specified, as in the example above, the default is round robin: the first request goes to the first server, the second request to the second server, and so on in turn.

upstream webname {
    server 192.168.0.1:8080 weight=2;
    server 192.168.0.2:8080 weight=1;
}

Weight is also easy to understand: the higher the weight, the higher the probability a server is chosen. In the example above, server 1 is accessed twice for every one access to server 2.
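As an illustration only (not part of the Nginx configuration), the 2:1 split above can be reproduced with a small Python sketch of the smooth weighted round-robin algorithm that Nginx uses internally. The function name and structure are mine, not Nginx's actual code:

```python
# Sketch of smooth weighted round-robin, the selection algorithm Nginx uses.
# Illustrative only; the names here are not from the Nginx source.

def smooth_wrr(servers, n):
    """servers: list of (name, weight) pairs; returns the first n picks."""
    current = {name: 0 for name, _ in servers}
    total = sum(weight for _, weight in servers)
    picks = []
    for _ in range(n):
        # Every round, each server gains its weight...
        for name, weight in servers:
            current[name] += weight
        # ...then the highest current value wins and "pays back" the total.
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

picks = smooth_wrr([("192.168.0.1:8080", 2), ("192.168.0.2:8080", 1)], 6)
print(picks)  # server 1 appears twice as often as server 2
```

Note how the "smooth" variant interleaves picks (1, 2, 1, 1, 2, 1, ...) instead of sending two requests to server 1 back to back.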

upstream webname {
    ip_hash;
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
}

The ip_hash configuration is also very simple: just add one line, and requests from the same IP will always go to the same server.
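To illustrate the idea behind ip_hash (not Nginx's actual code), here is a rough Python sketch. Nginx hashes the first three octets of an IPv4 client address, so clients in the same /24 network land on the same back end; the function name and the CRC32 hash are stand-ins:

```python
import zlib

def ip_hash_pick(client_ip, servers):
    """Rough sketch of ip_hash: the same client IP always maps to the same server."""
    # Nginx uses the first three octets of an IPv4 address as the hash key.
    key = ".".join(client_ip.split(".")[:3])
    # zlib.crc32 stands in for Nginx's internal hash function.
    return servers[zlib.crc32(key.encode()) % len(servers)]

backends = ["192.168.0.1:8080", "192.168.0.2:8080"]
# Repeated requests from one client always hit the same server:
print(ip_hash_pick("10.0.0.7", backends) == ip_hash_pick("10.0.0.7", backends))  # True
```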

Then configure the proxy inside the server block:

location /name {
    proxy_pass http://webname/name/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

In proxy_pass, the upstream name webname replaces the original IP address.

This basically completes the load balancing configuration.

The following is the master/backup configuration; it is still set in the upstream block:

upstream webname {
    server 192.168.0.1:8080;
    server 192.168.0.2:8080 backup;
}

With one node set as backup, all requests normally go to server 1; only when server 1 is down or busy does server 2 receive traffic.
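Related to master/backup behavior, here is a sketch of the standard failure-detection parameters. max_fails and fail_timeout are standard server directives; the values below are arbitrary examples:

```nginx
upstream webname {
    # After 3 failed attempts within 30s, the server is considered
    # unavailable for the next 30s (defaults: max_fails=1, fail_timeout=10s).
    server 192.168.0.1:8080 max_fails=3 fail_timeout=30s;
    server 192.168.0.2:8080 backup;   # receives traffic only when all primary servers fail
}
```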

upstream webname {
    server 192.168.0.1:8080;
    server 192.168.0.2:8080 down;
}

A node set to down does not participate in load balancing at all.

A working example

Load balancing is something any high-traffic site has to do. Below is a walkthrough of the Nginx load-balancing configuration that I hope will help anyone who needs it.

Load Balancing

Let's start with a quick look at what load balancing is. Interpreted literally, it means sharing the load evenly across n servers, rather than one server being overloaded while another sits idle. The prerequisite for load balancing is therefore having more than one server, i.e., two or more machines.

Test environment
Since there are no real servers, this test points the domain name via the hosts file and uses three CentOS virtual machines installed in VMware.

Test domain name: a.com

Server A IP: 192.168.5.149 (primary)

Server B IP: 192.168.5.27

Server C IP: 192.168.5.126

Deployment Ideas
Server A acts as the primary server: the domain name resolves directly to server A (192.168.5.149), which load-balances requests to server B (192.168.5.27) and server C (192.168.5.126).

Domain Name resolution

Because this is not a real environment, a.com is only used as a test domain, so its resolution must be set in the hosts file.

Open: C:\Windows\System32\drivers\etc\hosts

Add at the end

192.168.5.149    a.com

Save and exit, then open a command prompt and ping the domain to check that it resolves correctly.

As shown above, a.com now successfully resolves to 192.168.5.149.

A server nginx.conf settings
Open nginx.conf; the file is in the conf directory of the Nginx installation directory.

Add the following code to the http section:

upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
}

server {
    listen 80;
    server_name a.com;
    location / {
        proxy_pass http://a.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Save and restart Nginx.

B and C server nginx.conf settings
Open nginx.conf and add the following code to the http section:

server {
    listen 80;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}

Save and restart Nginx.

Test
When accessing a.com, to tell which server handled a request, I put different content in the index.html files on servers B and C to distinguish them.

Open a browser and visit a.com. Refreshing shows that the primary server (192.168.5.149) distributes requests to server B (192.168.5.27) and server C (192.168.5.126), achieving the load-balancing effect.

B Server Processing page

C Server Processing page

What if one of the servers goes down?
When a server goes down, does it affect access?

Based on the example above, suppose the server C machine (192.168.5.126) goes down (since an outage can't really be simulated, I simply shut down server C), then visit the site again.

Access results:

Although server C (192.168.5.126) is down, website access is unaffected. With load balancing, there is no fear that one machine's outage will drag down the entire site.

What if b.com also needs load balancing?
Very simple; set it up the same way as a.com. As follows:

Suppose b.com's primary server IP is 192.168.5.149, load-balanced across the 192.168.5.150 and 192.168.5.151 machines.

The domain b.com is resolved to the IP 192.168.5.149.

Add the following code to the nginx.conf of the primary server (192.168.5.149):

upstream b.com {
    server 192.168.5.150:80;
    server 192.168.5.151:80;
}

server {
    listen 80;
    server_name b.com;
    location / {
        proxy_pass http://b.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Save and restart Nginx.

Set up Nginx on the 192.168.5.150 and 192.168.5.151 machines: open nginx.conf and add the following code at the end:

server {
    listen 80;
    server_name b.com;
    index index.html;
    root /data0/htdocs/www;
}

Save and restart Nginx.

After completing these steps, the b.com load-balancing configuration is done.

Can't the primary server provide services too?
In the examples above, the primary server only load-balances requests to other servers. The primary server itself can also be added to the server list, so that a machine is not wasted purely on forwarding and also participates in serving content.

Take the three servers from the case above:

Server A IP: 192.168.5.149 (primary)

Server B IP: 192.168.5.27

Server C IP: 192.168.5.126

We resolve the domain to server A, which forwards requests to servers B and C, so server A only performs forwarding. Now let's have server A provide site services as well.

First, consider the two scenarios that can occur when the primary server is added to upstream:

1. The primary server forwards the request to another IP, and that server handles it normally;

2. The primary server forwards the request to its own IP, where it re-enters the primary server and is assigned again; if it keeps being assigned to this machine, an infinite loop results.

How to solve this? Port 80 is already occupied listening for load-balanced traffic, so the primary server can no longer use port 80 to handle a.com requests itself; a new port is needed. We therefore add the following code to the primary server's nginx.conf:

server {
    listen 8080;
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}

Restart Nginx, then enter a.com:8080 in the browser to see whether it is accessible. The result: it can be accessed normally.

Since it is accessible, we can add the primary server to upstream, but with the port changed, as in the following code:

upstream a.com {
    server 192.168.5.126:80;
    server 192.168.5.27:80;
    server 127.0.0.1:8080;
}

Here the primary server can be added as either 192.168.5.149 or 127.0.0.1; both refer to the machine itself.
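Putting the fragments together, the primary server's nginx.conf http section would contain roughly the following (a sketch assembled from the pieces above, not a verbatim original):

```nginx
upstream a.com {
    server 192.168.5.126:80;          # server C
    server 192.168.5.27:80;           # server B
    server 127.0.0.1:8080;            # the primary server itself, on a second port
}

server {
    listen 80;                        # load-balancing front end
    server_name a.com;
    location / {
        proxy_pass http://a.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 8080;                      # serves local content, avoiding the forwarding loop on port 80
    server_name a.com;
    index index.html;
    root /data0/htdocs/www;
}
```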

Restart Nginx, then visit a.com again to see whether requests are also assigned to the primary server.

The primary server now also serves requests normally.

Final notes
First, load balancing is not unique to Nginx; the famous Apache has it too, though its performance may not match Nginx's.

Second, multiple servers provide the service, but the domain name resolves only to the primary server, and the real servers' IPs cannot be discovered by ping, which adds a measure of security.

Third, the IPs in upstream need not be intranet addresses; public IPs also work. The classic setup, however, exposes one LAN IP to the public network, resolves the domain directly to that IP, and has the primary server forward to intranet server IPs.

Fourth, if one server goes down, the site keeps running normally, since Nginx will not forward requests to a downed IP.
