Research on configuration and deployment of the high-performance web server Nginx: the upstream load-balancing module

Source: Internet
Author: User

Nginx's HttpUpstreamModule provides simple load balancing across a set of backend servers. One of the simplest upstream configurations is written as follows:


upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

location / {
    proxy_pass http://backend;
}


1. Backend Servers


Backend servers are set inside the upstream block. Each one can be specified by IP address and port, by domain name, or by a UNIX socket. When a domain name resolves to multiple addresses, all of those addresses are used as backends. The following examples illustrate this:


server blog.csdn.net/poechant;
server 145.223.156.89:8090;
server unix:/tmp/backend3;


The first backend is specified by domain name, the second by IP address and port number, and the third by UNIX socket.


2. Load Balancing Strategy


Nginx provides three ways of distributing requests: round-robin polling, client IP hash (ip_hash), and weighted assignment (weight).


By default, Nginx uses round-robin polling as its load-balancing policy, but that is not always what you want. Suppose a series of requests within a given period are all initiated by the same user, Michael: his first request might reach backend2, the next backend3, then backend1, backend2, backend3, and so on. In most scenarios this is inefficient. For this reason, Nginx lets you hash the IP addresses of clients such as Michael, Jason, and David, so that every request from a given client is sent to the same backend server. It is used as follows:


upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}


In this strategy, the key used for the hash is the class C portion of the client's IPv4 address: the first three octets, which identify the network, while the fourth octet (the host part) is ignored. This guarantees that every request from a given client reaches the same backend. Of course, if the backend a client hashes to is currently unavailable, the request is transferred to another backend.


A keyword often used together with ip_hash is down. When a server is temporarily out of service, you can mark it with down, and the marked server will not receive requests. For example:


server blog.csdn.net/poechant down;
server 145.223.156.89:8090;
server unix:/tmp/backend3;


You can also assign each server a weight in the following way:


server backend1.example.com;
server backend2.example.com weight=4;


By default, weight is 1. In the example above, the first server keeps the default weight of 1 and the second has weight 4, so the first server receives 20% of the requests (1 out of 5) and the second receives 80% (4 out of 5). It is important to note that weight and ip_hash cannot be used at the same time, for the simple reason that they are different, conflicting strategies.
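The same 1:4 split can also be written with both weights explicit, which may read more clearly (hostnames here are placeholders):

```nginx
upstream backend {
    # requests are distributed in proportion to the weights:
    server backend1.example.com weight=1;   # 1 / (1 + 4) = 20% of requests
    server backend2.example.com weight=4;   # 4 / (1 + 4) = 80% of requests
}
```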


3. Retry Policy


You can specify, for each backend, the maximum number of failed attempts and the time window in which they are counted, using the max_fails and fail_timeout parameters. As shown below:


server backend1.example.com weight=5 max_fails=3 fail_timeout=30s;


In the example above, the maximum number of failures is 3 and the timeout window is 30 seconds: after 3 failed attempts within that window, the server is considered unavailable. The default value for max_fails is 1, and the default for fail_timeout is 10s. What counts as a failed attempt is defined by proxy_next_upstream or fastcgi_next_upstream. You can also use proxy_connect_timeout and proxy_read_timeout to control how long Nginx waits for the upstream to respond.
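As a sketch of how these directives fit together (the values here are illustrative, not recommendations), a proxied location that passes a failed request on to the next backend might look like:

```nginx
upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://backend;
        # conditions under which the request is retried on the next backend
        proxy_next_upstream error timeout http_500;
        # limits on connecting to and reading from the upstream
        proxy_connect_timeout 2s;
        proxy_read_timeout 10s;
    }
}
```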


One situation to note: max_fails and fail_timeout may not work as expected when there is only one server in the upstream block, because Nginx tries the request against that upstream only once, and if it fails the request is discarded. The workaround, which is rather a trick, is to list your one and only server several times, as follows:


server backend.example.com max_fails=3 fail_timeout=30s;
server backend.example.com max_fails=3 fail_timeout=30s;
server backend.example.com max_fails=3 fail_timeout=30s;


4. Standby Machine Strategy


Starting with Nginx version 0.6.7, you can use the backup keyword. When all non-backup servers are down or busy, requests go only to the standby servers marked with backup. It is important to note that backup cannot be used together with the ip_hash keyword. An example follows:


upstream backend {
    server backend1.example.com;
    server backend2.example.com backup;
    server backend3.example.com;
}

