Introduction to the Basic Functions of Nginx Load Balancing



Anyone familiar with Nginx knows that it is a very capable load balancer. In addition to the common HTTP load balancing, Nginx can also load-balance email (mail proxy) and FastCGI traffic, and it even supports load balancing for applications built on TCP/UDP, such as MySQL and DNS. These capabilities are implemented in different Nginx modules, so load balancing can be seen as a service provided by Nginx.
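The TCP load balancing mentioned above is handled by the stream module. As a minimal sketch (the hostnames and ports are illustrative, and this assumes Nginx was built with the stream module, i.e. with --with-stream):

```nginx
# Hypothetical example: round-robin TCP load balancing for MySQL.
# Requires Nginx built with the stream module (--with-stream).
stream {
    upstream mysql_backend {
        server db1.example.com:3306;
        server db2.example.com:3306;
    }

    server {
        # Accept TCP connections on port 3306 and distribute them
        # across the upstream group.
        listen 3306;
        proxy_pass mysql_backend;
    }
}
```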

This article briefly introduces the basic load balancing functions of Nginx. Where relevant, we will also mention Nginx Plus (the commercial edition of Nginx; some of its features are paid).

Introduction

Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations. Nginx can serve as an efficient HTTP load balancer, distributing traffic across servers to improve performance, scalability, and reliability.

Simple configuration

The basic load balancing configuration is very simple, and you can add more directives on top of it to meet your specific needs.

As follows:

```nginx
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://myapp1;
        }
    }
}
```

In the example above, all requests are proxied to the server group myapp1, which contains three servers, srv1 through srv3, among which the requests are distributed. If no load balancing method is specified, the default is round-robin.

In Nginx, HTTP, HTTPS, FastCGI, uwsgi, and SCGI back ends can all be reverse-proxied and load-balanced. The example above uses HTTP; to load-balance HTTPS, simply change http to https in the proxy_pass URL.
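To illustrate the FastCGI case, here is a sketch of load balancing across PHP-FPM back ends (the upstream name, socket path, and address are hypothetical):

```nginx
# Hypothetical example: load balancing PHP requests across two
# FastCGI (PHP-FPM) back ends with fastcgi_pass.
upstream php_backend {
    server unix:/var/run/php-fpm.sock;
    server 127.0.0.1:9000;
}

server {
    listen 80;
    location ~ \.php$ {
        include fastcgi_params;
        # Tell the FastCGI back end which script to execute.
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_backend;
    }
}
```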

Common Load Balancing Algorithms

The load balancing method is the algorithm that decides how HTTP requests are distributed among the servers. Common load balancing algorithms include the following:

Round Robin: the default and simplest method. Requests are allocated to servers in the order they appear in the server list.

Least Connections: each request is sent to the server that currently has the fewest active connections, with weights also taken into account. For example, if there are three servers A, B, and C with 100, 200, and 300 current connections respectively, the next request is allocated to server A.

Hash: a user-defined hash key, such as the client IP address or the request URL, is mapped to a server, so every request with the same key is assigned to the same server.

IP Hash: a variant of the hash method (applicable only to HTTP load balancing) that maps requests based on the first three octets of the client IP address (for example, for the IP address 10.25.2.10, the value 10.25.2 is used for the mapping).

Least Time: each new request is sent to the upstream server with the fastest response time and fewest active connections. This method is available only in Nginx Plus.
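As a sketch of the generic hash method, the hash directive (available in open-source Nginx since 1.7.2) maps a key of your choice to a server; the consistent parameter enables ketama consistent hashing so that adding or removing a server remaps only a few keys:

```nginx
upstream myapp1 {
    # Map each request URI to a fixed server; "consistent" enables
    # ketama consistent hashing to minimize remapping when the
    # server set changes.
    hash $request_uri consistent;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
```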

Least Connections Algorithm

This is the Least Connections algorithm: as the name implies, each new request is sent to the server that currently has the fewest active connections. This is a relatively fair approach that prevents some servers from being overloaded while requests go to relatively idle servers. The basic configuration is as follows:

```nginx
upstream myapp1 {
    least_conn;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
```

The least_conn directive must be specified explicitly.

Session consistency

If the load balancer uses the round-robin or least-connections algorithm, different requests from the same client may be handled by different servers, in which case session consistency across those requests cannot be guaranteed.

To solve this problem, we can adopt the third load balancing algorithm, ip-hash. With an IP hash, a mapping is established between the client's IP address and the servers in the group, so each client's requests are always allocated to the same server, which ensures session consistency. The ip-hash method is configured as follows:

```nginx
upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
```

Load Balancing Weights

Load balancing does not mean that every server receives an identical share of the requests. In the server groups discussed so far, all servers have had equal status, but in practice servers differ: some have better hardware and higher stability and should handle more requests, while less stable servers should be assigned fewer. We can assign different weights to servers to define their relative importance when requests are distributed. The weight is set with the weight parameter: a higher weight means a higher probability of being selected, and a lower weight a lower probability (to exclude a server from selection entirely, use the down parameter instead). Take the round-robin algorithm as an example:

```nginx
upstream myapp1 {
    server srv1.example.com weight=3;
    server srv2.example.com;
    server srv3.example.com;
}
```

Without the weight parameter, the default value is 1. In the example above, out of every five requests, srv1 receives three while srv2 and srv3 each receive one.

Server Health Check

The various reverse proxy implementations (such as HTTP/HTTPS/FastCGI) can also perform passive health checks on each server. If a request to a server fails (for example, it returns a 500 error; what counts as a failure is configurable, and Nginx Plus extends this), Nginx marks the server as unavailable and avoids it for subsequent requests. How long this state lasts is defined by the max_fails and fail_timeout parameters.

max_fails

The default value is 1. It specifies how many requests to a server must fail within the fail_timeout window before the server is considered unavailable (a server gets a few chances to prove itself); by default, a single failure is enough.

fail_timeout

The default value is 10s, and it has two meanings: first, it sets the time window for counting max_fails; second, once a server has been marked as unavailable, Nginx waits this long before sending it a request again to test whether it has recovered (servers get the chance to redeem themselves). If it is still unavailable, it remains marked as failed; if it responds, it is marked as active again, and subsequent requests are distributed to it according to the configured algorithm (round-robin, ip-hash, and so on) and weights.
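As a sketch, these parameters (with hypothetical thresholds) can be combined in the upstream block like this:

```nginx
upstream myapp1 {
    # Mark srv1 as unavailable after 3 failed requests within 30s,
    # and keep it out of rotation for 30s before retrying it.
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com;
    # backup: only receives requests when the primary servers
    # are unavailable.
    server srv3.example.com backup;
}
```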

In addition to these parameters, the proxy_next_upstream, backup, down, and keepalive directives also influence load balancing behavior in different ways.

These functions are all available in the free version of Nginx. There are, of course, many more topics in load balancing; in the next article, we will cover the richer load balancing features provided by Nginx Plus.
