What does Nginx do for load balancing? A summary of the Nginx load balancing algorithms (with code)

Source: Internet
Author: User
Tags: crc32, nginx, load balancing
How does Nginx load balancing work? Nginx load balancing can be implemented in several ways. Below is a concrete introduction to the five Nginx load balancing algorithms: round-robin (the default), weight, ip_hash, fair, and url_hash.

One, Nginx load balancing algorithm

1. Round-robin (default)

Each request is assigned to a different backend server in order of arrival. If a backend server goes down, it is automatically removed from the rotation, so user access is not affected.
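The idea of round-robin can be sketched in a few lines of Python. This is only an illustration of the scheduling order, not nginx's implementation; the backend addresses are made up:

```python
from itertools import cycle

# Hypothetical backend list; nginx would read these from an upstream block.
backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def round_robin(servers):
    """Yield servers one after another, wrapping around at the end."""
    return cycle(servers)

picker = round_robin(backends)
# Each server receives every third request in turn.
first_six = [next(picker) for _ in range(6)]
```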

2. Weight (weighted round-robin)

The larger the weight value, the higher the probability that requests are routed to that server. This is mainly used when the backend servers have uneven performance, or to assign different weights to a master and a slave so that host resources are used efficiently.
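A toy illustration of weighted selection: a server with weight 2 appears twice as often in the rotation as one with weight 1. (nginx actually uses a "smooth" weighted round-robin that interleaves servers, but the long-run ratio is the same; the addresses and weights below are made up.)

```python
# Hypothetical servers and weights, as an upstream block might declare them.
weights = {"192.168.0.14": 2, "192.168.0.15": 1}

def weighted_rotation(weights):
    """Repeat each server proportionally to its weight to form one rotation."""
    rotation = []
    for server, w in weights.items():
        rotation.extend([server] * w)
    return rotation

rotation = weighted_rotation(weights)
# Over one full rotation, .14 receives 2 of every 3 requests.
```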

3. ip_hash

Each request is assigned according to a hash of the client IP, so visitors from the same IP always reach the same backend server. This effectively solves the session-sharing problem of dynamic web pages.
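The hash-then-modulo idea behind ip_hash can be sketched as follows. (Real nginx hashes the first three octets of an IPv4 address and uses its own hash function; the md5-based version here is only a stand-in to show why the mapping is stable.)

```python
import hashlib

backends = ["10.0.0.10:8080", "10.0.0.11:8080"]

def pick_by_ip(client_ip, servers):
    """Map a client IP to a fixed backend via a stable hash."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same IP always lands on the same backend, so its session stays put.
a = pick_by_ip("203.0.113.7", backends)
b = pick_by_ip("203.0.113.7", backends)
```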

4. fair

A smarter load balancing algorithm than weight and ip_hash: fair balances load based on page size and load time, that is, it allocates requests according to the response time of the backend servers, preferring servers with short response times. Nginx itself does not ship with fair; to use this scheduling algorithm you must install the third-party upstream_fair module.

5. url_hash

Requests are assigned according to a hash of the requested URL, so each URL is always directed to the same backend server, which further improves the hit rate of backend cache servers. Nginx itself does not support url_hash; to use this scheduling algorithm you must install the Nginx hash package.
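Since the configuration later in this article selects CRC32 as the hash method, the url_hash mapping can be sketched with Python's zlib.crc32 (an illustration only; the cache server names mirror the squid example below):

```python
import zlib

backends = ["squid1:3128", "squid2:3128"]

def pick_by_url(request_uri, servers):
    """CRC32 of the URI, modulo server count: same URI, same cache server."""
    return servers[zlib.crc32(request_uri.encode()) % len(servers)]

# Repeated requests for one URL hit the same cache, raising the hit rate.
s1 = pick_by_url("/images/logo.png", backends)
s2 = pick_by_url("/images/logo.png", backends)
```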

1. Round-robin (default)

Each request is assigned to a different backend server in order of arrival; a backend server that is down is automatically removed from the rotation.

2. weight

Specifies the polling probability; the weight is proportional to the access ratio. Used when the performance of the backend servers is uneven.
For example:

upstream bakend {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}

3. ip_hash

Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same backend server, which solves the session problem.
For example:

upstream bakend {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}

4. fair (third party)

Requests are assigned according to the response time of the backend servers; servers with shorter response times are preferred.

upstream backend {
    server server1;
    server server2;
    fair;
}

5. url_hash (third party)

Requests are assigned according to a hash of the requested URL, so each URL is directed to the same backend server; this is most effective when the backend servers are caches.
Example: add a hash statement in the upstream block; the server lines must not carry parameters such as weight. hash_method selects the hash algorithm used.

upstream backend {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

Two, Nginx load balancing scheduling states

In the Nginx upstream module, you can set the scheduling state of each backend server. The commonly used states are:

1. down: the server temporarily does not participate in load balancing.

2. backup: a reserve server. It receives requests only when all non-backup servers fail or are busy, so it carries the lightest load.

3. max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

4. fail_timeout: the time the server is suspended after max_fails failures have occurred. max_fails and fail_timeout are used together.
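Combining these states, an upstream block might look like the following sketch (the addresses and values are illustrative):

```nginx
upstream backend {
    # After 3 failures within 30s, a server is marked unavailable
    # and is not retried for the next 30s.
    server 10.0.0.14:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.15:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.16:8080 backup;  # used only when the others are down or busy
}
```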

If Nginx could only proxy a single server, it would not be as popular as it is today. Nginx can be configured to proxy multiple servers, so that when one server goes down the system remains available. The configuration process is as follows:

1. Inside the http block, add an upstream block.

upstream linuxidc {
    server 10.0.6.108:7080;
    server 10.0.0.85:8980;
}

2. In the location block under the server block, set proxy_pass to http:// plus the upstream name, i.e. http://linuxidc.

location / {
    root  html;
    index index.html index.htm;
    proxy_pass http://linuxidc;
}

3. Load balancing is now complete in its basic form. The upstream distributes requests by round-robin (the default): each request is assigned to a different backend server in order of arrival, and a server that is down is automatically removed. This method is simple and cheap, but its drawbacks are low reliability and uneven load distribution. It suits image server clusters and purely static page server clusters.
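Putting steps 1 through 3 together, a minimal complete configuration might look like the sketch below (names and addresses are the examples from the steps above; a bare events block is added because nginx requires one):

```nginx
events {}

http {
    upstream linuxidc {
        server 10.0.6.108:7080;
        server 10.0.0.85:8980;
    }

    server {
        listen 80;

        location / {
            # Requests to / are distributed across the upstream by round-robin.
            proxy_pass http://linuxidc;
        }
    }
}
```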

In addition, upstream has other allocation strategies, as follows:

Weight (weight)

Specifies the polling probability; the weight is proportional to the access ratio. Used when the performance of the backend servers is uneven. As shown below, 10.0.0.88 receives twice the share of requests that 10.0.0.77 does.

upstream linuxidc {
    server 10.0.0.77 weight=5;
    server 10.0.0.88 weight=10;
}

ip_hash (client IP)

Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same backend server, which solves the session problem.

upstream favresin {
    ip_hash;
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
}

fair (third party)

Requests are assigned according to the response time of the backend servers; servers with shorter response times are preferred. Similar in spirit to the weight policy.

upstream favresin {
    server 10.0.0.10:8080;
    server 10.0.0.11:8080;
    fair;
}

url_hash (third party)

Requests are assigned according to a hash of the requested URL, so each URL is directed to the same backend server; this is most effective when the backend servers are caches.

Note: add the hash statement in the upstream block; the server lines must not carry parameters such as weight. hash_method selects the hash algorithm used.

upstream resinserver {
    server 10.0.0.10:7777;
    server 10.0.0.11:8888;
    hash $request_uri;
    hash_method crc32;
}

An upstream block can also set a status value for each device. These status values mean the following:

down: the marked server temporarily does not participate in the load.

weight: defaults to 1; the larger the weight, the greater the share of load.

max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

fail_timeout: the time the server is suspended after max_fails failures.

backup: receives requests only when all other non-backup servers are down or busy, so it carries the lightest load.

upstream bakend {
    # define the IPs and states of the load-balanced devices
    ip_hash;
    server 10.0.0.11:9090 down;
    server 10.0.0.11:8080 weight=2;
    server 10.0.0.11:6060;
    server 10.0.0.11:7070 backup;
}
