Introduction to Important Nginx Algorithms

1. Consistent Hash Algorithm

The consistent hash algorithm is one of the most critical algorithms in modern system architecture. It is widely used in distributed computing, distributed storage, data analysis, and many other fields.

The key to a hash algorithm is that it generates different hash values from different attribute data and maps each value to an integer in the range 0 to 2^32 - 1, that is, to a point on the ring shown in the figure. A server can also be hashed on one or more of its attributes (usually its IP address and listening port), and the result places it at a point on the ring: the blue points in the figure. A request can likewise be hashed on one or more of its attributes (its source IP address, port, a cookie value, the URL, or the request time), and it too is recorded as a point on the ring: the yellow points. We agree that all requests whose yellow points fall to the left of blue point A and to the right of blue point B are processed by the server that blue point A represents; this settles the question of which node handles which request. As long as the blue points are stable, requests with the same hash always land at the same place, which keeps request handling stable. When a blue point is removed for some reason, only a limited set of yellow points is affected: those requests are simply handled by the server represented by the next blue point along the ring.
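To make the ring concrete, here is a minimal Python sketch of a consistent hash ring. The choice of MD5 as the hash function, the server addresses, and the virtual-node count are illustrative assumptions, not details from the original article:

```python
import bisect
import hashlib

def ring_hash(key: str) -> int:
    """Map a key to an integer on the 0 .. 2**32 - 1 ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class ConsistentHashRing:
    """Blue points are servers, yellow points are requests (see the figure above)."""

    def __init__(self, servers, replicas=100):
        self.replicas = replicas          # virtual nodes per server smooth the distribution
        self._points = []                 # sorted positions on the ring
        self._point_to_server = {}
        for server in servers:
            self.add(server)

    def add(self, server: str):
        for i in range(self.replicas):
            point = ring_hash(f"{server}#{i}")
            self._point_to_server[point] = server
            bisect.insort(self._points, point)

    def remove(self, server: str):
        # Only the requests that fell on this server's points are remapped.
        for i in range(self.replicas):
            point = ring_hash(f"{server}#{i}")
            self._points.remove(point)
            del self._point_to_server[point]

    def get(self, request_key: str) -> str:
        # Walk clockwise from the request's point to the next server point.
        point = ring_hash(request_key)
        idx = bisect.bisect(self._points, point) % len(self._points)
        return self._point_to_server[self._points[idx]]

ring = ConsistentHashRing(["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"])
print(ring.get("203.0.113.7:/index.html"))   # the same key always maps to the same server
```

Removing a server only remaps the requests that fell on its points; all other requests keep their assignments, which is exactly the limited impact described above.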

2. Round Robin and Weighted Round Robin

When a task needs to be handed down to a lower-layer node for processing, the task source allocates tasks to the lower-layer nodes in a fixed order. If the number of available lower-layer nodes is X, the allocation rule for the N-th task is:

    target node = (N mod X) + 1

Round robin shows up in many architectural designs: DNS resolution across multiple IP addresses, LVS forwarding requests downstream, Nginx forwarding requests downstream, and ZooKeeper assigning tasks to compute nodes. Understanding the basic round robin process helps us borrow the idea when designing our own software architecture.

However, plain round robin has a flaw. For various practical reasons we usually cannot guarantee that all processing nodes have the same capacity (CPU, I/O, memory, and so on). Node A may be able to handle 100 concurrent tasks while node B can handle only 50. In such cases we need to assign each lower-layer node a weight based on one or more of its attributes, which may be network bandwidth, CPU load, or simply a fixed weight.
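Before adding weights, the plain (N mod X) + 1 rule can be sketched in a few lines of Python (the node names are illustrative):

```python
from itertools import count

class RoundRobin:
    """Hand task N to node (N mod X) + 1, i.e. cycle through the nodes in order."""

    def __init__(self, nodes):
        self.nodes = nodes              # X = len(nodes)
        self._counter = count()         # N = 0, 1, 2, ...

    def next_node(self):
        n = next(self._counter)
        return self.nodes[n % len(self.nodes)]

rr = RoundRobin(["node-1", "node-2", "node-3"])
print([rr.next_node() for _ in range(6)])   # node-1, node-2, node-3, node-1, ...
```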

So what is the basis for weighted round robin allocation? There are many possibilities, such as probabilistic algorithms (including the Monte Carlo, Las Vegas, and Sherwood algorithms, which are well covered elsewhere) and the greatest-common-divisor (GCD) method. Here we introduce the GCD method, because it is simple and practical:

First, compute the weight of each processing node according to some rule. As mentioned above, the rule may be based on the node's CPU utilization, its network usage, or a fixed weight in the configuration file. Then compute the greatest common divisor of the weights. Suppose the weights of three nodes are 100, 80, and 60; their greatest common divisor is 20 (if you have forgotten the definition of the greatest common divisor, please review it on your own). Dividing each weight by 20 gives 5, 4, and 3, and their sum is 12. With these results in hand, request allocation can begin. The formula is as follows:
(N mod X) + 1 = Y

N is the sequence number of the current task, X is the sum of the reduced weights (12 in the example above), and Y is the slot that receives the task. Each node owns a number of slots equal to its reduced weight, so slots 1-5 go to the first node, slots 6-9 to the second, and slots 10-12 to the third.
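As a rough Python sketch of the GCD method with the example weights 100, 80, and 60 (the slot-to-node mapping follows the reading of the formula given above):

```python
from math import gcd
from functools import reduce

def build_slots(weighted_nodes):
    """Reduce each weight by the GCD, then expand into a slot table.

    With weights 100, 80, and 60 the GCD is 20, the reduced weights are 5, 4, 3,
    and the table has X = 12 slots: 5 for node A, 4 for node B, 3 for node C.
    """
    g = reduce(gcd, (w for _, w in weighted_nodes))
    slots = []
    for node, weight in weighted_nodes:
        slots.extend([node] * (weight // g))
    return slots

def pick(slots, n):
    """Task N goes to slot Y = (N mod X) + 1 (1-based), i.e. index Y - 1."""
    x = len(slots)
    y = (n % x) + 1
    return slots[y - 1]

slots = build_slots([("A", 100), ("B", 80), ("C", 60)])
print(len(slots))                            # 12
print([pick(slots, n) for n in range(12)])   # 5 As, then 4 Bs, then 3 Cs
```

This simple slot table serves node A five times in a row before moving on; Nginx's built-in weighted round robin uses a smoother interleaving, but the proportions over a full cycle are the same.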

To sum up: weighted round robin (WRR) supplements the plain round robin scheme. By converting the attributes of processing nodes into weights, it captures each node's processing capacity and allocates tasks more sensibly. The key to WRR is the weighting algorithm, and the GCD method shown here is simple, practical, and fast at locating the target node.
