Application analysis of dynamic Network Load Balancing cluster practice

Source: Internet
Author: User
Keywords: Network Load Balancing, load balancing, algorithm

Network Load Balancing distributes incoming requests across as many as 32 servers, which together share the load of external network service requests.

Network Load Balancing ensures that these servers respond quickly even under heavy load. Externally, the cluster exposes only a single IP address (or domain name), and if one or more servers in the cluster become unavailable, the service is not interrupted.

Network Load Balancing automatically detects when a server becomes unavailable and quickly redirects client traffic to the remaining servers. This protection helps you provide uninterrupted service for critical business processes, and you can add Network Load Balancing servers as network traffic grows. Network Load Balancing can run on ordinary computers. In Windows Server 2003, typical Network Load Balancing applications include Internet Information Services (IIS), ISA Server 2000 firewalls and proxy servers, VPNs, terminal servers, and Windows Media Services (video on demand and video broadcasting). Network Load Balancing also helps improve server performance and scalability to meet the growing demand from Internet-based clients.

Network Load Balancing lets clients access the cluster through a single logical Internet name and virtual IP address (also known as the cluster IP address), while each computer retains its own name.

For these reasons, Network Load Balancing technology has developed rapidly in recent years. The following article gives a brief overview of how Network Load Balancing works and three common working modes.

This paper mainly discusses basic balancing algorithms and the dynamic load balancing mechanism of a Network Load Balancing cluster system. Building on LVS, a dynamic negative-feedback mechanism for the cluster is implemented with a round-robin (polling) algorithm, and a basic dynamic balancing model is presented and analyzed.

1. Introduction

In essence, Network Load Balancing is an implementation of a distributed job scheduling system. As the controller that allocates network requests, the balancer uses a centralized or distributed policy to distribute network service requests according to the current processing capacity of the cluster nodes, and it monitors the health of each node throughout the lifecycle of each service request. Generally speaking, the balancer's scheduling of requests has the following characteristics:

Network service requests must be manageable
Request allocation is transparent to the user
Ideally, heterogeneous systems are supported
Resources can be allocated and adjusted dynamically according to the state of the cluster nodes
The load balancer distributes workloads or network traffic across the cluster's service nodes. The node a given load goes to can be chosen statically or decided dynamically according to the current network state. The nodes may be interconnected within the cluster, but each must be connected, directly or indirectly, to the balancer.
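As a rough illustration of the above, the following Python sketch shows the minimal shape such a balancer can take; the names (Balancer, dispatch, policy) are hypothetical and not taken from any particular product. The balancer holds the node list and delegates the choice of the next node to a pluggable policy, which may be static or state-driven.

# Minimal balancer sketch (hypothetical names, not from any specific product).
# It holds the cluster's node list and delegates the choice of the next node
# to a pluggable scheduling policy, which may be static or state-driven.

class Balancer:
    def __init__(self, nodes, policy):
        self.nodes = list(nodes)          # service nodes reachable from the balancer
        self.policy = policy              # callable: nodes -> chosen node

    def dispatch(self, request):
        node = self.policy(self.nodes)    # pick a node (statically or by current state)
        return node, request              # in a real system the request is forwarded here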

A network balancer can be regarded as a job scheduling system at the network level. Most network load balancers present a single system image at the corresponding network layer: the whole cluster appears to users as a single IP address, and the specific service node behind it is transparent to them. The balancer can be configured statically or dynamically, using one or more algorithms to decide which node receives the next network service request.

2. The principle of network balancing

In the TCP/IP protocol suite, packets carry the necessary network information, so packet-level information is very important to the algorithms used for network caching or network balancing. However, because traffic is packet-oriented (IP) and connection-oriented (TCP), and packets are often fragmented, an individual packet does not carry complete application information, in particular the state associated with a connection session. Therefore, packets must be viewed from the perspective of the connection: the connection from the source address and port to the destination address and port.
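The following minimal Python sketch (field names such as src_ip and dst_port are hypothetical) illustrates this connection-oriented view: every packet is mapped to the 4-tuple that identifies its connection, so all packets of one session can be treated as a unit.

# Hypothetical sketch: identifying the connection a packet belongs to by its
# 4-tuple (source address, source port, destination address, destination port).

def connection_key(packet):
    return (packet["src_ip"], packet["src_port"],
            packet["dst_ip"], packet["dst_port"])

# Example: two packets of the same connection map to the same key.
p1 = {"src_ip": "10.0.0.5", "src_port": 40231, "dst_ip": "192.0.2.10", "dst_port": 80}
p2 = {"src_ip": "10.0.0.5", "src_port": 40231, "dst_ip": "192.0.2.10", "dst_port": 80}
assert connection_key(p1) == connection_key(p2)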

Another element the balancer must consider is the resource usage of each node. Since load balancing is the ultimate goal of such a system, another key problem for a dynamic network load balancing cluster is how to adjust task distribution dynamically according to the current resource usage of each node.

In general, cluster service nodes can provide resource information such as processor load, application load, number of active users, available network protocol buffers, and so on. This information is passed to the balancer through an efficient messaging mechanism; the balancer monitors the status of all processing nodes and decides which node the next task goes to. A balancer can be a single device or a set of devices arranged in parallel or in a tree.
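A possible shape of such a reporting mechanism is sketched below in Python; the metric names (cpu_load, active_users, app_load) and functions are hypothetical, meant only to show how per-node reports could feed a state-aware selection.

# Hypothetical sketch: each node periodically reports a few resource metrics,
# and the balancer keeps the latest report per node so a state-aware policy
# can use it.

node_state = {}   # node name -> latest resource report

def report(node, cpu_load, active_users, app_load):
    node_state[node] = {
        "cpu_load": cpu_load,          # e.g. recent load average
        "active_users": active_users,  # current sessions
        "app_load": app_load,          # application-specific load figure
    }

def least_loaded(nodes):
    # Prefer the node whose reported CPU load is lowest.
    return min(nodes, key=lambda n: node_state.get(n, {}).get("cpu_load", float("inf")))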

3. Basic Network Load Balancing algorithms

The design of the balancing algorithm directly determines how well the cluster balances load; a poorly designed algorithm leads to an unbalanced cluster. The main task of a balancing algorithm is to decide how to select the next cluster node and forward a new service request to it. Some simple balancing methods can be used on their own, while others must be combined with other simple or more advanced methods. No load balancing algorithm is a cure-all; each is generally most effective only in certain application environments. Therefore, when evaluating a load balancing algorithm, we should pay attention to its applicable scope, consider cluster deployment as a whole according to the cluster's characteristics, and combine different algorithms and techniques.

3.1 Rotation method

The rotation (round-robin) algorithm is the simplest of all scheduling algorithms and the easiest to implement. In a task queue, every member (node) has equal status, and the rotation method simply selects members in cyclic order. In a load balancing environment, the balancer forwards each new request to the next node in the node queue, continuing in this periodic fashion so that each cluster node is selected in turn on an equal footing. This algorithm is widely used in DNS round-robin.

The behavior of the rotation method is predictable: each node's chance of being chosen is 1/n, so the load distribution across nodes is easy to compute. The rotation method is best suited to clusters whose nodes all have the same processing capacity and performance; in practice it is usually more effective when combined with other simple methods.
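A minimal round-robin sketch in Python, assuming the node names are plain strings:

# Minimal round-robin sketch: nodes are selected in a fixed cyclic order, so
# over time each of the n nodes receives 1/n of the requests.
import itertools

def round_robin(nodes):
    cycle = itertools.cycle(nodes)
    def pick():
        return next(cycle)
    return pick

pick = round_robin(["node1", "node2", "node3"])
print([pick() for _ in range(6)])   # node1, node2, node3, node1, node2, node3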

3.2 Hashing method

The hashing method (also called the hash method) sends a network request to a cluster node according to some rule through a one-way, irreversible hash function. Hashing shows its particular strength where other balancing algorithms are not very effective. For example, in the case of UDP sessions, the rotation method and several other algorithms that rely on connection information cannot recognize the start and end markers of a session, which can confuse the application.

A hash based on the packet's source address can solve this problem to some extent: packets with the same source address are sent to the same server node, which allows higher-level session-based transactions to run correctly. By contrast, hash scheduling based on the destination address can be used in a Web cache cluster, where the load balancer sends all requests for the same target site to the same cache node, avoiding the cache-update problems caused by cache misses.
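A minimal sketch of source-address hashing in Python; the choice of MD5 as the hash and the node naming are illustrative assumptions:

# Hashing sketch: a hash of the packet's source address chooses the node, so
# requests from the same client always land on the same node (session affinity).
# For a cache cluster, the destination address would be hashed instead.
import hashlib

def pick_by_source(src_ip, nodes):
    digest = hashlib.md5(src_ip.encode()).hexdigest()   # stable one-way hash
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["node1", "node2", "node3"]
assert pick_by_source("10.0.0.5", nodes) == pick_by_source("10.0.0.5", nodes)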

3.3 Least-connections method

In the least-connections method, the balancer records all active connections and sends each new request to the node that currently has the fewest. This algorithm targets TCP connections; however, because different applications can consume very different amounts of system resources, the connection count does not necessarily reflect the actual application load. When a heavyweight Web server (for example, Apache) runs on the cluster nodes, the algorithm's balancing effect is diminished. To reduce this adverse effect, you can set a maximum number of connections per node, represented by a threshold.
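A minimal least-connections sketch in Python, with a hypothetical per-node threshold (MAX_CONNECTIONS) standing in for the maximum connection count mentioned above:

# Least-connections sketch: the balancer tracks active connections per node and
# sends each new request to the node with the fewest, skipping nodes that have
# reached the configured threshold.

active = {"node1": 0, "node2": 0, "node3": 0}
MAX_CONNECTIONS = 100   # hypothetical per-node threshold

def pick_least_connections():
    candidates = [n for n, c in active.items() if c < MAX_CONNECTIONS]
    node = min(candidates, key=lambda n: active[n])
    active[node] += 1        # connection opened
    return node

def connection_closed(node):
    active[node] -= 1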

3.4 Minimum-missing method

In the minimum-missing method, the balancer records each node's requests over a long period and sends the next request to the node that has handled the fewest requests historically. Unlike the least-connections method, minimum-missing counts past connections rather than current ones.
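A minimal sketch of this idea in Python, assuming the balancer simply keeps a running per-node request count:

# Minimum-missing sketch: the balancer keeps a long-running count of requests
# assigned to each node and sends the next request to the node with the
# smallest historical count.
from collections import defaultdict

history = defaultdict(int)   # node -> total requests ever assigned

def pick_fewest_history(nodes):
    node = min(nodes, key=lambda n: history[n])
    history[node] += 1
    return node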

3.5 Fastest-response method

The balancer records its own network response time to each cluster node and assigns the next incoming connection request to the node with the shortest response time. This requires actively probing each node, using ICMP packets or proprietary UDP-based techniques.

In most LAN-based clusters, the fastest-response algorithm does not work very well, because ICMP packets on a LAN generally complete their round trip within 10 ms and therefore do not reflect the differences between nodes. When balancing over a WAN, however, using response time to select the server nearest to the user is still practical, and the more geographically dispersed the cluster topology, the more effective the method becomes. This approach is the main method used in advanced, topology-based redirection balancing.
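A minimal fastest-response sketch in Python; the probe results are assumed to be supplied by an ICMP or UDP probing component that is not shown:

# Fastest-response sketch: the balancer records a recent probe round-trip time
# for each node and assigns the next request to the node with the shortest
# response time.

response_time_ms = {"node1": 3.2, "node2": 1.8, "node3": 5.0}   # latest probe results

def pick_fastest():
    return min(response_time_ms, key=response_time_ms.get)

print(pick_fastest())   # node2 in this example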

3.6 Weighting method

The weighting method can only be used in combination with other methods, for which it is a good complement. The weighted algorithm forms multiple priority queues according to each node's priority or current load state (that is, its weight). Every pending connection within a queue has the same processing level, so a single queue can be balanced with the rotation method or the least-connections method described above, while the queues themselves are served in priority order. The weights are based on an estimate of each node's capacity.
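A minimal sketch of weighting combined with rotation in Python; the weight values are hypothetical capacity estimates:

# Weighting sketch combined with round-robin: each node's weight (an estimate
# of its capacity) determines how many slots it occupies in the rotation, so a
# node with weight 3 receives roughly three times the requests of a node with
# weight 1.
import itertools

weights = {"node1": 3, "node2": 1, "node3": 2}   # hypothetical capacity estimates

def weighted_round_robin(weights):
    slots = [node for node, w in weights.items() for _ in range(w)]
    cycle = itertools.cycle(slots)
    return lambda: next(cycle)

pick = weighted_round_robin(weights)
print([pick() for _ in range(6)])   # node1 x3, node2 x1, node3 x2 per cycle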
