Network Load Balancing Technology at Different Network Layers

Source: Internet
Author: User

When a system administrator finds that network performance is poor, they can redistribute traffic through network load balancing to make better use of existing resources.

The development of the Internet has brought great enjoyment to people's lives and great convenience to their work. However, the rapid growth of the network has also made people highly dependent on it.

Today, applications are constantly being developed on the network, and business traffic is surging. Even when the network is well built, reasonably configured, and carefully tuned, administrators still feel that "network construction can never keep pace with application demand". This is especially true at the core layers of the network, where data traffic and computing intensity are so high that no single device can handle the task alone.

Multi-Device "Division of Labor"

On the basis of the existing network, adding a certain number of devices and turning the load of a single device into a load shared by several is an obvious choice. But how can traffic be distributed sensibly among multiple network devices with the same function? Without coordination, the devices may suffer from an uneven "division of labor". The load balancing mechanism, which coordinates the "working intensity" of each device across the network, arose to solve exactly this problem.

Described more formally, load balancing provides a cheap and effective way to build on the existing network structure: it expands server bandwidth, increases throughput, enhances network data-processing capability, and improves network flexibility and availability.

The main responsibility of a load balancer is to relieve network congestion by serving each request from the most suitable resource, regardless of geographical location. It gives users better access quality, improves server response times and the utilization of servers and other resources, and helps avoid single points of failure in key parts of the network.

"Cutting In" at Different Layers

Generally, load balancing can be applied ("cut in") at different layers of the network; where to apply it depends on where the bottleneck actually lies.

In practice, it is usually achieved through three approaches: transmission link aggregation, higher-layer switching, and server clusters with balancing policies.

Transmission Link Aggregation Technology

To meet the needs of high-bandwidth applications, more and more PCs connect to the network over faster links. Traffic, however, is rarely distributed evenly: it tends to be "high at the core, low at the edge; high in key departments, low in ordinary ones".

As computer processing power has grown dramatically, so have the demands on multi-workgroup LANs. When an enterprise's internal demand for high-bandwidth applications increases, the data interfaces at the core of the LAN become a bottleneck that lengthens the response time of user requests. Moreover, the LAN itself offers no protection for the servers: a single careless action can disconnect a server from the network.

The usual remedy for this bottleneck is to increase the capacity of the server links beyond current requirements. For large enterprises, such an upgrade is a long-term and worthwhile solution.

For many enterprises, however, when demand is not yet large enough to justify the money and time of an upgrade, upgrading is simply "not economical". In this case, link aggregation offers a low-cost way to eliminate both the bottleneck and the single point of failure on the transmission link.

A link aggregation system does add some complexity to the network, but it also improves reliability, making redundant routes practical on critical LAN segments such as those serving the servers.
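Link aggregation typically spreads traffic across the member links of a bundle by hashing flow identifiers, so all packets of one flow stay on one link while different flows spread out. A minimal sketch of that idea, assuming an illustrative 4-link bundle and a simple address/port key:

```python
# Sketch of hash-based flow distribution over aggregated links.
# The 4-link bundle and the key fields are illustrative assumptions.
import zlib

LINKS = ["link0", "link1", "link2", "link3"]  # member links of the aggregate

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Hash the flow key so one flow always uses the same member link."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return LINKS[zlib.crc32(key) % len(LINKS)]

# Packets of the same flow map to the same link; different flows spread out.
a = pick_link("10.0.0.1", "10.0.0.9", 40000, 80)
b = pick_link("10.0.0.1", "10.0.0.9", 40000, 80)
assert a == b  # flow stickiness
```

Keeping a flow on a single link avoids packet reordering, which is why real aggregation schemes hash per flow rather than per packet.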

For IP networks, VRRP (Virtual Router Redundancy Protocol) can be used. VRRP creates a virtual default gateway address; when the master router becomes unreachable, a backup router takes over the address so that LAN communication continues.
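The core of the VRRP idea can be sketched as a priority election: hosts always point at the virtual gateway address, and whichever live router has the highest priority owns it. This is only a conceptual sketch, not the real protocol; the router names and priorities are illustrative:

```python
# Conceptual sketch of VRRP-style gateway failover (not the actual protocol).
# Hosts configure the virtual gateway IP; routers elect the owner by priority.

VIRTUAL_GATEWAY = "192.168.1.254"  # address hosts use as their default gateway

def elect_master(routers):
    """Pick the live router with the highest priority to own the virtual IP."""
    alive = [r for r in routers if r["alive"]]
    return max(alive, key=lambda r: r["priority"])["name"] if alive else None

routers = [
    {"name": "R1", "priority": 200, "alive": True},
    {"name": "R2", "priority": 100, "alive": True},
]
assert elect_master(routers) == "R1"
routers[0]["alive"] = False           # master fails...
assert elect_master(routers) == "R2"  # ...backup takes over the virtual IP
```

Because the hosts only know the virtual address, the failover is invisible to them; no client reconfiguration is needed.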

Higher-Layer Switching Technology

Large networks are generally composed of many specialized devices, including firewalls, routers, Layer 2/Layer 3 switches, load balancers, cache servers, and Web servers. How to combine these devices organically is a key issue that directly affects network performance.

Many switches now provide Layer 4 switching, which maps one external IP address to multiple internal IP addresses: each new TCP connection is dynamically assigned to one of the internal addresses, achieving load balancing. Some protocols also include features that support load balancing, such as redirection in HTTP.
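The Layer 4 mapping described above can be sketched as a small connection table: a new TCP connection is assigned an internal server, and later packets of the same connection keep going to it. The backend addresses and round-robin policy here are illustrative assumptions:

```python
# Sketch of Layer 4 load balancing: one external IP, many internal servers.
# Each new TCP connection is assigned a backend; the mapping is remembered so
# subsequent packets of that connection reach the same server.
import itertools

INTERNAL_IPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

class L4Switch:
    def __init__(self, backends):
        self._rr = itertools.cycle(backends)  # simple round-robin policy
        self._conn_table = {}                 # (client_ip, client_port) -> backend

    def route(self, client_ip, client_port):
        key = (client_ip, client_port)
        if key not in self._conn_table:       # new connection: pick a backend
            self._conn_table[key] = next(self._rr)
        return self._conn_table[key]

sw = L4Switch(INTERNAL_IPS)
first = sw.route("203.0.113.5", 51000)
assert sw.route("203.0.113.5", 51000) == first  # same connection, same server
```

Keeping per-connection state is what distinguishes Layer 4 switching from stateless per-packet forwarding: a TCP stream must not be split across servers.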

Server Clusters with Balancing Policies

A single ordinary server can handle tens of thousands to hundreds of thousands of requests per second, but not millions. If, however, a system is composed of ten such servers and software distributes all requests evenly among them, the system as a whole can process millions of requests per second. This is the basic idea behind using server clusters to achieve load balancing.
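The arithmetic above can be sketched directly: distributing a million requests round-robin over ten servers leaves each with a tenth of the load, so aggregate capacity scales with server count. The server names are illustrative:

```python
# Sketch of the cluster idea above: spread requests evenly across ten servers
# so aggregate capacity scales with the number of machines.
from collections import Counter

SERVERS = [f"srv{i}" for i in range(10)]

def dispatch(n_requests):
    """Assign requests round-robin and count how many each server receives."""
    counts = Counter()
    for i in range(n_requests):
        counts[SERVERS[i % len(SERVERS)]] += 1  # round-robin assignment
    return counts

counts = dispatch(1_000_000)
assert all(c == 100_000 for c in counts.values())  # perfectly even split
```

With each server handling only 100,000 of the million requests, a cluster of machines that individually top out at hundreds of thousands of requests per second can serve millions in aggregate.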

A newer approach uses LSANT (Load Sharing Network Address Transfer) to translate the different IP addresses of multiple server NICs into one virtual IP address, keeping every server in service. Work that once required a minicomputer is done by several PC servers instead. This elastic solution protects investment well: it avoids both the heavy equipment cost of a rigid minicomputer upgrade and repeated spending on staff training, while letting operators adjust the number of servers at any time as the business requires.

Network Performance Depends on "Balance"

Load balancing technology has developed in step with market demand, and its functions have grown steadily more complex and powerful. Its products and solutions have gone through several generations of evolution.

The first generation of load balancing products were simple round-robin DNS systems that distributed HTTP requests across several IP hosts. They used simple PING commands to ensure that requests were not sent to a failed server, and tracked a failover indicator for each of the servers.
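That first-generation behavior can be sketched as a rotating resolver that skips hosts failing a liveness probe. The probe here is a stand-in predicate (real systems used ICMP echo), and the addresses are illustrative:

```python
# Sketch of first-generation round-robin DNS with a liveness check.
# The is_alive callable stands in for the PING probe mentioned above.
import itertools

class RoundRobinDNS:
    def __init__(self, hosts, is_alive):
        self._hosts = hosts
        self._alive = is_alive           # callable: host -> bool (the "ping")
        self._rr = itertools.cycle(hosts)

    def resolve(self):
        """Return the next host that answers the liveness probe."""
        for _ in range(len(self._hosts)):
            host = next(self._rr)
            if self._alive(host):
                return host
        return None                      # no healthy host available

down = {"198.51.100.2"}
dns = RoundRobinDNS(["198.51.100.1", "198.51.100.2", "198.51.100.3"],
                    lambda h: h not in down)
answers = [dns.resolve() for _ in range(4)]
assert "198.51.100.2" not in answers     # dead host is never handed out
```

The weakness of this generation is visible in the sketch: a host that responds to PING but is overloaded still receives its full share of traffic.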

The second generation not only checks whether a server is running but also monitors its performance. If a server becomes overloaded, incoming requests are forwarded to other machines so that the load stays evenly distributed across all available resources.
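A minimal sketch of this second-generation behavior is least-load selection: each request goes to the server reporting the lowest current load, so an overloaded machine is automatically bypassed. The server names and load numbers are illustrative:

```python
# Sketch of second-generation behavior: route each request to the server
# reporting the lowest current load, bypassing overloaded machines.

def pick_server(loads):
    """loads: dict of server -> current load (e.g. active connections)."""
    return min(loads, key=loads.get)

loads = {"web1": 87, "web2": 12, "web3": 55}
assert pick_server(loads) == "web2"     # least-loaded server wins
loads["web2"] = 95                      # web2 becomes overloaded
assert pick_server(loads) == "web3"     # traffic shifts elsewhere
```

In practice the load metric might be active connections, CPU utilization, or response time; the selection logic stays the same.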

Third-generation products cover the entire content delivery system. As Web and network services matured, monitoring a single layer of Web servers was no longer enough; these products ensure the smooth operation of the whole content delivery system.

Four Advantages of Network Load Balancing

1. Network load balancing lets incoming requests be spread across up to 32 servers, so as many as 32 machines can share external request traffic. This ensures fast responses even under heavy load.

2. Only one IP address (or domain name) needs to be exposed for Internet-facing load balancing.

3. If one or more servers in the load-balanced group become unavailable, service is not interrupted: when the failure is automatically detected, client traffic is quickly redistributed among the remaining servers. This protection helps provide uninterrupted service for critical business applications. Servers can also be added to the group as network traffic grows.

4. Network load balancing can be implemented on ordinary commodity computers.

 
