Cluster Load Balancing Technology

Source: Internet
Author: User
Tags: domain name server

Traffic on enterprise networks, campus networks, and wide-area networks such as the Internet is growing faster than even the most optimistic past estimates predicted. With the Internet boom and new applications appearing one after another, even a network built to the best configuration of its day soon feels overwhelmed. This is especially true at the core of a network, where data volume and computational load have grown beyond what any single device can handle. The question then becomes how to distribute work sensibly across multiple devices with the same function, so that no one device is overloaded while others leave their processing capacity unused. This is the problem load balancing was created to solve.

Built on the existing network structure, load balancing offers a cheap and effective way to expand server bandwidth, increase throughput, strengthen network data-processing capacity, and improve network flexibility and availability. Its main tasks are: relieving network congestion and providing service from nearby nodes, independent of geographic location; giving users better access quality; improving server response speed; raising the utilization of servers and other resources; and avoiding single points of failure in critical parts of the network.

Strictly speaking, "load balancing" is not balancing in the traditional sense: in general the technique spreads load that would otherwise pile up in one place across many places, so "load sharing" might be the clearer name. Put plainly, a load balancer works like a rotating duty roster, assigning tasks to everyone in turn so that no one person is worn out. Balancing in this sense is static: the "rotation" policy is decided in advance.
Unlike a fixed duty roster, dynamic load balancing uses tools to analyze packets in real time, track traffic conditions on the network, and assign work accordingly. Structurally, load balancing divides into local load balancing and regional (global) load balancing; the latter balances load across server clusters in different geographic locations and on different networks.

In a server cluster, each service node runs an independent copy of the required server program, such as a web, FTP, Telnet, or e-mail server. For some services (such as those running on web servers), a copy of the program runs on every host in the cluster, and network load balancing distributes the workload among those hosts. For other services (such as e-mail), only one host handles the workload at a time; for these, network load balancing directs all traffic to that host and, when it fails, moves the traffic to another host.

In the broad sense, load balancing can be implemented with dedicated gateways and load-balancing appliances, or with specialized software and protocols. Where a load-balancing technique applies in a network depends on which level the bottleneck sits at, which the following sections analyze.
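The static "duty roster" rotation and the failover behavior described above can be sketched as follows; the host names are hypothetical:

```python
class RoundRobinBalancer:
    """Static 'duty roster' balancing: hand each request to the next host in turn."""

    def __init__(self, hosts):
        self.hosts = list(hosts)
        self.index = 0
        self.down = set()        # hosts known to have failed

    def mark_down(self, host):
        self.down.add(host)

    def next_host(self):
        # Walk the rotation, skipping failed hosts so their traffic
        # moves to a surviving host (the failover behavior described above).
        for _ in range(len(self.hosts)):
            host = self.hosts[self.index % len(self.hosts)]
            self.index += 1
            if host not in self.down:
                return host
        raise RuntimeError("no healthy hosts left")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
print([lb.next_host() for _ in range(4)])   # rotation wraps: web1, web2, web3, web1
lb.mark_down("web2")
print([lb.next_host() for _ in range(3)])   # web2 is skipped from now on
```

The rotation itself is decided in advance; only the failure handling reacts to events, which is what separates this static scheme from the dynamic techniques below.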
Analyzing vertically from the client application downward, with reference to the OSI layered model, load-balancing technology can be divided into client-based load balancing, application-server (middle-tier) load balancing, high-level protocol content switching, network access protocol (layer-4) switching, and other methods.

◆ Client-based load balancing runs a dedicated program on each client on the network. The program periodically or on demand collects runtime parameters from the server group, dynamic information such as CPU usage, disk I/O, and memory, then, according to some selection policy, finds the server best able to provide service and sends the local application's requests to it. If that server fails, it finds an alternative. The whole process is completely transparent to the application and handled at run time, so this too is a dynamic load-balancing technique.

This approach is not general-purpose, however. The collector program must be installed on every client, and to keep operation transparent at the application layer, each application must be modified, through dynamic linking or embedded code, so that its requests first pass through the collector and are redirected. That amounts to re-development work for nearly every application, so the technique is used only in special scenarios, for example proprietary tasks that require distributed computing but involve little ordinary application development. A Java-style architecture is also often used to achieve distributed load balancing: because Java applications run on a virtual machine, an intermediate layer can be designed between the application and the virtual machine to handle the balancing work.
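The client-side selection policy described above can be sketched as follows; the metric weights and server names are illustrative assumptions, not from the article:

```python
def pick_server(stats, failed=frozenset()):
    """Client-side dynamic selection: score each server from the collected
    runtime metrics (CPU, memory, disk I/O, each 0.0-1.0) and return the
    least loaded one. The weights below are illustrative, not prescribed."""
    healthy = {name: m for name, m in stats.items() if name not in failed}
    if not healthy:
        raise RuntimeError("no server available")

    def load(m):
        return 0.5 * m["cpu"] + 0.3 * m["mem"] + 0.2 * m["disk_io"]

    return min(healthy, key=lambda name: load(healthy[name]))

# Metrics a client-side collector might have gathered (hypothetical values):
stats = {
    "srv1": {"cpu": 0.90, "mem": 0.70, "disk_io": 0.40},
    "srv2": {"cpu": 0.20, "mem": 0.30, "disk_io": 0.10},
    "srv3": {"cpu": 0.50, "mem": 0.50, "disk_io": 0.50},
}
print(pick_server(stats))                    # srv2 has the lowest load score
print(pick_server(stats, failed={"srv2"}))   # fall back to the next-best server
```

In a real deployment the `stats` table would be refreshed periodically by the collector program; here it is a static snapshot.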
◆ Application-server-based load balancing moves the balancing layer from the client to an intermediate platform, forming a three-tier structure. Client applications need no modification; the middle-tier application server transparently balances their requests across the appropriate service nodes. The most common implementation is reverse proxying: a reverse proxy server forwards requests evenly to multiple servers, or returns cached data to the client directly. This acceleration mode improves access speed for static web pages to some extent while achieving load balancing, and its advantage is that balancing can be combined with the proxy's high-speed caching for better performance.

It also has drawbacks. First, a reverse proxy must be developed for each service, which is no easy task. And although the proxy server itself can be made efficient, it must maintain two connections per request, one external and one internal, so under extremely high connection rates its own load becomes very heavy. A reverse proxy can apply balancing policies optimized for the application protocol, handing each request to the idlest internal server, but as concurrent connections grow, the proxy server itself becomes the service bottleneck.

◆ Domain-name-based load balancing. NCSA's scalable web system was the first to use dynamic DNS round robin: multiple addresses are configured under the same name in DNS, so a client querying that name obtains one of the addresses. Different clients thus reach different servers, and load is balanced.
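The reverse-proxy forwarding and caching described above can be sketched as follows; the `fetch` callback stands in for the real upstream HTTP call, and the backend names are hypothetical:

```python
class ReverseProxy:
    """Middle-tier sketch: clients talk only to the proxy, which forwards
    requests round-robin to backends and answers cached static pages itself."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.next = 0
        self.cache = {}          # path -> cached response body

    def handle(self, path, fetch):
        if path in self.cache:                 # cache hit: answer directly,
            return self.cache[path], "cache"   # no backend connection needed
        backend = self.backends[self.next % len(self.backends)]
        self.next += 1                         # note: two connections per miss,
        body = fetch(backend, path)            # one to the client, one upstream
        if path.endswith(".html"):             # cache only static pages
            self.cache[path] = body
        return body, backend

proxy = ReverseProxy(["app1", "app2"])
fake_fetch = lambda backend, path: f"<{backend}>{path}"
print(proxy.handle("/index.html", fake_fetch))  # served by app1, then cached
print(proxy.handle("/search?q=x", fake_fetch))  # dynamic, served by app2
print(proxy.handle("/index.html", fake_fetch))  # now answered from the cache
```

The per-miss upstream call is exactly the "two connections per request" cost the text mentions; the cache is what lets the proxy pay it only once for static pages.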
This technique has been used on many well-known web sites, including early Yahoo and 163. Dynamic DNS round robin is easy to implement, needs no complex configuration or management, and is supported by BIND 8.2 and later on Unix-like systems, so it is widely used.

DNS load balancing is simple and effective, but it has several problems. First, the domain name server cannot tell whether a service node is alive; if a node fails, DNS will still resolve the name to that node, and those users' accesses fail. Second, DNS data carries a TTL (time to live) that marks its refresh interval; once the TTL expires, other DNS servers must contact the authoritative server again and may obtain a different IP address. To keep address assignment effectively random, the TTL should be kept short, so that DNS servers everywhere refresh their records and obtain addresses at random; but setting the TTL too short increases DNS traffic and causes additional network problems. Finally, DNS cannot distinguish between servers or reflect their current running state. DNS balancing relies on different clients mapping evenly onto different addresses, but one user (A) may browse only a few web pages while another (B) downloads heavily; since the domain name system has no real load policy, only simple round robin, A's requests may easily land on a lightly loaded site while B's land on a heavily loaded one. In terms of dynamic balancing, then, DNS round robin falls short of ideal.

◆ Beyond the methods above, the content-switching technology of higher-level protocols, known as URL switching or layer-7 switching, builds load-balancing capability into the protocol itself and provides a high-level means of steering access traffic.
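The DNS round-robin mechanism, and its blindness to failed nodes, can be sketched as an authoritative server that simply rotates its A records; the zone contents are hypothetical:

```python
from collections import deque

class RoundRobinDNS:
    """Sketch of dynamic DNS round robin: one name, several A records,
    rotated so successive queries see a different first address. The
    server has no idea whether any address is alive: a failed server
    keeps being handed out until the record is changed and downstream
    caches expire (the TTL problem described above)."""

    def __init__(self, zone):
        self.zone = {name: deque(addrs) for name, addrs in zone.items()}

    def resolve(self, name):
        addrs = self.zone[name]
        addrs.rotate(-1)          # next query starts from a different address
        return list(addrs)

dns = RoundRobinDNS({"www.example.com": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]})
print(dns.resolve("www.example.com")[0])   # 10.0.0.2
print(dns.resolve("www.example.com")[0])   # 10.0.0.3
print(dns.resolve("www.example.com")[0])   # back to 10.0.0.1
```

Note that nothing in `resolve` consults server health or load, which is precisely why the article calls the dynamic-balancing effect of DNS round robin "not ideal".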
Web content switching examines the full HTTP header and makes load-balancing decisions from the information found there; for example, it can decide how to serve personal home pages, image data, and other content, with HTTP redirection as a common case. HTTP runs on top of a TCP connection: the client connects to the server over TCP on the well-known port 80, then sends HTTP requests across that connection. Protocol-level switching controls the load by content policy rather than by TCP port number, so traffic is not steered blindly. But because the switching device distributes incoming requests among multiple servers, it must first accept the TCP connection and can only make its balancing decision once the HTTP request arrives. When a site receives hundreds or even thousands of hits per second, the latency of TCP connection handling and HTTP header analysis becomes very significant, and every effort must be made to improve the performance of those parts.

HTTP requests and their headers carry a great deal of information useful for load balancing: from them we can recover the URL and web page the client requested. With this information, the balancing device can direct all image requests to an image server, or, for URLs that invoke CGI database queries, direct the request to a dedicated high-performance database server. A network administrator familiar with content switching can also use the Cookie field in the HTTP header to provide better service to specific customers, and any other pattern found in HTTP requests can likewise drive routing decisions.
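The URL- and cookie-based decisions described above can be sketched as a routing function; the backend names and the `tier=gold` cookie are illustrative assumptions, not from the article:

```python
def route_request(path, headers):
    """Layer-7 (content) switching sketch: the decision is driven by the
    HTTP request itself, not by the TCP port number. Backend names and
    the 'tier=gold' cookie are hypothetical."""
    if path.startswith("/images/"):
        return "image-server"           # all image traffic to one server
    if path.startswith("/cgi-bin/"):
        return "db-server"              # CGI database queries to a fast backend
    if "tier=gold" in headers.get("Cookie", ""):
        return "premium-server"         # better service for specific customers
    return "web-server"

print(route_request("/images/logo.png", {}))                  # image-server
print(route_request("/cgi-bin/query", {}))                    # db-server
print(route_request("/home", {"Cookie": "tier=gold; id=7"}))  # premium-server
print(route_request("/home", {}))                             # web-server
```

A real device runs logic like this only after accepting the TCP connection and parsing the headers, which is why header-parsing latency matters so much at high request rates.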
Beyond the TCP connection table, the key issue affecting the performance of web content switching is how quickly the relevant HTTP header information can be located and a balancing decision made. If the web servers themselves are optimized for particular workloads such as image serving, SSL sessions, or database transaction services, this layer of traffic control can improve overall network performance.

◆ Network access protocol (layer-4) switching. A large network is generally composed of many dedicated devices, such as firewalls, routers, layer-3 and layer-4 switches, load balancers, cache servers, and web servers, and combining them coherently is a key issue that directly affects network performance. Many switches now provide layer-4 switching: they present a single consistent IP address mapped to multiple internal IP addresses, and for each TCP or UDP connection request they dynamically select an internal address, according to the port number and the configured policy, and forward the packets to that address.
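The per-connection mapping a layer-4 switch performs can be sketched as a hash over the connection tuple, so that all packets of one TCP or UDP flow reach the same internal server; the internal address pool is hypothetical:

```python
import hashlib

INTERNAL = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]  # real server pool

def select_backend(proto, src_ip, src_port, dst_port):
    """Layer-4 switching sketch: the switch advertises one consistent IP
    and maps each TCP/UDP connection to an internal address. Hashing the
    connection tuple keeps every packet of one flow on the same server."""
    key = f"{proto}:{src_ip}:{src_port}:{dst_port}".encode()
    digest = int(hashlib.sha1(key).hexdigest(), 16)
    return INTERNAL[digest % len(INTERNAL)]

# The same connection always maps to the same internal server:
a = select_backend("tcp", "203.0.113.7", 51324, 80)
b = select_backend("tcp", "203.0.113.7", 51324, 80)
print(a == b)   # True
```

Unlike the layer-7 example, nothing here looks inside the payload: only the protocol, addresses, and port numbers drive the choice, which is exactly what limits and what speeds up layer-4 switching.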

