Application of Server Load Balancing Technology


Because network traffic tends to concentrate on the central servers, load balancing is commonly used to distribute (or share) the load across the servers being accessed. Load balancing can be divided into local load balancing and regional load balancing (also called global server load balancing); the latter refers to balancing load across server clusters located in different geographic regions and on different networks.

Each host runs an independent copy of the required server program, such as a Web, FTP, Telnet, or e-mail server. For some services, such as those running on Web servers, a copy of the program runs on every host in the cluster, and network load balancing distributes the workload among those hosts. For other services, such as e-mail, only one host handles the workload; for these services, network load balancing directs all traffic to that host and, when it fails, moves the traffic to another host.

■ DNS

The earliest load balancing technique was implemented through DNS: the same name is configured with multiple addresses, so each client that queries the name receives one of those addresses. In this way, different clients reach different servers, and the load is balanced among them.
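
The effect can be illustrated with a short sketch (Python, standard library only; the hostname www.example.com and its multiple A records are hypothetical): a name that publishes several addresses is resolved, and rotating through the returned list approximates how successive DNS answers spread clients across the servers.

    import itertools
    import socket

    # Hypothetical name assumed to publish several A records (round-robin DNS).
    HOSTNAME = "www.example.com"

    def resolve_all(hostname):
        """Return every IPv4 address currently published for the name."""
        infos = socket.getaddrinfo(hostname, 80, family=socket.AF_INET,
                                   type=socket.SOCK_STREAM)
        # Each entry is (family, type, proto, canonname, (ip, port)).
        return sorted({info[4][0] for info in infos})

    if __name__ == "__main__":
        addresses = resolve_all(HOSTNAME)
        # Rotating through the list mimics successive queries landing on different servers.
        for address in itertools.islice(itertools.cycle(addresses), 5):
            print(address)

Because the rotation happens in the DNS answers themselves, no component ever sees the servers' actual load, which is exactly the limitation discussed next.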

DNS load balancing is a simple and effective method, but it cannot distinguish between servers or reflect their current running state. When using DNS load balancing, you must ensure that different client machines obtain the different addresses evenly. DNS records carry a refresh time: once it expires, other DNS servers must query the authoritative server again and may then receive a different IP address. To keep the address allocation reasonably random, the refresh time should be as short as possible, so that DNS servers in different places re-fetch the records and hand out the addresses randomly. However, if the expiration time is set too short, DNS traffic increases and causes additional network problems. Another problem with DNS load balancing is that when a server fails, even if the DNS records are corrected immediately, the change still takes the refresh time to propagate; during this period, clients that have cached the address of the failed server cannot reach the service.

Despite these problems, DNS load balancing remains a very effective method, and many large websites, including Yahoo, use it.

■ Proxy Server

A proxy server can forward requests to internal servers. This acceleration mode noticeably improves access speed for static Web pages. The same idea can also be applied to load balancing: the proxy server forwards requests evenly to multiple internal servers, thereby distributing the load.

This differs from the ordinary proxy mode. In the standard mode, clients use the proxy to access multiple external servers; here, the proxy forwards requests from many clients to internal servers, which is why it is called reverse proxy mode. The task itself is not very complex, but implementing it well is not easy because the efficiency requirements are high.
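
As a rough illustration, the following sketch (standard library Python; the two backend addresses are hypothetical) forwards each incoming GET request to one of several internal servers in turn, which is the essence of reverse proxy load balancing:

    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical internal servers behind the proxy.
    BACKENDS = itertools.cycle(["http://10.0.0.11:8080", "http://10.0.0.12:8080"])

    class ReverseProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = next(BACKENDS)  # pick the next internal server for this request
            with urllib.request.urlopen(backend + self.path) as upstream:
                body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # The proxy holds two connections per request: one to the client, one to a backend.
        HTTPServer(("0.0.0.0", 8000), ReverseProxyHandler).serve_forever()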

The advantage of a reverse proxy is that it can combine load balancing with the proxy server's caching to improve performance. It also has drawbacks, however. First, a reverse proxy must be developed for each service, which is not an easy task.

Although the proxy server itself can be made very efficient, it must maintain two connections for every request it proxies: one external connection to the client and one internal connection to the server. Under an extremely high number of connection requests, the load on the proxy therefore becomes very large. In reverse proxy mode, an optimized balancing policy can be applied so that each request is served by the currently idle internal server; but as the number of concurrent connections grows, the load on the proxy itself becomes very large, and the reverse proxy server itself becomes the service bottleneck.
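
One such optimized policy is "least connections": each new request goes to the backend that is currently serving the fewest requests. A minimal sketch of the bookkeeping follows (the backend names are hypothetical, and a real proxy would guard these counters with a lock):

    # Hypothetical internal servers and their active-connection counts.
    active = {"10.0.0.11:8080": 0, "10.0.0.12:8080": 0, "10.0.0.13:8080": 0}

    def acquire():
        """Pick the backend with the fewest active connections and mark it busy."""
        backend = min(active, key=active.get)
        active[backend] += 1
        return backend

    def release(backend):
        """Call when the proxied request has finished."""
        active[backend] -= 1

    if __name__ == "__main__":
        picked = [acquire() for _ in range(4)]
        print(picked)   # requests spread across the least-loaded backends
        for backend in picked:
            release(backend)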

■ Address translation gateway

An address translation gateway that supports load balancing can map one external IP address to multiple internal IP addresses and dynamically select one of the internal addresses for each TCP connection request. Many hardware vendors integrate this technique into their switches as a layer-4 switching function; the balancing policy usually selects a server at random or according to the servers' connection counts or response times. Because address translation sits relatively close to the lower layers of the network, it can be integrated into hardware devices, which are typically LAN switches.
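
The per-connection mapping can be sketched in user space as a plain TCP forwarder (Python, standard library; the internal addresses are hypothetical). A real address translation gateway or layer-4 switch performs the same mapping in the kernel or in switch hardware rather than copying bytes in an application:

    import itertools
    import socket
    import threading

    # Hypothetical internal servers behind the single external address.
    BACKENDS = itertools.cycle([("10.0.0.11", 8080), ("10.0.0.12", 8080)])

    def pipe(src, dst):
        """Copy bytes one way until the connection closes."""
        try:
            while data := src.recv(4096):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()

    def handle(client):
        backend = socket.create_connection(next(BACKENDS))  # one internal address per TCP connection
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

    if __name__ == "__main__":
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", 8000))   # the one external address that clients see
        listener.listen()
        while True:
            conn, _ = listener.accept()
            handle(conn)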

The so-called layer-4 switching in LAN switches exchanges virtual connections based on IP address and TCP port, sending packets directly to the corresponding port of the target computer. Through the switch, incoming external connection requests can be associated with multiple internal addresses, and the established virtual connections are then switched among them. Therefore, some LAN switches with layer-4 switching capability can serve as hardware load balancers. Because layer-4 switching is implemented in hardware chips, its performance is very good; in particular, its transmission and switching speed far exceeds ordinary packet forwarding. However, precisely because it is implemented in hardware, it is not flexible enough to handle load balancing for even the most common application protocols, such as HTTP, beyond the connection level. Since load balancing is currently used mainly to solve the problem of insufficient server processing capacity, the high network bandwidth offered by the switch often cannot be fully exploited.

■ Internal protocol support

In addition to these three load balancing methods, some protocols themselves support functions relevant to load balancing, such as the redirection capability in HTTP. HTTP runs on top of TCP: the client first establishes a TCP connection to the server on port 80 and then sends its HTTP request over that connection. Before the server can tell which page or resource the client wants, at least four TCP packets must be exchanged. Because a load balancing device must distribute incoming requests across multiple servers, it can only decide how to balance the load after the TCP connection has been established and the HTTP request has arrived. When a website receives hundreds or even thousands of hits per second, the overhead of TCP connections, HTTP header processing, and process latency becomes very significant.

HTTP requests and headers contain a great deal of information useful for load balancing. First and most importantly, the URL and page requested by the client can be extracted from them; with this information, the load balancing device can direct all image requests to an image server, or, based on the URL of a database query handled by a CGI program, direct the request to a dedicated high-performance database server. The only factor limiting how much of this information can be exploited is the flexibility of the load balancing device. In fact, a network administrator familiar with Web content switching can use the cookie field in the HTTP header to improve service for specific customers, and any other patterns found in HTTP requests can likewise be used to drive various decisions. Besides the TCP connection table problem, finding the appropriate HTTP header information and making load balancing decisions from it is the key issue affecting the performance of Web content switching.
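
A small sketch of such a layer-7 decision (Python standard library; the image server and application server names are hypothetical): the device reads the request, examines the URL, and uses HTTP's redirection capability to send image requests to a dedicated image server and everything else to the application server.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    IMAGE_SERVER = "http://img.example.com"   # hypothetical dedicated image server
    APP_SERVER = "http://app.example.com"     # hypothetical application server

    class ContentSwitchHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The requested URL is only known after the TCP connection is up
            # and the HTTP request line and headers have been read.
            if self.path.lower().endswith((".png", ".jpg", ".gif")):
                target = IMAGE_SERVER
            else:
                target = APP_SERVER
            self.send_response(302)                      # HTTP redirect to the chosen server
            self.send_header("Location", target + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), ContentSwitchHandler).serve_forever()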

However, because these techniques depend on specific protocols, their scope of use is limited. In practice, combining the existing load balancing technologies described above with an optimized balancing policy gives the best results for backend server load balancing.

 

Address: http://blog.csdn.net/diy8187/archive/2008/06/27/2591823.aspx
