Using Network Address Translation for Multi-Server Load Balancing


Abstract: This article discusses server load balancing technology and the load distribution strategies used by distributed network servers, and implements a load balancing gateway on FreeBSD based on network address translation. The gateway is applied to our Internet servers and distributes the load across multiple servers, relieving the high CPU and I/O load caused by a large number of concurrent accesses. To achieve the best balancing effect, the load controller must allocate load according to the current CPU and I/O status of each server. This requires dynamic monitoring of server load and an optimized distribution policy so that the load is spread evenly.

Keywords: server load balancing, network address translation, FreeBSD


1. Introduction

The rapid growth of the Internet has sharply increased the number of accesses to multimedia network servers. A server must be able to handle a large number of concurrent accesses, and its processing and I/O capacities have become bottlenecks in service provision. Because the performance of a single server is always limited, multi-server and load balancing technologies must be used to meet the demand of massive concurrent access.

The earliest load balancing technology was implemented through DNS: the same name is configured with multiple addresses, so each client that queries the name receives one of the addresses, and different clients reach different servers, achieving load balancing [1]. DNS load balancing is a simple and effective method, but it cannot distinguish between servers or reflect their current running state.

A reverse proxy server can forward requests to internal Web servers; if the proxy forwards requests evenly across multiple internal servers, load balancing is achieved [2]. In reverse proxy mode, an optimized balancing policy can be applied so that the idlest internal server handles each request. However, as the number of concurrent connections increases, the load on the proxy itself becomes very large, and the reverse proxy server becomes the service bottleneck.

In an address translation gateway that supports load balancing, one external IP address is mapped to multiple internal IP addresses, and each TCP connection request dynamically uses one of the internal addresses, thereby balancing the load [3]. Many hardware vendors integrate this technology into their switches as a layer-4 switching function; the balancing policy is generally random selection, or assignment based on the number of server connections or the response time. However, hardware-based load balancing is not flexible enough to support more refined balancing policies or more complex application protocols.

Besides these three methods, some protocols offer functions related to load balancing, such as the redirection capability in HTTP, but these depend on specific protocols and their scope of use is therefore limited. Building on the existing technology, we chose to implement network address translation in software, making up for the inflexibility of hardware load balancing and applying an optimized balancing policy to achieve the best balance across the backend servers.


2. Load balancing policy

In order to distribute the load evenly across multiple internal servers, a suitable load balancing policy must be applied. Traditional policies do not consider the different types of service requests, the differing capabilities of the backend servers, or the uneven distribution caused by random selection. For the distribution to be truly even, the policy must correctly reflect the CPU and I/O status of each server [4].

The types of service requests initiated by clients are diverse. According to their demands on CPU, network, and I/O resources, they can be roughly divided into two categories, handled with different policies:



Static document requests: common text, images, and other static multimedia data place little load on the processor but produce disk and network I/O load proportional to the file size.


Dynamic document requests: these requests must first be processed by the server, for example by searching a database or compressing and decompressing multimedia files, and they consume considerable processor and disk I/O resources.


For static documents, each service process occupies roughly the same system resources, so the number of processes can represent the system load. Dynamic document service requires additional processing, and the resources it occupies exceed those of static request handling, so a weight is needed to represent it. The simplest expression of the server load is then:

    L = Ns + a * Nd

where L denotes the server load, Ns the number of static document service processes, Nd the number of dynamic document service processes, and a the weight of each dynamic document service relative to a static one, which can be chosen between 10 and 100.

This formula does not take server hardware limits into account. When a hardware limit is reached, the server load rises sharply because of resource shortage: for example, when the server's limited memory is exhausted, some processes are swapped to disk and the system load increases rapidly. Taking the hardware limit into account, the server load can be expressed as:

    L = Ns + a * Nd                      when Ns + a * Nd <= Ll
    L = Ll + b * (Ns + a * Nd - Ll)      when Ns + a * Nd > Ll

The new parameter Ll denotes the limit of normal load on the server and must be set according to each server's hardware capability; b is the weight applied to the load in excess of that limit, and it should be greater than 1 to express the hardware limitation. Generally, in a server cluster, a server with weaker hardware needs a larger weight, so that when all servers are overloaded the weakest one does not end up carrying the highest load. b is therefore inversely proportional to the server's limit Ll and can be set as:

    b = Llmax / Ll

where Llmax is the Ll value of the best-equipped server in the cluster. Once each server's load is determined in this way, the central load distribution controller can dispatch requests to the idlest server, avoiding the uneven distribution that other policies produce.
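These formulas reduce naturally to a small piece of code. The following C sketch is illustrative only, assuming the reconstructed formulas above; the names effective_load and pick_server are ours, not the paper's, and the process counts would come from the monitoring daemons described in Section 3.

```c
/*
 * Minimal sketch of the load formulas of Section 2 (illustrative).
 * Process counts would come from each server's monitoring daemon.
 */
#include <stddef.h>

struct server_load {
    int    ns;     /* running static-document service processes      */
    int    nd;     /* running dynamic-document service processes     */
    double a;      /* weight of a dynamic request, typically 10..100 */
    double ll;     /* normal-load limit Ll of this server            */
    double llmax;  /* Ll of the best-equipped server in the cluster  */
};

/* L = Ns + a*Nd below the limit; excess load is weighted by b = Llmax/Ll. */
static double effective_load(const struct server_load *s)
{
    double base = s->ns + s->a * s->nd;
    double b    = s->llmax / s->ll;        /* larger for weaker servers */
    if (base <= s->ll)
        return base;
    return s->ll + b * (base - s->ll);     /* overload region */
}

/* Dispatch to the least-loaded server, as the central controller would. */
static size_t pick_server(const struct server_load *srv, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (effective_load(&srv[i]) < effective_load(&srv[best]))
            best = i;
    return best;
}
```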


3. Implementation method and experimental results

Our server system consists of multiple FreeBSD machines connected by Fast Ethernet. Each backend server runs a daemon that dynamically reports its own load status, and the central control gateway, also implemented on FreeBSD, polls these daemons to refresh each server's load for proper load distribution.
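As a rough illustration of such a daemon, the C sketch below answers UDP load queries with the current process counts Ns and Nd. The wire format, the port number, and the current_load stub are our assumptions, since the paper does not specify them; on FreeBSD the real counts could be obtained by scanning the process table, e.g. with kvm_getprocs().

```c
/*
 * Sketch of the per-server load daemon (protocol and port are
 * illustrative assumptions, not taken from the paper).
 */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define LOAD_PORT 9999   /* illustrative */

/* Stub: count static (Ns) and dynamic (Nd) service processes. */
static void current_load(int *ns, int *nd) { *ns = 12; *nd = 3; }

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(LOAD_PORT);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    for (;;) {
        char buf[64];
        struct sockaddr_in peer;
        socklen_t plen = sizeof peer;
        /* any datagram from the gateway is treated as a load query */
        recvfrom(fd, buf, sizeof buf, 0, (struct sockaddr *)&peer, &plen);
        int ns, nd;
        current_load(&ns, &nd);
        int len = snprintf(buf, sizeof buf, "%d %d\n", ns, nd);
        sendto(fd, buf, len, 0, (struct sockaddr *)&peer, plen);
    }
}
```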

3.1 The load balancing gateway

The FreeBSD system provides the divert interface to support network address translation. IP packets are diverted by the kernel's ipfw filtering function so that the external daemon natd receives the raw packets; after translation, it sends them back into the kernel for normal IP forwarding [5].

Therefore, building on FreeBSD's address translation architecture, one can write a custom network address translation daemon that supports load balancing, turning the FreeBSD system into a load balancing gateway. Because the translation is done in software, it can easily support non-standard protocols and application-optimized balancing policies, giving it great flexibility.
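For concreteness, here is a minimal C sketch of such a gateway's main loop over a divert socket, the same kernel interface natd uses. The ipfw rule, the port number, and the choose_backend helper are illustrative assumptions; checksum fixup and the reverse translation of replies are elided.

```c
/*
 * Sketch of the gateway loop over a FreeBSD divert socket.
 * A matching ipfw rule would look like
 *     ipfw add divert 8668 tcp from any to ${PUBLIC_IP} 80
 * (rule and port number are illustrative).
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in_systm.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <arpa/inet.h>

#define DIVERT_PORT 8668   /* must match the ipfw rule */

/* Stub: the real gateway would apply the policy of Section 2. */
static struct in_addr choose_backend(void)
{
    struct in_addr a;
    inet_aton("10.0.0.2", &a);   /* illustrative backend address */
    return a;
}

int main(void)
{
    int fd = socket(PF_INET, SOCK_RAW, IPPROTO_DIVERT);
    struct sockaddr_in bindaddr = { 0 };
    bindaddr.sin_family = AF_INET;
    bindaddr.sin_port   = htons(DIVERT_PORT);
    bind(fd, (struct sockaddr *)&bindaddr, sizeof bindaddr);

    for (;;) {
        unsigned char pkt[65535];
        struct sockaddr_in raw;              /* where the packet was diverted */
        socklen_t rlen = sizeof raw;
        ssize_t n = recvfrom(fd, pkt, sizeof pkt, 0,
                             (struct sockaddr *)&raw, &rlen);
        if (n < (ssize_t)sizeof(struct ip))
            continue;

        struct ip *iph = (struct ip *)pkt;
        iph->ip_dst = choose_backend();      /* rewrite destination address */
        /* ...recompute IP/TCP checksums and record the mapping so that
           replies can be translated back to the public address... */

        /* reinject: the kernel resumes normal IP forwarding [5] */
        sendto(fd, pkt, n, 0, (struct sockaddr *)&raw, rlen);
    }
}
```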

3.2 Experiment and analysis

To test the usability of this implementation, we ran our tests on the most common protocol, HTTP. To distinguish request types, three different kinds of tests were designed to measure different aspects of performance:



Dynamic documents generated by CGI programs: used to test how well the cluster balances the load on server processing capacity;


Small static documents: small files used to test load balancing under frequent connections;


Large static documents: large files used to test load balancing of disk and network I/O.


The test results are normalized to the number of requests per second completed by a single server; the figure shows the ratio of requests completed per second with multiple load-balanced servers to this single-server baseline.



Figure 1: Server load balancing performance



In the figure, the first curve, a, corresponds to dynamic document requests: its performance grows in step with the number of servers. The second curve, b, corresponds to small static document requests: with three servers, the improvement is already modest. The third curve, c, for large static document requests, shows almost no performance change. To find out why the load balancing system cannot reach the ideal state, we examined the utilization of server resources:

Table 1: Server resource utilization

    Processing type     Load balancing gateway   Server 1   Server 2   Server 3
    a (dynamic)         53%                      97%        95%        98%
    b (small static)    76%                      43%        39%        41%
    c (large static)    94%                      32%        31%        35%

From this table we can see that when processing dynamic documents (type a), all three servers run at full capacity and the load is evenly distributed, which is the ideal state. When processing the static document types b and c, the load is also distributed evenly across the three servers, but none of them runs at full capacity, especially for large documents, where the natd process on the gateway consumes most of the gateway's processing resources. Because all network traffic must pass through address translation, the load on the natd process rises as traffic volume and the number of concurrent connections grow. With different numbers of backend servers in the experiment, the actual network bandwidth flowing through the gateway was:

Table 2: Cluster bandwidth when serving large documents

    Number of servers       1           2           3
    Network speed (Kb/s)    10042.14    11015.10    11442.67

It can be seen that the bandwidth saturates at about 10 Mb/s; this is clearly the limit of the load balancing process used in this test. In fact, this program maintains the network address translation state in a linked list, which greatly limits its network performance; better hardware and better algorithms could raise this limit further.
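As a sketch of the algorithmic improvement, the session list can be replaced by a hash table so that each packet's NAT mapping is found in expected constant time rather than by walking one global list. The structure below is illustrative (our own assumption, not the paper's code) and omits entry expiry and locking.

```c
/*
 * Illustrative hash table for NAT session lookup, replacing the
 * linked-list state table that limited throughput in the experiment.
 */
#include <stdint.h>
#include <stdlib.h>

#define NAT_BUCKETS 4096              /* power of two for cheap masking */

struct nat_session {
    uint32_t src_ip, dst_ip;          /* client and chosen backend */
    uint16_t src_port;
    struct nat_session *next;         /* per-bucket collision chain */
};

static struct nat_session *buckets[NAT_BUCKETS];

static unsigned hash(uint32_t ip, uint16_t port)
{
    uint32_t h = ip ^ ((uint32_t)port << 16 | port);
    h ^= h >> 13; h *= 0x9e3779b1u; h ^= h >> 16;   /* cheap mixing */
    return h & (NAT_BUCKETS - 1);
}

/* O(1) expected lookup instead of an O(n) walk of one global list. */
static struct nat_session *lookup(uint32_t src_ip, uint16_t src_port)
{
    struct nat_session *s = buckets[hash(src_ip, src_port)];
    while (s && !(s->src_ip == src_ip && s->src_port == src_port))
        s = s->next;
    return s;
}

static struct nat_session *insert(uint32_t src_ip, uint16_t src_port,
                                  uint32_t backend_ip)
{
    unsigned b = hash(src_ip, src_port);
    struct nat_session *s = calloc(1, sizeof *s);
    s->src_ip = src_ip; s->src_port = src_port; s->dst_ip = backend_ip;
    s->next = buckets[b];
    buckets[b] = s;
    return s;
}
```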


4. Discussion

The experiments above show that load balancing based on network address translation can effectively spread the CPU and disk I/O load across servers, but the balancer's own performance is limited by network I/O, so its bandwidth is bounded under given hardware conditions. That bound can, however, be raised by improving the algorithm and the hardware of the balancing gateway. We can also see that different service types consume different server resources. The load measure we use treats all load alike, which is adequate for most conditions, but the better approach would be to monitor server load per resource, such as CPU, disk I/O, and network I/O, and have the central controller select the most suitable server for each client request. Our future work will proceed from these two aspects to improve the load balancing controller.
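A possible shape for such a per-resource controller, sketched entirely under our own assumptions (the paper does not implement it): each server reports separate CPU, disk, and network utilization, and the controller routes each request according to the resource it stresses most.

```c
/*
 * Speculative sketch of per-resource load distribution (future work
 * in the paper; names, the 64 KB cutoff, and the mapping of request
 * types to resources are illustrative assumptions).
 */
#include <stddef.h>

enum resource { RES_CPU, RES_DISK, RES_NET, RES_COUNT };

struct server_status {
    double util[RES_COUNT];   /* utilization in [0,1] per resource */
};

/* Map a request type to the resource it stresses most. */
static enum resource dominant_resource(int is_dynamic, long size)
{
    if (is_dynamic)
        return RES_CPU;                       /* CGI, database search */
    return size > 64 * 1024 ? RES_NET : RES_DISK;
}

static size_t pick_server_by_resource(const struct server_status *srv,
                                      size_t n, int is_dynamic, long size)
{
    enum resource r = dominant_resource(is_dynamic, size);
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (srv[i].util[r] < srv[best].util[r])
            best = i;
    return best;
}
```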


References:


[1] E. Katz, M. Butler, and R. McGrath. A scalable HTTP server: the NCSA prototype. Computer Networks and ISDN Systems, 1994, vol. 27, pp. 155-164.

[2] Ralf S. Engelschall. Load Balancing Your Web Site. Web Techniques Magazine (http://www.WebTechniques.com), May 1998, vol. 3, iss. 5.

[3] Cisco Systems. http://www.cisco.com, 1997.

[4] H. Zhu, T. Yang, Q. Zheng, D. Watson, O. H. Ibarra, and T. Smith. Adaptive load sharing for clustered digital library servers. Technical Report, CS, UCSB, 1998.

[5] FreeBSD core team. natd and divert manual pages. http://www.freebsd.org, 1995.



Implementing a load balancing gateway by NAT


Wang, Bo

NongYe Road 70, ZhengZhou, 450002, P.R. China

Wb@email.online.ha.cn


Abstract: This paper investigates load balancing techniques and strategies and implements a NAT-based load balancing gateway for our Internet servers. Internet servers suffer high CPU and I/O load under simultaneous access requests; a symmetrical cluster of servers can distribute the server load to solve this problem. To balance the load in the best way, the gateway distributes the load according to the status of each server's CPU and I/O. The gateway must monitor every server's load and apply the best scheme to deliver every request, so that it can provide high performance for Internet services.

Keywords: load balancing, NAT, FreeBSD
