Using Network Address Translation for Multi-Server Load Balancing

Abstract: This paper discusses load-balancing techniques for distributed network servers and strategies for load distribution, and implements a load-balancing gateway on FreeBSD based on network address translation, applied to our Internet servers to spread load across multiple machines. It addresses the high CPU and I/O load caused by large volumes of concurrent access to Internet servers. To achieve optimal balancing, the load controller must allocate work according to the current CPU and I/O status of each server, which requires dynamically monitoring each server's load and applying an optimized allocation strategy to reach an even load distribution.

Keywords: load balancing, network address translation, FreeBSD


1. Introduction

The rapid growth of the Internet has led to a rapid increase in the number of accesses to multimedia network servers. Servers must be able to serve large numbers of concurrent accesses, and server processing power and I/O capacity have become bottlenecks in service delivery. Because the performance of a single server is always limited, multi-server and load-balancing techniques must be used to meet the demands of massive concurrent access.

The earliest load-balancing technology was implemented through DNS: multiple addresses are configured under the same name, and a client querying the name receives one of those addresses, so different clients reach different servers and load balancing is achieved [1]. DNS load balancing is simple and efficient, but it cannot distinguish differences between servers, nor can it reflect the current running state of each server.

A reverse proxy server forwards requests to internal Web servers; if the proxy forwards requests evenly across multiple internal servers, it achieves load balancing [2]. An optimized load-balancing strategy can be applied in reverse-proxy mode, directing each access to the most idle internal server. However, as the number of concurrent connections increases, the load on the proxy itself becomes very large, and eventually the reverse proxy itself becomes the service bottleneck.

An address-translation gateway that supports load balancing can map one external IP address to multiple internal IP addresses, dynamically choosing one of the internal addresses for each TCP connection request to achieve load balancing [3]. Many hardware vendors integrate this technology into their switches as a layer-4 switching function, typically distributing load by random selection or by policies based on each server's connection count or response time. However, hardware load controllers are not very flexible and cannot support more refined load-balancing strategies or more complex application protocols.

In addition to these three methods, some protocols have built-in features related to load balancing, such as the redirection capability of the HTTP protocol; but these depend on a specific protocol, so their scope of use is limited. Weighing the existing load-balancing technologies, we chose to implement network-address-translation load balancing in software, compensating for the inflexibility of hardware load balancers, and to apply an optimized balancing strategy so that load sharing among the back-end servers reaches an optimal state.


2. Load Balancing Strategy

In order to distribute load evenly across multiple internal servers, a suitable load-balancing policy must be applied. Traditional load-balancing strategies do not take into account the different types of service requests or the different capabilities of the back-end servers, and random selection causes uneven load distribution. To make the distribution truly uniform, a policy is needed that correctly reflects the CPU and I/O status of each server [4].

Client service requests are diverse; according to their demands on processor, network, and I/O resources, they can be roughly divided into two categories, to which different processing strategies are applied:



Static document requests: ordinary text, images, and other static multimedia data. These have little impact on processor load; the disk I/O load they generate is proportional to document size, and the main pressure falls on network I/O.


Dynamic document requests: requests that require server-side processing, such as searching a database or compressing and decompressing multimedia files. These consume considerable processor and disk I/O resources.


For static documents, each service process consumes approximately the same system resources, so the number of processes can be used to represent system load. Dynamic document services require additional processing and consume more system resources than static requests, so they must be represented with a weight. One of the simplest formulas for server load is:

    L = Ns + a * Nd

where L is the server load, Ns is the number of static document service processes, Nd is the number of dynamic document service processes, and a is the weight of each dynamic document service relative to a static document service, which can be chosen between 10 and 100.
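As an illustrative sketch (the example process counts are hypothetical; Ns, Nd, and a follow the definitions above), the formula can be computed as:

```python
def server_load(ns: int, nd: int, a: float = 10.0) -> float:
    """Simplest load estimate from Section 2: L = Ns + a * Nd.

    ns -- number of static document service processes
    nd -- number of dynamic document service processes
    a  -- weight of a dynamic service relative to a static one (10..100)
    """
    return ns + a * nd

# Hypothetical snapshot: 20 static processes, 3 dynamic processes, a = 10
print(server_load(20, 3))  # 50.0
```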

This formula does not take the server's hardware limits into account; once a hardware limit is reached, resource exhaustion makes the server load rise sharply. For example, because server memory is finite, some processes get swapped out to disk and system load increases rapidly. Taking the hardware limit into account, the server load can be expressed as:

    L = Ns + a * Nd                     (when Ns + a * Nd <= Ll)
    L = Ll + b * (Ns + a * Nd - Ll)     (when Ns + a * Nd > Ll)

The new parameter Ll represents the limit of the server's normal load and is set according to the hardware capability of each individual server. The weight b limits the tasks assigned to a server once its normal load is exceeded, and should be set to a value greater than 1 to express the effect of the hardware limit. Usually, the worse a server's hardware is within the cluster, the larger this weight should be, so that when all servers are overloaded, the server with the worst hardware does not end up carrying the highest load. b is therefore inversely proportional to the server's hardware limit, and can be set to:

    b = Llmax / Ll

where Llmax is the Ll value of the server with the best hardware configuration in the cluster. Once each server's load is determined, the central server controlling load distribution can correctly hand each request to the most idle server, avoiding the uneven distribution that other load-sharing policies can produce.
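A minimal sketch of this selection policy, assuming the load beyond Ll is weighted by b = Llmax / Ll as described above (the server figures are hypothetical):

```python
def adjusted_load(ns: int, nd: int, a: float, ll: float, ll_max: float) -> float:
    """Load with the hardware limit Ll applied; beyond Ll the excess is
    weighted by b = Llmax / Ll, so weaker servers are penalized more."""
    raw = ns + a * nd
    if raw <= ll:
        return raw
    b = ll_max / ll
    return ll + b * (raw - ll)

def most_idle(servers, ll_max):
    """Pick the server with the smallest adjusted load.
    servers: list of (name, ns, nd, a, ll) tuples."""
    return min(servers, key=lambda s: adjusted_load(s[1], s[2], s[3], s[4], ll_max))[0]

# Hypothetical cluster: srv1 is within its limit, srv2 has exceeded its Ll of 60
cluster = [("srv1", 30, 2, 10, 100), ("srv2", 10, 6, 10, 60)]
print(most_idle(cluster, ll_max=100))  # srv1
```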


3. Implementation methods and experimental results

Our server system consists of several FreeBSD machines connected by Fast Ethernet. Each back-end server runs a daemon that dynamically reports its own load state, while the central control gateway, also implemented on FreeBSD, polls these daemons to refresh each server's load and allocate work correctly.
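The wire format of the daemon's report is not specified in the paper; purely as a hypothetical sketch, the gateway might parse a one-line "hostname Ns Nd" report from each back-end daemon like this:

```python
def parse_load_report(line: str):
    """Parse a hypothetical one-line 'hostname Ns Nd' report
    sent by a back-end load daemon."""
    host, ns, nd = line.split()
    return host, int(ns), int(nd)

host, ns, nd = parse_load_report("srv1 20 3")
print(host, ns + 10 * nd)  # gateway applies the weight a = 10 from Section 2
```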

3.1 Support for load-balanced gateways

Under FreeBSD, the divert interface is provided to support network address translation. IP packets are sent to the divert interface by the IPFW filtering rules in the system kernel, so that the external daemon natd can receive the original packets, process them, and send them back to the kernel for normal IP forwarding [5].

Therefore, following FreeBSD's address-translation architecture, one can write a custom network-address-translation daemon that supports load balancing, turning a FreeBSD system into a load-balancing gateway. Because it is a software implementation, it can easily support non-standard protocols and apply optimized load-balancing strategies, offering great flexibility.
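As a rough sketch of the kernel-side setup this architecture relies on (the interface name fxp0 and the exact invocation are assumptions, not taken from the paper), the divert hook is typically wired up as follows:

```shell
# Start the address-translation daemon on the external interface
natd -interface fxp0

# Tell the kernel firewall to divert IP traffic on that interface
# to natd's divert socket for translation
ipfw add divert natd all from any to any via fxp0
```

A custom balancing daemon would read from the same divert socket in place of natd, rewriting the destination address of each new TCP connection to the chosen back-end server.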

3.2 Experiment and analysis

To test the usability of this implementation, we ran our experiments against the most common protocol, HTTP. To distinguish request types, three kinds of tests were designed to measure different aspects of performance:



Dynamic documents generated by CGI programs: used to test load balancing of the servers' processing power;


Small static documents: static documents of small size, used to test load balancing under frequent connections;


Large static documents: large documents used to test load balancing of disk and network I/O.


The test results take the number of requests per second completed by a single server as the baseline, and report the ratio of requests per second completed by multiple load-balanced servers to this baseline.



Figure 1: Load Balancing Performance



The first curve, A, shows dynamic document requests: as the number of servers increases, performance multiplies accordingly. The second curve, B, is for small static document requests: with three servers, the performance improvement is not obvious. The third curve, C, for large static document requests, shows almost no performance change. To find out why the load-balancing system fell short of the ideal, we examined the utilization of server resources:

Table 1: Utilization of server resources

    Processing type    Load balancer gateway    Server 1    Server 2    Server 3
    A                  53%                      97%         85%         98%
    B                  76%                      43%         39%         41%
    C                  94%                      32%         31%         35%


As the table shows, when handling dynamic documents (type A), all three servers run at full capacity and the load is evenly distributed, which is the ideal state. When handling static document types B and C, the load is still evenly distributed across the three servers, but none of them runs at full speed. Especially with large documents, the natd process on the load-balancing gateway occupies most of the processing resources: since all network traffic passes through it for translation, its load grows heavy when network traffic and concurrent connections become large. With different numbers of back-end servers, the actual network bandwidth flowing through the load-balancing gateway was:

Table 2: Bandwidth of the server cluster when serving large documents

    Number of servers       1           2           3
    Network speed (KB/s)    10042.14    11015.10    11442.67




It can be seen that the bandwidth limit is around 10 MB/s; clearly, this is the bandwidth ceiling of the load-balancing process used in this test. In fact, the program uses a linked list to maintain the network-address-translation state, which greatly restricts its network performance; by improving the hardware and the algorithm, its performance can be raised further.
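The linked-list bottleneck noted above can be removed by keying the translation state on the connection 4-tuple in a hash table. A schematic sketch (field and function names are illustrative, not from the actual natd source):

```python
# Schematic NAT state table keyed by the connection 4-tuple;
# a dict (hash table) replaces the linear list scan criticized above.
nat_table: dict = {}

def register(src_ip: str, src_port: int, dst_ip: str, dst_port: int, backend: str) -> None:
    """Remember which back-end server handles this TCP connection."""
    nat_table[(src_ip, src_port, dst_ip, dst_port)] = backend

def lookup(src_ip: str, src_port: int, dst_ip: str, dst_port: int):
    """O(1) average-case lookup instead of walking a list per packet."""
    return nat_table.get((src_ip, src_port, dst_ip, dst_port))

register("10.0.0.5", 4321, "192.0.2.1", 80, "srv2")
print(lookup("10.0.0.5", 4321, "192.0.2.1", 80))  # srv2
```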


4. Discussion

The experiments above show that a load balancer based on network address translation can effectively distribute server-side CPU and disk I/O load. However, the performance of the load balancer itself is limited by its own network I/O, giving it a certain bandwidth ceiling under given hardware conditions; this ceiling can be raised by improving the algorithm of the load-balancing program and the hardware it runs on. It can also be seen that different service types consume different server resources. The load-measurement policy we use evaluates them with a single load value, which is appropriate under most conditions, but a better approach would be to monitor the load on each resource separately (CPU, disk I/O, network I/O, and so on) and have the central controller select the most suitable server for each client request. Our future work will start from these two aspects to perfect this load-balancing controller.


References:


[1] E. Katz, M. Butler, and R. McGrath. A Scalable HTTP Server: The NCSA Prototype. Computer Networks and ISDN Systems, 1994, pp. 155-164.

[2] Ralf S. Engelschall. Load Balancing Your Web Site. Web Techniques Magazine (http://www.WebTechniques.com), May 1998, Vol. 3, Iss. 5.

[3] Cisco. LocalDirector Documents. http://www.cisco.com, 1997.

[4] H. Zhu, T. Yang, Q. Zheng, D. Watson, O. H. Ibarra, and T. Smith. Adaptive Load Sharing for Clustered Digital Library Servers. Technical Report, CS, UCSB, 1998.

[5] FreeBSD Core Team. natd and divert manual pages. http://www.freebsd.org, 1995.



Implementing a Load-Balancing Gateway Using NAT


Wang, Bo

Nongye Road, Zhengzhou, 450002, P.R.China

wb@email.online.ha.cn


Abstract: This paper investigates load-balancing techniques and strategies, and implements a NAT-based load-balancing gateway for our Internet servers. Internet servers suffer high CPU and I/O load under simultaneous access requests; a symmetrical server cluster can distribute the load to solve this problem. To balance the load optimally, the gateway distributes requests according to each server's CPU and I/O status. The gateway must monitor every server's load and apply the best scheme to deliver each request, so that it can provide high performance for Internet services.

Keywords: load balancing, NAT, FreeBSD
