Optimize LAN by controlling network traffic

Source: Internet
Author: User


Network traffic control is an important part of LAN maintenance. It mainly involves the following methods:

1. Segment the network. Dividing the LAN into separate network segments by subnet mask reduces broadcast traffic while still allowing fast communication between neighboring sites.

2. Establish VLANs. The first advantage of a VLAN is that it effectively curbs broadcast and multicast traffic within the organization and lets you manage bandwidth and performance across the campus. If these working groups are not restricted in scope, every site must broadcast a query for the destination MAC address before it can send a packet. In addition, many application-layer programs broadcast data packets that are actually needed only by a small group of users. Without VLANs, such broadcast packets quickly consume a large share of the network's resources, starving normal traffic of bandwidth and seriously degrading network efficiency and performance. VLANs are an effective technology for containing broadcasts; adopting them reduces the impact on end-user stations, network servers, and the portion of the backplane used to process mission-critical data. The second advantage of VLANs is that they greatly reduce the administrative work caused by network changes: administrators spend less effort adding users to the network or moving users to new physical locations. This matters most when users need the network for multiple purposes, especially in environments with multiple network servers or multiple network operating systems.

3. Use server load balancing, described in detail below.
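As a small illustration of method 1, Python's standard ipaddress module can show how lengthening the subnet mask splits one network into smaller segments, each with its own smaller broadcast domain (the addresses here are arbitrary examples):

```python
import ipaddress

# An arbitrary example LAN: 192.168.0.0/24 is a single broadcast domain.
lan = ipaddress.ip_network("192.168.0.0/24")

# Lengthening the mask by two bits splits it into four /26 segments,
# each with its own, smaller broadcast domain.
for seg in lan.subnets(prefixlen_diff=2):
    # num_addresses includes the network and broadcast addresses.
    print(seg, "-", seg.num_addresses - 2, "usable hosts")
# 192.168.0.0/26 - 62 usable hosts
# 192.168.0.64/26 - 62 usable hosts
# 192.168.0.128/26 - 62 usable hosts
# 192.168.0.192/26 - 62 usable hosts
```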
Server load balancing: a full introduction (I). The size of the Internet doubles every hundred days. Customers expect uninterrupted availability, 24 hours a day and 7 days a week, with fast response times; they do not want to keep seeing "Server Too Busy" on a site or suffer frequent system faults. As business volume grows, traffic and data volume increase rapidly, and the required processing and computing power grows accordingly, until a single device can no longer handle the load. Discarding the existing equipment for a wholesale hardware upgrade wastes existing resources, and the next jump in business volume forces yet another expensive upgrade; even a device with superior performance may not satisfy the current demand. Load balancing arose to solve this problem. Built on the existing network structure, it provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen data-processing capability, and improve network flexibility and availability. Server load balancing has two meanings: first, a large volume of concurrent accesses or data traffic is distributed across multiple node devices for separate processing, reducing the time users wait for a response; second, a single heavy operation is split across multiple node devices for parallel processing, and after each node finishes, the partial results are summarized and returned to the user, greatly improving the system's processing capability.
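The second meaning, splitting one heavy job across nodes and summarizing the results, is essentially scatter-gather. A toy sketch, with local threads standing in for node devices and a sum of squares standing in for the heavy operation:

```python
from concurrent.futures import ThreadPoolExecutor

def node_work(chunk):
    # Stand-in for the work one node device performs on its share of the job.
    return sum(x * x for x in chunk)

def scatter_gather(data, n_nodes=4):
    """Split the job, process the parts in parallel, then summarize the results."""
    size = (len(data) + n_nodes - 1) // n_nodes
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        partials = pool.map(node_work, chunks)
    return sum(partials)  # the summarized result returned to the user

print(scatter_gather(list(range(1000))))  # same answer as a single node would give
```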
The load-balancing technology introduced in this article mainly refers to distributing traffic load among all the servers and applications in a load-balancing cluster. Today, most load-balancing technology is used to improve the availability and scalability of Internet server programs on web servers, FTP servers, and other mission-critical servers.

Classification of load-balancing technologies. Many different load-balancing technologies exist to meet different application requirements. They can be classified by the device that performs the balancing, by the network layer at which they operate (with reference to the OSI model), and by the geographical structure of the application.

Software vs. hardware load balancing. A software solution installs one or more additional software modules on the operating system of one or more servers, such as DNS Load Balance or Check Point FireWall-1 ConnectControl. It adapts to its specific environment, is simple to configure, flexible to use, and cheap, and can satisfy ordinary load-balancing needs. Software solutions also have drawbacks: the additional software consumes resources on every server (the more powerful the module, the more it consumes), so under a very large number of connection requests the software itself becomes decisive for the server's success or failure; its scalability is limited and constrained by the operating system; and operating-system bugs often introduce security problems. A hardware solution installs a dedicated load-balancing device directly between the servers and the external network; such a device is usually called a load balancer.
Because a dedicated device performs a specialized task and is independent of the operating system, its overall performance is much higher; combined with diverse balancing policies and intelligent traffic management, it can achieve the best load balancing. Load balancers come in several forms. Besides standalone devices, some are integrated into switching equipment and placed between the servers and the Internet connection; in other cases the function is integrated into a PC with two network adapters, one connected to the Internet and the other to the internal network of the back-end server group. In general, hardware load balancing is superior to software in both functionality and performance, but it is expensive.

Local vs. global load balancing. By the geographical structure of its application, load balancing divides into local load balancing and global load balancing. Local load balancing balances the load of a local server cluster; global load balancing balances server clusters located in different geographical regions with different network structures. Local load balancing effectively solves the problem of excessive data traffic and heavy network load: it makes full use of existing equipment instead of requiring expensive high-performance servers, and it avoids the traffic loss caused by a server single point of failure. With flexible and diverse balancing policies, it distributes traffic rationally among the servers in the group so they share the burden. Expanding or upgrading is also simple: just add a new server to the group, with no change to the existing network structure and no interruption of existing services. Global load balancing is mainly used by websites that have their own servers in multiple regions.
The goal is to let users worldwide reach, through a single IP address or domain name, the server closest to them and so obtain the fastest access. It can also be used by large companies with widely scattered subsidiaries to allocate resources uniformly and rationally over the corporate intranet. Global load balancing has the following characteristics: 1. It is independent of geographical location and can serve remote users completely transparently. 2. Besides guarding against single points of failure in servers and data centers, it also avoids the single point of failure of an ISP leased line. 3. It relieves network congestion, improves server response time, and serves users from nearby sites, yielding better access quality.

Load balancing at different network layers. Heavy network load creates different bottlenecks, and starting from different network layers we can apply the corresponding load-balancing technology to solve them. As bandwidth and data traffic grow, the data interfaces at the core of the network become a bottleneck: a single line can no longer meet demand, and upgrading the line is too expensive or even impractical. In this case, trunking can be considered. Link aggregation (Layer 2 load balancing) combines multiple physical links into a single aggregated logical link; network traffic is shared by all the physical links in the aggregate, logically increasing the link's capacity to meet the demand for more bandwidth.

Modern load-balancing technology usually operates at Layer 4 or Layer 7 of the network. A Layer 4 load balancer maps one valid IP address registered on the Internet to the IP addresses of multiple internal servers.
For each TCP connection request it dynamically selects one of the internal IP addresses, achieving load balancing. This technique is widely used in Layer 4 switches. A packet whose destination address is the server group's VIP (Virtual IP address) arrives at the switch; based on the source and destination IP addresses, the TCP or UDP port numbers, and a configured balancing policy, the switch maps between server IP addresses and the VIP and selects the best server in the group to handle the connection request.

A Layer 7 load balancer controls traffic based on the content of application-layer services, providing a high-level form of access control that is well suited to HTTP server clusters. It performs its balancing task by inspecting the HTTP header and acting on the information found there. Layer 7 load balancing has the following advantages: by inspecting the HTTP header it can detect error responses (such as the HTTP 500 series) and transparently redirect the connection request to another server, masking application-layer faults; it can direct traffic to the server holding the corresponding content based on the type of data flowing through (for example, recognizing that a packet carries an image, a compressed file, or a multimedia format), improving system performance; and based on the type of request, static documents such as plain text and images versus dynamic documents such as ASP or CGI, it can direct each request to the appropriate server, improving both performance and security. Layer 7 load balancing is limited by the protocols it supports (generally only HTTP), which restricts its range of application, and inspecting HTTP headers consumes a large amount of system resources, which inevitably affects performance.
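The Layer 7 idea, routing on information in the request rather than just the address, can be sketched in a few lines. The server pools and extension rules below are hypothetical, standing in for a real device's configuration:

```python
# Hypothetical server pools; a real Layer 7 device would hold far richer rules.
STATIC_POOL = ["img-1.example.internal", "img-2.example.internal"]
DYNAMIC_POOL = ["app-1.example.internal", "app-2.example.internal"]
STATIC_EXTENSIONS = {".html", ".txt", ".jpg", ".png", ".gif", ".zip"}

def route_by_content(request_path: str, counter: dict) -> str:
    """Pick a pool by inspecting the requested path (Layer 7 information),
    then round-robin within the chosen pool."""
    ext = "." + request_path.rsplit(".", 1)[-1] if "." in request_path else ""
    pool = STATIC_POOL if ext in STATIC_EXTENSIONS else DYNAMIC_POOL
    key = id(pool)
    counter[key] = counter.get(key, 0) + 1
    return pool[counter[key] % len(pool)]

counter = {}
print(route_by_content("/images/logo.png", counter))    # lands in the static pool
print(route_by_content("/cgi-bin/search.cgi", counter)) # .cgi -> dynamic pool
```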
Moreover, under a large number of connection requests, the load-balancing device itself can easily become the bottleneck of overall network performance.

Load-balancing policies. In practice we may not want simply to distribute client requests evenly among the internal servers regardless of whether a server is down. We may want a Pentium III server to accept more requests than a Pentium II server, we may want a server currently handling fewer requests to be assigned more, and we want a failed server to receive no further requests until it recovers. Choosing an appropriate policy lets multiple devices complete the task together, eliminating or avoiding bottlenecks caused by uneven load distribution and the long response times of congested traffic. Load balancing at Layers 2, 3, 4, and 7 of the OSI reference model each has corresponding policies. The quality of a policy and the difficulty of implementing it depend on two key factors: 1. the load-balancing algorithm; 2. the method and ability to detect the state of the network system.

Given the different types of service requests, the different processing capacities of the servers, and the uneven load caused by random selection, a suitable balancing algorithm is needed, one that correctly reflects each server's processing capacity and network state, so that the load is allocated rationally among the internal servers:

1. Round robin: each request from the network is distributed to the internal servers in turn, from 1 to N and then starting over. This algorithm suits a server group in which all servers have identical hardware and software configurations and the average service requests are relatively balanced.
2. Weighted round robin: each server is assigned a weight according to its processing capacity and receives service requests in proportion to that weight. For example, if server A's weight is 1, server B's is 3, and server C's is 6, then A, B, and C receive 10%, 30%, and 60% of the requests respectively. This algorithm keeps high-performance servers well utilized while preventing low-performance servers from being overloaded.

3. Random: requests from the network are allocated randomly among the internal servers.

4. Weighted random: similar to weighted round robin, but requests are shared out by a random process.

5. Response time: the load-balancing device sends a probe (such as a ping) to each internal server and assigns the client's service request to the server with the shortest response time to the probe. This algorithm reflects the server's current running state fairly well, but the "fastest" response time is measured between the load balancer and the server, not between the client and the server.

6. Least connections: the time each client request occupies a server can vary widely; with a simple round-robin or random algorithm, the number of connection processes on each server can differ greatly, so true balance is not achieved. The least-connections algorithm records the number of connections each server is currently handling.
When a new connection request arrives, it is assigned to the server with the fewest current connections, which matches the real situation more closely and balances the load better. This algorithm suits long-lived request services such as FTP.

7. Processing capacity: this algorithm assigns each service request to the internal server with the lightest processing load, judged by the server's CPU model, number of CPUs, memory size, and current number of connections. Because it considers both the internal servers' capacity and the current network state, it is more accurate, and it is especially suitable for Layer 7 (application layer) load balancing.

8. DNS response balancing (Flash DNS): on the Internet, whether the request is HTTP, FTP, or another service, the client usually finds the server's exact IP address through domain-name resolution. Under this algorithm, load-balancing devices in different geographical locations all receive the same client's resolution request; each resolves the domain name to the IP address of its own local server (that is, the server in the same geographical location as that device) and returns it. The client continues with the first resolved IP address it receives and ignores the responses from the other addresses. This policy is suitable for global load balancing; for local load balancing it is meaningless.
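Two of the algorithms above, weighted round robin (algorithm 2) and least connections (algorithm 6), can be sketched as follows. The weights are the example from the text (A:1, B:3, C:6); the connection counts are made up for illustration:

```python
import itertools

# Weighted round robin: expand each server name by its weight, then cycle.
weights = {"A": 1, "B": 3, "C": 6}
sequence = [name for name, w in weights.items() for _ in range(w)]
wrr = itertools.cycle(sequence)

share = {"A": 0, "B": 0, "C": 0}
for _ in range(100):          # distribute 100 requests
    share[next(wrr)] += 1
print(share)                  # {'A': 10, 'B': 30, 'C': 60}, i.e. 10%/30%/60%

# Least connections: pick the server currently handling the fewest connections.
active = {"A": 12, "B": 7, "C": 7}   # made-up current connection counts
def least_connections(active):
    return min(active, key=active.get)
print(least_connections(active))     # "B" (tied with C; min() keeps the first)
```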
Although these algorithms can distribute data traffic across the servers fairly well, a policy that cannot detect the state of the network system is incomplete: if a server fails, or the network between the load-balancing device and a server fails, and the device keeps directing traffic to that server, a large number of service requests will inevitably be lost and the requirement of uninterrupted availability cannot be met. A good load-balancing policy should therefore detect network faults, server system faults, and application-service faults:

1. Ping detection: pinging the server probes server and network conditions. It is simple and fast, but it can only roughly determine whether the network and the server's operating system are up; it is powerless to detect the application services on the server.

2. TCP open detection: each service opens a TCP port on the server (for example, Telnet on port 23 or HTTP on port 80); probing whether the port accepts connections determines whether the service is up.

3. HTTP URL detection: for example, a request for the file main.html is sent to the HTTP server; if an error message is returned, the server is considered faulty.

Beyond the two factors discussed above, other considerations also affect the merits of a load-balancing policy. In some cases we need all requests from the same client to be assigned to the same server; for example, when the server keeps the client's registration, shopping, and other session data in a local database, it is vital that the client's subsequent requests be handled by the same server. There are two ways to achieve this.
One is to assign multiple requests from the same client to the same server based on IP address, with the mapping between client IP addresses and servers stored on the load-balancing device; the other is to use a unique identifier in a client browser cookie to direct multiple requests to the same server, which suits clients that reach the Internet through proxy servers.

There is also an out-of-path return mode. A client's connection request is sent to the central load-balancing device, which directs it to a server; the server's response, however, is not routed back through the device but bypasses the traffic distributor and returns directly to the client. The central device is then responsible only for receiving and forwarding requests, so its network load is much lower and the client gets a faster response. This mode is generally used for HTTP server clusters: a virtual network adapter is installed on each server with its IP address set to the server group's VIP, so that the three-way handshake succeeds when the server responds to the client directly.

Factors in deploying server load balancing. Load balancing should be considered at the initial stage of building a site, but sometimes explosive traffic growth exceeds the decision makers' expectations and it becomes a problem that must be faced. As with many other solutions, when we introduce and implement load balancing, we first determine the current and future application requirements and then weigh costs against results.
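The first persistence method (pinning a client IP to a server) and the TCP open detection described in the health-check list above can both be sketched briefly; the server addresses and ports here are hypothetical:

```python
import hashlib
import socket

SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # hypothetical internal servers

def pick_server(client_ip: str) -> str:
    """Source-IP persistence: hash the client address so the same client
    always maps to the same back-end server (no per-client table needed)."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

def tcp_open_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP open detection: the service is considered up if the port accepts
    a connection (e.g. port 80 for HTTP, port 23 for Telnet)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The same client IP always lands on the same server:
assert pick_server("203.0.113.9") == pick_server("203.0.113.9")
```

A hash keeps no per-client state, but note (as the text says) that it works poorly when many clients share one proxy IP, which is why the cookie method exists.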
Based on the current and future application requirements and an analysis of network bottlenecks, we need to determine which load-balancing technology to use, which balancing strategy to adopt, what the requirements are for availability, compatibility, and security, and whether to implement the solution with higher-performance, higher-function hardware such as Layer 4 switches and dedicated load balancers or with another kind of balancing technology. The following are issues to consider when introducing a balancing solution:

1. Performance: performance is the hardest issue to pin down when introducing a balancing solution. One metric is the number of packets per second passing through the network; another is the maximum number of concurrent connections the server group can handle under the solution. A system that can hold millions of concurrent connections but forwards only 2 packets per second is obviously useless. Performance depends closely on the processing capacity of the load-balancing device and the policy adopted. Two points deserve attention: first, the solution's overall performance for the server cluster, which is the key to responding to client connection requests; second, the performance of the load-balancing device itself, to keep it from becoming a service bottleneck under a large number of connection requests. Sometimes a hybrid load-balancing policy, such as combining DNS load balancing with NAT load balancing, can also improve the server group's overall performance.
In addition, for websites with a large number of static file requests, caching technology can be considered to cut costs and improve response performance; for sites transmitting large volumes of SSL/XML content, SSL/XML acceleration should be considered.

2. Scalability: IT changes with each passing day; last year's newest product may now be the lowest-performing device on the network, and rapid business growth means last year's network needs a new round of expansion. A suitable balancing solution should meet these needs: balance load across different operating systems and hardware platforms; balance different services such as HTTP, mail, news, proxy, database, firewall, and cache; and add or remove resources dynamically in a way that is completely transparent to clients.

3. Flexibility: the balancing solution should flexibly accommodate different and changing application requirements; when different server clusters have different needs, a rich set of balancing policies should be available.

4. Reliability: on sites with high service-quality requirements, the load-balancing solution should provide full fault tolerance and high availability for the server group, and when the load-balancing device itself fails, a good redundancy scheme must take over to preserve reliability. With redundancy, the multiple load-balancing devices in a redundancy unit must have an effective way to monitor one another, protecting the system as far as possible from losses caused by major faults.
5. Manageability: whether the solution is implemented in software or hardware, we want it to be manageable in a flexible, intuitive, and secure way that eases installation, configuration, maintenance, and monitoring, improves working efficiency, and avoids errors. Hardware load-balancing devices currently offer three management methods: 1. a command-line interface (CLI), reached through a terminal connected to the device's serial port (often used for initial configuration) or through remote Telnet login; 2. a graphical user interface (GUI), based on ordinary web pages or a Java applet for secure management, for which the management terminal generally needs a particular browser version; 3. SNMP (Simple Network Management Protocol) support, allowing SNMP-compliant devices to be managed through third-party network-management software.

