LAN Optimization: A Detailed Introduction to Network Traffic Control

Source: Internet
Author: User

Network traffic control is a very important part of LAN maintenance. It mainly involves the following methods:


1. Use subnet masks to divide the network into separate segments. This reduces the scope of network broadcasts and enables fast communication between neighboring sites.
  
2. Create VLANs
The first advantage is that VLANs effectively curb broadcast and multicast traffic within the organization and help manage bandwidth and performance across a campus. If these working groups are not managed (or restricted) in scope, every site must broadcast a query for the destination MAC address before sending packets. At the same time, a great deal of application layer software broadcasts data packets that only a group of users actually needs. Without VLANs, such broadcast packets quickly consume a large share of resources across the entire network, leaving normal packets unable to obtain normal bandwidth and seriously degrading network efficiency and performance. VLANs are an effective technology for controlling broadcasts; adopting them reduces the impact on end-user sites, network servers, and the portion of the backbone used to carry mission-critical data.
  
The second advantage of VLANs is that they greatly reduce the administrative burden of network changes: administrators spend less effort adding users to the network or moving users between physical locations. This matters most when users need the network for multiple purposes, especially in environments with multiple network servers or multiple network operating systems.
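The subnet division described in point 1 can be sketched with Python's standard ipaddress module. The 192.168.0.0/16 network and the /24 split below are illustrative values, not taken from the text:

```python
import ipaddress

# Split an illustrative 192.168.0.0/16 network into /24 subnets,
# each of which becomes a separate broadcast domain.
network = ipaddress.ip_network("192.168.0.0/16")
subnets = list(network.subnets(new_prefix=24))

print(len(subnets))             # 256
first = subnets[0]
print(first)                    # 192.168.0.0/24
print(first.netmask)            # 255.255.255.0
print(first.broadcast_address)  # 192.168.0.255

# Hosts in the same /24 communicate directly; broadcasts stay inside it.
a = ipaddress.ip_address("192.168.0.10")
b = ipaddress.ip_address("192.168.1.10")
print(a in first, b in first)   # True False
```

Each /24 here confines broadcasts to at most 254 hosts instead of the whole /16, which is exactly the traffic reduction the text describes.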
3. Server Load Balancing Technology

Introduction to Server Load Balancing (Part 1)

The size of the Internet doubles every hundred days. Customers want around-the-clock (7x24) availability and fast system response times, not a site that repeatedly shows "Server Too Busy" and suffers frequent system faults.
  
As business volume grows, traffic and data volumes increase rapidly, and the processing power and computing strength demanded of each core part of the network grow with them, until a single device can no longer carry the load. Discarding existing equipment for a large-scale hardware upgrade wastes existing resources, and the next increase in business volume then forces another costly upgrade; even a device with superior performance may not meet current business needs. Load balancing mechanisms arose to solve this problem.
  
Load balancing is built on the existing network structure. It provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, enhance network data processing capability, and improve network flexibility and availability.
  
Load balancing has two meanings. First, a large volume of concurrent access or data traffic is distributed across multiple node devices for separate processing, reducing the time users wait for a response. Second, a single heavy job is split across multiple node devices for parallel processing; after each node finishes, the partial results are summarized and returned to the user, greatly increasing system processing capability.
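The second meaning, splitting one heavy job across nodes and summarizing the results, can be illustrated with a minimal scatter-gather sketch. Local threads stand in for the node devices, and the chunking and summing task are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for the work one node device performs on its share."""
    return sum(chunk)

def scatter_gather(data, workers=4):
    """Split one heavy job, process the pieces in parallel, summarize."""
    # Scatter: one chunk per worker (threads stand in for nodes here).
    size = max(1, (len(data) + workers - 1) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(process_chunk, chunks))
    # Gather: summarize the partial results and return them to the user.
    return sum(partials)

print(scatter_gather(list(range(1, 101))))  # 5050
```

A real cluster would ship the chunks over the network to separate machines, but the scatter, parallel-process, gather shape is the same.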
  
The load balancing technology introduced in this article mainly refers to distributing traffic load among all the servers and applications in a load balancing cluster. Currently, most load balancing technology is used to improve the availability and scalability of Internet server programs on Web servers, FTP servers, and other mission-critical servers.

Classification of Load Balancing Technologies
  
Currently, many different load balancing technologies exist to meet different application requirements. They can be classified by the device that performs the balancing, by the network layer at which they operate (with reference to the OSI model), and by the geographical structure of the application.

Software/Hardware Load Balancing
  
A software load balancing solution installs one or more pieces of additional software on the operating system of one or more servers, such as DNS Load Balance or CheckPoint Firewall-1 ConnectControl. Its advantages are simple configuration, flexible use, and low cost for a given environment, and it can meet general load balancing needs.
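DNS based load balancing works by publishing several A records for one name and letting each client end up on a different server. A hedged sketch of the client-side effect follows; the hostname and address list are made up, and no real DNS query is performed:

```python
import random

# Hypothetical A records a DNS server might return for one site name;
# real DNS round robin rotates the order of this list per query.
A_RECORDS = {"www.example.com": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]}

def resolve(name, rng=random):
    """Pick one address for this client, spreading load across records."""
    return rng.choice(A_RECORDS[name])

# Over many queries, clients end up spread across all three servers.
picks = {resolve("www.example.com") for _ in range(200)}
print(sorted(picks))
```

Note that plain DNS balancing has no health awareness: a record keeps being handed out even if its server is down, which is one reason the text calls software solutions suitable only for general needs.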
  
Software solutions also have many disadvantages. The additional software consumes a certain amount of resources on every server, and the more powerful the module, the more it consumes, so under a very large number of connection requests the software itself becomes the deciding factor in the server's success or failure. Software scalability is limited and constrained by the operating system, and operating system bugs often introduce security problems.
  
A hardware load balancing solution installs a load balancing device directly between the servers and the external network; such a device is usually called a load balancer. Because a dedicated device performs a specialized task independently of the operating system, overall performance improves greatly, and combined with diverse load balancing policies and intelligent traffic management it can achieve optimal load balancing.
  
Load balancers come in a variety of forms. Apart from standalone load balancers, some are integrated into switching devices and placed between the servers and the Internet connection, and in some cases the function is integrated into a PC with two network adapters, one connected to the Internet and the other to the internal network of the back-end server group.
In general, hardware load balancing is superior to software in both functionality and performance, but it is expensive.


Local/Global Load Balancing

Based on the geographical structure of the application, load balancing is divided into local load balancing and global load balancing. Local load balancing balances the load of a local server cluster; global load balancing balances the load of server clusters placed in different geographical locations and on different network structures.
  
Local load balancing effectively solves the problems of excessive data traffic and heavy network load without the expense of purchasing servers with superior performance: it makes full use of existing equipment and avoids the loss of data traffic that a server single point of failure (SPOF) would cause. With flexible and diverse balancing policies, it allocates data traffic rationally among the servers in the group so that they share the burden. Even to expand and upgrade existing servers, you simply add a new server to the service group, without changing the existing network structure or stopping existing services.
  
Global load balancing is mainly used by websites that have their own servers in multiple regions, so that users worldwide can reach the server closest to them using a single IP address or domain name and thereby get the fastest access. It can also be used by large companies with subsidiaries at scattered sites to allocate resources uniformly and rationally through an intranet (the enterprise's internal network).
  
Global Load Balancing has the following features:

1. It achieves geographic independence, providing users with completely transparent service from remote locations.

2. In addition to preventing single-point failures of servers and data centers, it also avoids single-point failures caused by ISP leased-line faults.

3. It relieves network congestion, improves server response speed, and serves users from nearby locations for better access quality.

Load Balancing at the network level
  
To address the bottlenecks that heavy load creates at different points in the network, we can start from the corresponding network layer and adopt the appropriate load balancing technology.
  
As bandwidth and data traffic grow, the data interfaces at the core of the network face bottlenecks, and a single original line can no longer meet demand; moreover, line upgrades are too expensive and sometimes hard to implement at all. In this situation, we can consider trunking technology.
  
Link aggregation, or trunking (Layer 2 load balancing), uses multiple physical links as a single aggregated logical link; network traffic is shared among all the physical links in the aggregate. This logically increases the link's capacity so that it can meet growing bandwidth demand.
  
Modern load balancing technology usually operates at Layer 4 or Layer 7 of the network. Layer 4 load balancing maps one valid IP address registered on the Internet to the IP addresses of multiple internal servers and dynamically assigns one of those internal addresses to each TCP connection request. This technique is widely used in Layer 4 switches: packets whose destination address is the server group's VIP (Virtual IP) arrive at the switch, which maps the VIP to a real server IP address based on the source and destination IP addresses, the TCP or UDP port numbers, and a configured load balancing policy, selecting the best server in the group to handle the connection request.
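One common way a Layer 4 device maps a connection to a real server is to hash the flow's addressing information into the server pool, so that every packet of one connection reaches the same server. This is a simplified sketch; the server addresses and the SHA-256 hash are assumptions, and real devices also track per-connection state and server health:

```python
import hashlib

# Hypothetical pool of internal server IPs behind one VIP.
SERVERS = ["192.168.10.1", "192.168.10.2", "192.168.10.3", "192.168.10.4"]

def pick_server(src_ip, src_port, dst_ip, dst_port, proto="TCP"):
    """Hash the 5-tuple so one connection always maps to one server."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(SERVERS)
    return SERVERS[index]

# The same connection (same 5-tuple) always reaches the same server.
a = pick_server("203.0.113.7", 51000, "198.51.100.10", 80)
b = pick_server("203.0.113.7", 51000, "198.51.100.10", 80)
print(a == b)  # True
```

Hashing keeps the mapping stateless and fast, at the cost of redistributing some flows whenever the pool size changes.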
  
Layer 7 load balancing controls traffic at the level of application content and provides a high-level way to manage access traffic, making it well suited to clusters of HTTP servers. A Layer 7 balancer performs its load balancing tasks by inspecting the HTTP header and acting on the information it contains.
  
The advantages of Layer 7 load balancing are as follows:

By checking the HTTP header, it can detect error responses such as the HTTP 4xx and 5xx series and transparently redirect the connection request to another server, avoiding application layer faults.

Traffic can be directed to the server hosting the corresponding content based on the type of data flowing through (for example, determining whether a packet carries an image, a compressed file, or a multimedia format), improving system performance.

Based on the type of connection request, such as static document requests (plain text, images) versus dynamic document requests (ASP, CGI), requests can be directed to the corresponding server for processing, improving both system performance and security.
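The content based routing just described can be sketched by inspecting the requested path and choosing a server pool. The extension sets and pool names below are illustrative assumptions, not any particular product's behavior:

```python
from urllib.parse import urlparse

# Illustrative extension set and server pools (names are assumptions).
DYNAMIC_EXT = {".asp", ".cgi", ".php"}
POOLS = {"static": ["static-1", "static-2"], "dynamic": ["app-1", "app-2"]}

def classify(path):
    """Decide which pool should serve this document type."""
    dot = path.rfind(".")
    ext = path[dot:].lower() if dot != -1 else ""
    return "dynamic" if ext in DYNAMIC_EXT else "static"

def route(url):
    """Pick the target pool the way a Layer 7 balancer might."""
    return POOLS[classify(urlparse(url).path)]

print(classify("/images/logo.png"))              # static
print(classify("/order/submit.cgi"))             # dynamic
print(route("http://example.com/order/submit.cgi"))  # ['app-1', 'app-2']
```

Keeping static and dynamic pools separate lets each be sized and secured for its workload, which is the performance and security gain the text mentions.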
  
Layer 7 load balancing is limited by the protocols it supports (generally only HTTP), which restricts its range of application, and inspecting HTTP headers consumes substantial system resources, which inevitably affects performance. Under a large number of connection requests, the load balancing device itself can easily become the bottleneck of overall network performance.


Load Balancing Policy
  
In practice, we rarely want to allocate client requests evenly among internal servers regardless of whether a server is down. Instead, we want a Pentium III server to accept more service requests than a Pentium II server, a server currently handling fewer requests to be assigned more, and a failed server to stop receiving requests until it recovers.
Selecting an appropriate load balancing policy lets multiple devices complete the task together, eliminating or avoiding the bottlenecks of unbalanced load distribution and the long response times caused by congested data traffic. For the different load balancing modes at Layers 2, 3, 4, and 7 of the OSI reference model, there are corresponding load balancing policies.
  
Two key factors determine the merits of a load balancing policy and the difficulty of implementing it: 1. the load balancing algorithm; 2. the methods and capability for detecting network system conditions.
  
Given the different types of service requests, the different processing capabilities of servers, and the uneven load distribution that random selection causes, allocating load rationally among multiple internal servers requires a load balancing algorithm that correctly reflects each server's processing capability and network state:

1. Round robin: each request from the network is distributed in turn to an internal server, from server 1 through server N and then starting over. This algorithm suits groups in which all servers have identical hardware and software configurations and average service requests are relatively balanced.

2. Weighted round robin: assign each server a weight according to its processing capability, so that it accepts a corresponding share of the service requests. For example, if server A has weight 1, server B weight 3, and server C weight 6, then servers A, B, and C receive 10%, 30%, and 60% of service requests respectively. This algorithm allows servers with different processing capabilities to share the load appropriately.
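Both algorithms can be sketched in a few lines of Python. The server names are placeholders, and the weighted version reproduces the 1:3:6 example from the text:

```python
from itertools import cycle, islice

# 1. Plain round robin: requests go to servers 1..N in turn, then repeat.
servers = ["server-1", "server-2", "server-3"]
rr = cycle(servers)
print([next(rr) for _ in range(5)])
# ['server-1', 'server-2', 'server-3', 'server-1', 'server-2']

# 2. Weighted round robin with the text's weights A=1, B=3, C=6.
weights = {"A": 1, "B": 3, "C": 6}

def weighted_round_robin(weights):
    """Yield servers so each appears in proportion to its weight."""
    while True:
        for server, weight in weights.items():
            for _ in range(weight):
                yield server

first_cycle = list(islice(weighted_round_robin(weights), 10))
print(first_cycle.count("A"), first_cycle.count("B"), first_cycle.count("C"))
# 1 3 6  -> 10%, 30%, 60% of requests
```

This naive weighted version sends each server its share in a burst; production balancers usually interleave the picks (smooth weighted round robin) so no single server receives a run of consecutive requests.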
