Summary of the structure and Scheduling Algorithms of Linux virtual servers

Linux Virtual Server (LVS) is built on a cluster of real servers. Clients do not see the individual servers that provide the service; they see only a single virtual server fronted by a load balancer. The real servers are connected by a high-speed LAN or a geographically dispersed WAN. The load balancer sits in front of the real servers and schedules user requests to them, so that all services appear to be delivered by one virtual server. LVS offers good scalability, reliability, and availability: nodes can be added or removed transparently, the real servers are monitored, and the system is reconfigured if any node fails.
The structure of a Linux Virtual Server is shown in Figure 1.


Figure 1 Linux Virtual Server structure
   Scheduling Algorithm
LVS provides four scheduling algorithms: round-robin scheduling, weighted round-robin scheduling, least-connection scheduling, and weighted least-connection scheduling.
Round Robin Scheduling
Round-robin scheduling treats all servers as equal, ignoring each server's number of connections and response time. Connections are simply distributed to the servers in rotation.
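Round-robin dispatch can be sketched in a few lines of Python (the server names are hypothetical):

```python
from itertools import cycle

# Hypothetical real servers behind the load balancer.
servers = ["server-a", "server-b", "server-c"]

# cycle() yields the servers in strict rotation, ignoring
# per-server connection counts and response times.
scheduler = cycle(servers)

# Six incoming connections land on each server exactly twice.
assignments = [next(scheduler) for _ in range(6)]
```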
Weighted Round Robin Scheduling
Each machine is assigned a weight according to its processing capacity, and requests are distributed to the machines in rotation in proportion to their weights. This algorithm has lower overhead than the dynamic scheduling algorithms, but when the load changes frequently it can lead to load imbalance, and long-lived requests may pile up on the same server.
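As a sketch (with made-up weights), weighted round-robin can be written as a generator that hands each server a share of requests proportional to its weight in every cycle:

```python
def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs.
    In every cycle, each server receives as many consecutive
    requests as its weight (a simple, non-interleaved variant)."""
    while True:
        for name, weight in servers:
            for _ in range(weight):
                yield name

# Hypothetical cluster: "fast" can handle twice the load of "slow".
scheduler = weighted_round_robin([("fast", 2), ("slow", 1)])
assignments = [next(scheduler) for _ in range(6)]
# "fast" receives 4 of the 6 connections, "slow" receives 2
```

Production schedulers typically interleave the picks rather than grouping them back to back, but the per-cycle proportions are the same.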
Least Connection Scheduling
Least-connection scheduling sends a user request to the machine with the fewest active connections. It is a dynamic scheduling method: if the servers in the cluster have similar processing capacity, large load changes do not cause imbalance, because long requests are not all sent to the same machine. When the servers differ significantly in processing capacity, however, least-connection scheduling performs poorly.
Weighted Least Connection Scheduling
Each server is assigned a weight according to its performance; the larger the weight, the higher the chance of receiving a connection. The calculation is as follows. Assume there are n servers, server i has weight Wi (i = 1, ..., n) and Ci active connections, and ALL_CONNECTIONS is the sum of all Ci. The next connection is sent to the server j that satisfies:

(Cj / ALL_CONNECTIONS) / Wj = min { (Ci / ALL_CONNECTIONS) / Wi } (i = 1, ..., n)

Since ALL_CONNECTIONS is a constant, the formula can be simplified to:

Cj / Wj = min { Ci / Wi } (i = 1, ..., n)
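The selection rule Cj/Wj = min{Ci/Wi} translates directly into code (the server state below is illustrative):

```python
def pick_server(servers):
    """servers: dict mapping name -> (active_connections, weight).
    Return the server j minimizing Cj / Wj, i.e. the weighted
    least-connection choice."""
    return min(servers, key=lambda s: servers[s][0] / servers[s][1])

# Illustrative state: (active connections, weight)
state = {
    "a": (10, 1),   # 10/1 = 10.0
    "b": (30, 4),   # 30/4 = 7.5  <- minimum ratio
    "c": (20, 2),   # 20/2 = 10.0
}
# the next connection goes to "b"
```

An implementation can avoid the division entirely by comparing Ci*Wj against Cj*Wi pairwise, which is how kernel-level schedulers typically do it.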
   Load Balancing Method
LVS provides three IP-level Load Balancing Methods: Virtual Server via NAT, Virtual Server via IP Tunneling, and Virtual Server via Direct Routing.
The Virtual Server via NAT method rewrites packets in both directions; Virtual Server via IP Tunneling rewrites packets in one direction only; Virtual Server via Direct Routing uses a packet-forwarding policy. These methods will be described in detail in future articles.
   4. MOSIX
MOSIX adds cluster computing to the Linux kernel. It supports the BSD/OS and Linux operating systems and allows any number of x86/Pentium-based servers and workstations to work together. In a MOSIX cluster environment you do not need to modify applications, link them to special libraries, or assign them to particular nodes for execution: MOSIX transparently hands work off to other nodes automatically.
The core of MOSIX is a set of adaptive resource management algorithms. They monitor the load on each node and respond to changes, improving the overall performance of all processes. MOSIX uses preemptive process migration to assign and reassign processes among the nodes so as to make full use of all resources. The adaptive resource management algorithms include an adaptive load-balancing algorithm, a memory ushering algorithm, and a file I/O optimization algorithm. These algorithms respond to changes in cluster resource usage, such as an unbalanced load distribution across nodes or excessive disk swapping caused by insufficient memory. In such cases, MOSIX migrates processes from one node to another to balance the load, or moves a process to a node with sufficient free memory.
Because MOSIX is implemented inside the Linux kernel, its operation is completely transparent to applications. You can use it to define different cluster types, and the machines in a cluster can be identical or different.
Unlike cluster systems such as TurboCluster, Linux Virtual Server, and LSF, a MOSIX cluster has no dedicated master node: every node acts as both a master node and a service node. For processes created locally, the node is a master; for processes migrated in from remote nodes, it is a service node. This means nodes can be added to or removed from the cluster at any time without adversely affecting running processes. Another feature of MOSIX is that its monitoring algorithms track the speed, load, available memory, IPC, and I/O rate of each node; the system uses this information to decide which node a process should be sent to. A process starts executing on the node where it was created, and when that node's load exceeds a threshold, the process is transparently migrated to another node.
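The threshold-triggered migration described above can be illustrated with a toy decision function (the threshold and load values are invented; MOSIX's real algorithms weigh many more factors, such as memory pressure and I/O rates):

```python
LOAD_THRESHOLD = 0.8  # invented normalized-load threshold

def place_process(home_node, loads):
    """loads: dict mapping node name -> normalized load (0.0-1.0).
    A process keeps running on the node that created it until that
    node's load exceeds the threshold; then it migrates to the
    least-loaded node in the cluster."""
    if loads[home_node] <= LOAD_THRESHOLD:
        return home_node
    return min(loads, key=loads.get)

loads = {"node1": 0.95, "node2": 0.40, "node3": 0.60}
# a process created on node1 migrates to node2;
# a process created on node3 stays put
```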
The MOSIX file system uses direct file system access, which lets a process that has migrated to another node perform its I/O operations locally. This reduces communication between I/O-bound processes and the nodes that created them, so these processes can migrate freely among the cluster's nodes. The MOSIX file system lets every node transparently access all directories and files on the other nodes as if they were in the local file system.
A low-end MOSIX configuration can consist of several PCs connected by Ethernet; a larger configuration can consist of multiple workstations and servers connected by Fast Ethernet; a high-end configuration can include multiple SMP or non-SMP workstations and servers connected by Gigabit Ethernet.
   5. EDDIE
Eddie's main purpose is to provide tools for mission-critical websites so that they can deliver continuous, high-quality service. Eddie implements a truly distributed web server architecture that supports web servers spread across different physical locations. Its structure is shown in Figure 2.
The distributed server shown in Figure 2 contains two clusters: site 1 and site 2. Each cluster contains one domain name server and several real servers running web server software. When a user types a domain name, the local DNS first tries to resolve it to an IP address. If the local DNS cannot resolve the name, the query is sent to the authoritative DNS, which returns the IP address of the server to be accessed; the user can then access the content on that server.


Figure 2 Structure of the Eddie Cluster
Eddie consists mainly of two software packages: the HTTP gateway and the enhanced DNS server. As shown in Figure 3, a new front-end server is added to each site; it runs the HTTP gateway, which accepts external requests and schedules them to an appropriate back-end machine for execution. The DNS server runs the enhanced DNS server software, which performs load balancing across multiple geographically dispersed sites.

Figure 3
Eddie has the following distinctive features:
Improved web server throughput. By providing powerful load-balancing capabilities, Eddie lets users take full advantage of all the resources in a distributed web server. Load balancing operates in two areas. First, the back-end servers send load information, such as CPU load, memory, disk latency, run-queue length, and page faults, to the front-end server, which dispatches external requests to an appropriate server based on that load. Second, load balancing is also built into the enhanced DNS software: the combined load information of the front-end and back-end servers is sent to the authoritative DNS servers, and the local DNS can then decide which authoritative DNS should handle a name resolution based on the load of each one. In this way all resources in the distributed server environment are fully utilized, increasing web server throughput.
High quality of service. First, Eddie improves web server throughput with static and dynamic load-balancing policies, reducing the response time of user requests. Second, when a user sends a request, the system checks whether a connection from that user already exists; if so, the request is sent to the server that handled the previous request, preserving the continuity of the web session. If there are not enough resources, the request is added to a waiting queue and the user is told that it will be processed shortly.
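The connection-continuity check can be sketched as a session-affinity table (the names and the load metric are hypothetical; Eddie's actual gateway logic is more involved):

```python
sessions = {}  # client address -> back-end that served it last

def dispatch(client, loads):
    """loads: dict mapping back-end name -> current load.
    Returning clients go to their previous back-end; new clients
    go to the least-loaded back-end, which is then remembered."""
    if client in sessions:
        return sessions[client]
    backend = min(loads, key=loads.get)
    sessions[client] = backend
    return backend

loads = {"be1": 0.7, "be2": 0.2}
first = dispatch("10.0.0.5", loads)   # new client: least-loaded "be2"
repeat = dispatch("10.0.0.5", loads)  # same client: "be2" again
```

The table makes the affinity sticky even if the load picture changes between requests, which is what preserves session continuity.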