An Introduction to Linux Clusters and LVS (shared from Marco's Linux)

A cluster is a collection of computers that together provide users with a single service; to the user it appears as though the service comes from one computer. Clusters fall into three main categories.

LB: Load-Balancing Cluster
A load-balancing cluster improves a service's capacity to respond. Suppose a single server can handle 100 concurrent requests; beyond that point, clients frequently fail to connect. There are two solutions. The first is to upgrade the hardware, which is clearly not a good answer: once business volume grows further, the upgraded server will be overloaded just the same. The second is to combine existing idle, low-end machines into a highly concurrent load-balancing cluster, in which multiple computers share the users' requests at the same time, so no single server carries a heavy load. This kind of cluster offers good scalability and reliability at low cost.

HA: High-Availability Cluster
A high-availability cluster provides uninterrupted service: the service must not become unavailable just because one or more servers go down. If one server fails, the service automatically switches to another machine, achieving high availability.

HP: High-Performance Cluster
High-performance clusters are used mainly in scenarios that require a large amount of CPU computation, such as weather forecasting, 3D movie special effects, and other computation-heavy applications.

************************************************************
The three main categories of clusters are described above. The following sections describe common implementations of each.

LB load-balancing clusters can be implemented in hardware or in software.
Hardware load balancers are expensive; F5 is a typical example.
Two common software implementations are:
LVS
HAProxy
Common HA high-availability cluster solutions include:
Heartbeat
Corosync + OpenAIS
RHCS
UltraMonkey
Keepalived

Common solutions for HP high-performance clusters include:
Beowulf
************************************************************
The above lists solutions for each cluster type. In practice, load-balancing clusters and high-availability clusters are usually used together. The following describes LVS, the software most commonly used in load-balancing clusters.
************************************************************
LVS (Linux Virtual Server)
LVS is free software originally developed by Wensong Zhang. It offers good scalability, reliability, and manageability; with Linux and LVS you can build a highly available, high-performance, low-cost server cluster.
LVS has a three-tier architecture: a front-end load balancer (LB, the Director Server), a middle tier of application servers (Real Servers), and back-end shared storage.

Three working models of LVS
NAT Model
As its name suggests, the NAT model is implemented through network address translation. A user request arrives at the front-end load balancer (Director Server), which, according to a predefined scheduling algorithm, rewrites the request's destination address to that of a back-end application server (Real Server). After the Real Server processes the request, its reply must also pass back through the load balancer, which rewrites the reply's source address back to the address the user originally requested and forwards it to the user, completing the whole load-balancing process (a minimal sketch follows the feature list below).
The NAT model has the following features:
All cluster nodes must be in the same IP network.
Only one public IP address is needed.
Port mapping is supported.
Back-end application servers can run on any platform or operating system.
Both inbound and outbound packets must pass through the load balancer, so under heavy load the load balancer becomes the bottleneck of the whole cluster.
Up to 8 nodes are supported.
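To make the NAT packet flow concrete, here is a minimal Python sketch of the two address rewrites the Director performs. The addresses, dictionary "packets", and helper names are illustrative assumptions, not part of LVS itself.

from itertools import cycle

VIP = "203.0.113.10"                                      # public address clients hit
REAL_SERVERS = cycle(["192.168.1.11", "192.168.1.12"])    # trivial round-robin picker

def nat_forward(packet: dict) -> dict:
    """Director: rewrite the request's destination address to a chosen Real Server."""
    rip = next(REAL_SERVERS)
    return {**packet, "dst": rip}

def real_server_reply(packet: dict) -> dict:
    """Real Server: process the request and address the reply to the client."""
    return {"src": packet["dst"], "dst": packet["src"], "payload": "response"}

def nat_return(reply: dict) -> dict:
    """Director: rewrite the reply's source back to the VIP before it leaves."""
    return {**reply, "src": VIP}

request = {"src": "198.51.100.7", "dst": VIP, "payload": "GET /"}
inbound = nat_forward(request)       # destination becomes a private RIP
reply = real_server_reply(inbound)   # the reply still has to travel back via the Director
outbound = nat_return(reply)         # source restored to the VIP
print(outbound)  # {'src': '203.0.113.10', 'dst': '198.51.100.7', 'payload': 'response'}

Note how both directions of traffic touch the Director, which is exactly why it becomes the bottleneck under heavy load.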
DR Model
The DR (direct routing) model achieves load balancing through routing at the link layer. Unlike the NAT model, the load balancer rewrites only the destination MAC address of the user's request and forwards it to a Real Server; the Real Server then responds to the user directly, which greatly reduces the load on the load balancer (see the sketch after the feature list below). The DR model is also the most widely used one.
The DR model has the following features:
All cluster nodes must be in the same physical network.
The RIP (Real Server IP) can be either a public or a private address.
The load balancer handles only inbound requests; replies go from the Real Servers directly to the clients.
Port mapping is not supported.
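A similarly hedged sketch of the DR model: only the destination MAC is rewritten on the way in, and the Real Server answers the client directly with the VIP as the source address, so the reply never passes through the Director. The MAC values and function names below are made up for illustration.

VIP = "203.0.113.10"
DIRECTOR_MAC = "02:00:00:00:00:01"
REAL_SERVER_MACS = ["02:00:00:00:00:11", "02:00:00:00:00:12"]  # illustrative only

def dr_forward(frame: dict, turn: int) -> dict:
    """Director: leave the IP header untouched, rewrite only the destination MAC."""
    rs_mac = REAL_SERVER_MACS[turn % len(REAL_SERVER_MACS)]
    return {**frame, "dst_mac": rs_mac}

def dr_reply(frame: dict) -> dict:
    """Real Server: the VIP is configured locally, so it answers the client directly."""
    return {"src_ip": VIP, "dst_ip": frame["src_ip"], "payload": "response"}

frame = {"src_mac": "02:00:00:00:00:99", "dst_mac": DIRECTOR_MAC,
         "src_ip": "198.51.100.7", "dst_ip": VIP, "payload": "GET /"}
forwarded = dr_forward(frame, turn=0)  # same IP addresses, new destination MAC
print(dr_reply(forwarded))             # the reply never passes through the Director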
TUN Model
The TUN model is implemented with IP tunneling and is otherwise similar to the DR model. The difference lies in how the load balancer (Director Server) and the application servers (Real Servers) communicate: the load balancer forwards a user's request to a Real Server through an IP tunnel, and the Real Server responds to the user directly (a sketch follows the feature list below).
The TUN model has the following features:
Cluster nodes do not need to share a network and can be located anywhere.
The RIP must be a public IP address.
The load balancer handles only inbound requests; replies go from the Real Servers directly to the clients.
Port mapping is not supported.
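As a rough analogue of the TUN model, the sketch below wraps the original packet in an outer header addressed to the Real Server (IP-in-IP encapsulation); the Real Server then strips the outer header and, as in DR, replies to the client directly. Addresses and function names are assumptions for illustration only.

VIP = "203.0.113.10"
REAL_SERVER_IPS = ["198.51.100.21", "198.51.100.22"]  # public RIPs, may be anywhere

def tun_forward(packet: dict, turn: int) -> dict:
    """Director: encapsulate the original packet in an outer header addressed to a RIP."""
    rip = REAL_SERVER_IPS[turn % len(REAL_SERVER_IPS)]
    return {"outer_src": "203.0.113.1", "outer_dst": rip, "inner": packet}

def tun_reply(tunneled: dict) -> dict:
    """Real Server: strip the outer header and answer the client with the VIP as source."""
    inner = tunneled["inner"]
    return {"src": VIP, "dst": inner["src"], "payload": "response"}

packet = {"src": "192.0.2.55", "dst": VIP, "payload": "GET /"}
print(tun_reply(tun_forward(packet, turn=0)))  # goes straight back to the client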

************************************************************
The above describes the working models of LVS. The following describes the scheduling algorithms LVS supports.
A scheduling algorithm can also be thought of as the load-balancing method. As mentioned above, the front-end load balancer (Director Server) distributes user requests to the back-end application servers (Real Servers). How does the Director Server decide which Real Server receives each request? It follows the scheduling algorithm. LVS supports as many as ten scheduling algorithms; several common ones are described below, with a toy sketch after the list.
Round Robin (RR): user requests are distributed evenly across the Real Servers.
Weighted Round Robin (WRR): each Real Server is assigned a weight; servers with better performance get a higher weight and weaker servers a lower one, making fuller use of server resources.
Least Connections (LC): each request is dynamically dispatched to the Real Server with the fewest established connections.
Weighted Least Connections (WLC): like LC, but each server's connection count is weighed against its weight, so better-performing servers with higher weights receive more requests.
In addition to the above four, LVS also offers destination hashing, source hashing, shortest expected delay, locality-based least connections, and locality-based least connections with replication scheduling.
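The first four algorithms can be sketched in a few lines of Python. This is only a toy model over an assumed data structure (a list of servers, each with a weight and a current connection count), not the in-kernel IPVS implementation.

from itertools import cycle

# Assumed toy data: each Real Server has a weight and a current connection count.
servers = [
    {"name": "rs1", "weight": 3, "connections": 12},
    {"name": "rs2", "weight": 1, "connections": 5},
]

def round_robin(servers):
    """RR: hand servers out one after another, ignoring weight and load."""
    return cycle(s["name"] for s in servers)

def weighted_round_robin(servers):
    """WRR: like RR, but a server with weight w appears w times per cycle."""
    return cycle(s["name"] for s in servers for _ in range(s["weight"]))

def least_connections(servers):
    """LC: pick the server with the fewest established connections."""
    return min(servers, key=lambda s: s["connections"])["name"]

def weighted_least_connections(servers):
    """WLC: pick the server with the smallest connections-to-weight ratio."""
    return min(servers, key=lambda s: s["connections"] / s["weight"])["name"]

rr = round_robin(servers)
print([next(rr) for _ in range(4)])         # ['rs1', 'rs2', 'rs1', 'rs2']
print(least_connections(servers))           # 'rs2' has fewer raw connections
print(weighted_least_connections(servers))  # 'rs1' wins once weight is considered: 12/3 < 5/1

The contrast between the last two calls shows why weighting matters: rs2 has fewer connections in absolute terms, but relative to its weight rs1 is actually the less loaded server.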
************************************************************
As described above, an LVS load-balancing cluster has one fatal drawback: if the Director Server fails, the whole cluster is paralyzed. This is why, as mentioned earlier, load-balancing clusters are usually combined with high-availability clusters, so that the failure of a single Director Server does not bring down the entire system.