Introduction to Web Server Clusters and Server Load Balancing

Source: Internet
Author: User

Cluster
A cluster is a loosely coupled multi-processor system composed of a group of independent computer systems that communicate with each other over a network. Applications can exchange messages over the network to implement distributed computing.
Load Balancing
Server load balancing is a dynamic balancing technology. It analyzes data packets in real time, tracks traffic conditions in the network, and distributes tasks in a reasonable, balanced manner. Built on the existing network structure, it provides a cheap and effective way to expand server bandwidth and increase server throughput, enhancing network data-processing capability and improving network flexibility and availability.
Features
(1) High availability (HA). With cluster management software, when the master server fails, a backup server can automatically take over its work and switch over in time, providing uninterrupted service to users.
(2) High-performance computing (HPC). The cluster makes full use of the resources of every computer to perform complex operations in parallel; this is typically used in scientific computing such as genetic analysis and chemical analysis.
(3) Load balancing. The load is distributed across the computers in the cluster according to a given algorithm, reducing the pressure on the master server and lowering the hardware and software requirements placed on it.
LVS system structure and features
1. Linux Virtual Server, LVS for short, was initiated and is led by Dr. Wensong Zhang, a Linux developer in China. It is a Linux-based server cluster solution whose goal is a system with good scalability, high reliability, high performance, and high availability. Many commercial cluster products, such as Red Hat's Piranha and TurboLinux's Turbo Cluster, are based on the core LVS code.
2. Architecture: a server cluster built with LVS is architecturally transparent; the end user sees only a single virtual server. The physical servers can be connected by a high-speed LAN or by a WAN distributed across regions. The front end is the load balancer, which distributes incoming service requests to the physical servers behind it, so that the entire cluster behaves like one virtual server serving a single IP address.
3. Operating principles, advantages, and disadvantages of LVS: LVS is implemented mainly on the load balancer, which is a Linux system running a 2.2.x kernel with the LVS patch applied. The patch can be built into the kernel by recompiling it, or inserted into the running kernel as a dynamic module.
The load balancer can run in one of three modes:
(1) Virtual server via NAT (VS-NAT): network address translation. The address translator holds a legitimate IP address reachable from the outside. It rewrites the source address of packets leaving the intranet, so that to the outside world they appear to come from the translator itself; when packets arrive from outside, it determines which intranet node each packet should be forwarded to. The advantage is that public IP addresses are saved and the internal hosts are hidden; the disadvantage is lower efficiency, because traffic returned to the requester must also pass through the translator.
(2) Virtual server via IP tunneling (VS-TUN): implements the virtual server using IP tunneling. This forwarding mechanism works even when cluster nodes are not on the same network segment, because IP packets are encapsulated inside other network traffic. For security, a VPN or leased line should be used for the tunnel. The cluster can provide TCP/IP-based services such as web, mail, news, DNS, and proxy servers.
(3) Virtual server via direct routing (VS-DR): implements the virtual server using direct routing. This method can be used when the machines participating in the cluster and the machine acting as the controller and manager are on the same network segment. When the controller receives a request packet, it forwards the packet directly to a cluster node. The advantage is that traffic returned to the client does not pass through the controller, so responses are fast and overhead is low.
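As a rough sketch of how these three modes are selected in practice (using the example addresses that appear later in this article; the exact commands are an illustration, not a verbatim transcript of the author's setup), ipvsadm chooses the forwarding mode per real server with a single flag:

```shell
# Create a virtual HTTP service on the VIP 10.0.0.2, scheduler wlc.
ipvsadm -A -t 10.0.0.2:80 -s wlc

# Attach real servers; the flag picks the forwarding mode:
#   -g = direct routing (VS-DR), -i = IP tunneling (VS-TUN), -m = NAT (VS-NAT)
ipvsadm -a -t 10.0.0.2:80 -r 192.168.10.100:80 -g -w 1
ipvsadm -a -t 10.0.0.2:80 -r 192.168.10.101:80 -g -w 1

# List the current virtual server table.
ipvsadm -L -n
```

These commands must run as root on the load balancer with the LVS patch active.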
Take four servers as an example to achieve load balancing:
Install and configure LVS
1. Preparations before installation:
(1) First, LVS does not require uniform server specifications within the cluster. On the contrary, you can adjust the load-allocation policy according to each server's configuration and load, making full use of every server in the cluster. The addresses used in this example are shown in the following table:
SRV     eth0             eth0:0      eth1              eth1:0

VS1     10.0.0.1         10.0.0.2    192.168.10.1      192.168.10.254
vsbak   10.0.0.3         -           192.168.10.102    -
real1   192.168.10.100   -           -                 -
real2   192.168.10.101   -           -                 -

10.0.0.2 is the IP address through which users access the service.
(2) Among the four servers, VS1 acts as the virtual server (that is, the load balancer) and forwards user access requests to real1 and real2 in the cluster, which process them. The client is the test machine and can run any operating system.
(3) All machines run Red Hat 6.2. The kernel on VS1 and vsbak is 2.2.19 patched with ipvs. All real servers use a 24-bit subnet mask, and VS1 and vsbak are likewise on the 10.0.0.0/24 network segment.
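The address table above can be realized with interface aliases. A minimal sketch for VS1, assuming the classic ifconfig tooling shipped with Red Hat 6.2 (the netmasks follow the 24-bit masks stated above):

```shell
# Hypothetical sketch for VS1: configure the physical interfaces and the
# aliases; eth0:0 carries the externally visible VIP 10.0.0.2.
ifconfig eth0   10.0.0.1       netmask 255.255.255.0 up
ifconfig eth0:0 10.0.0.2       netmask 255.255.255.0 up   # virtual IP (VIP)
ifconfig eth1   192.168.10.1   netmask 255.255.255.0 up
ifconfig eth1:0 192.168.10.254 netmask 255.255.255.0 up   # gateway for real servers
```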
2. Understand related terms in LVS
(1) ipvsadm: the user-space administration tool for LVS. Compile and install ipvsadm on the load balancer.
(2) Scheduling algorithms: the LVS load balancer supports the following scheduling rules: round-robin (rr) and weighted round-robin (wrr), in which each new connection is assigned to the physical servers in turn; and least-connection (lc) and weighted least-connection (wlc), in which each new connection is assigned to the server with the least load.
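The scheduling rule is selected with ipvsadm's -s flag when the virtual service is created, and the weighted variants take per-server weights via -w. A hedged sketch with the example VIP (the weights here are illustrative, not from the original setup):

```shell
# The -s flag names the scheduler: rr, wrr, lc, or wlc.
ipvsadm -A -t 10.0.0.2:80 -s wrr

# With wrr/wlc, -w weights each real server; a machine twice as
# powerful can be given twice the weight and receive twice the load.
ipvsadm -a -t 10.0.0.2:80 -r 192.168.10.100:80 -g -w 2
ipvsadm -a -t 10.0.0.2:80 -r 192.168.10.101:80 -g -w 1
```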
(3) Persistent client connection (PCC for short; supported only in kernel 2.2.10 and later). All connections from the same client IP address are sent to the same physical server, and a timeout value, in seconds, can be set. PCC is intended for HTTPS and cookie-based services: after the first connection, all subsequent connections from the same client (including those from other ports) are sent to the same physical server. This brings a problem of its own, because, by some estimates, as much as 25% of Internet clients may appear to come from the same IP address (for example, behind a shared proxy).
(4) Persistent port connection (PPC) scheduling: since kernel 2.2.12, persistence has evolved from a scheduling algorithm in its own right (one choice among rr, wrr, lc, wlc, and pcc) into a switch option that can be combined with rr, wrr, lc, or wlc; if no scheduling algorithm is selected, ipvsadm defaults to wlc. Under PPC, connections are assigned per port, so requests from ports 80 and 443 of the same client may be allocated to different physical servers. Unfortunately, if your website needs cookies, this can cause problems: a cookie set over HTTP on port 80 will not be visible to a session handled by a different server on port 443, so cookies may behave abnormally.
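Persistence as described above is exposed through ipvsadm's -p flag; its argument is the timeout in seconds. A hedged sketch (the 360-second timeout is an illustrative value):

```shell
# Make connections from the same client stick to one real server for
# 360 seconds -- useful for HTTPS and cookie-based sessions.
ipvsadm -A -t 10.0.0.2:443 -s wlc -p 360
ipvsadm -a -t 10.0.0.2:443 -r 192.168.10.100:443 -g
ipvsadm -a -t 10.0.0.2:443 -r 192.168.10.101:443 -g
```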
(5) Load-node feature of the Linux Director: allows the load balancer itself to process user requests as well.
(6) IPVS connection synchronization.
(7) The ARP problem of LVS/TUN and LVS/DR: this problem exists only in the LVS/DR and LVS/TUN modes.
