Linux Cluster service LVS


The Linux Virtual Server (LVS) project provides the most widely used load balancing software on the Linux operating system.

Cluster definition:

Cluster technology is a relatively young technology. Through clustering, you can obtain relatively high returns in performance, reliability and flexibility at a relatively low cost, and task scheduling is the core technology of a cluster system. This article briefly discusses the definition of cluster systems, their development trends, task scheduling and related issues. A cluster is a group of independent computers interconnected by a high-speed network; they form a group and are managed as a single system. When a client interacts with a cluster, the cluster behaves like a single server. Cluster configurations are used to improve availability and scalability.

Key benefits of a clustered system: high scalability, high availability, high performance, and cost effectiveness.

The three most mainstream types of Linux clusters currently in use are:

Cluster type 1: LB (Load Balancing)

When a load-balancing cluster is running, the requests sent by users usually pass through one or more front-end load balancers (Director Server), which distribute them to a set of backend application servers (Real Server) according to a scheduling algorithm, achieving high performance and high availability for the whole system. Such a computer cluster is sometimes called a server farm.

High-availability clusters and load-balancing clusters generally use similar technologies, or combine both high availability and load balancing characteristics.

Cluster type 2: HA (High Availability)

In general, when a node in the cluster fails, the tasks running on it are automatically transferred to other healthy nodes. A node can also be taken offline for maintenance and then brought back online without affecting the operation of the whole cluster.

High-availability clusters: a cluster with high-availability capabilities to ensure that services remain online.

Metric: availability = online time / (online time + fault handling time)
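For example (purely illustrative numbers, not from this article): a service that is online for 8,760 hours in a year and spends 9 hours in fault handling has availability = 8760 / (8760 + 9) ≈ 99.9%.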

Cluster type 3: HPC (High Performance Computing)

High-performance computing clusters are used in the field of scientific computing; they improve computing power by distributing computing tasks across the cluster's compute nodes. The most popular HPC clusters use the Linux operating system and other free software to perform parallel computation. This cluster configuration is often referred to as a Beowulf cluster. Such clusters typically run specific programs to exploit the parallel capability of the HPC cluster; these programs usually use a specific runtime library, such as an MPI library designed for scientific computing.

HPC clusters are particularly well suited to computing jobs in which large amounts of data are exchanged between compute nodes during the calculation, for example when one node's intermediate results affect other nodes.

High-performance processing clusters:

They use distributed storage and distributed file systems; a distributed file system cuts a large task into small tasks, which are processed separately.

LVS system structure:

Load Balancer, Server Array, Shared Storage

Load Balancer layer:

This is the front end of the whole cluster service. It consists of one or more schedulers (Director Server), and the LVS software runs on these dispatch servers.

Functions of the dispatch server:

According to the scheduling algorithm, it splits user requests by IP and forwards the packets to the backend application servers (Real Server). If the monitoring module ldirectord is installed on the dispatch server, the dispatch server marks a failed application server as unavailable until that application server returns to normal.

Server Group layer:

This layer is made up of one or more application servers (Real Server), each providing the same service. The dispatch server directs the user's request to a specific application server, and that backend application server then responds to the client.

Shared Storage layer:

Its function is to ensure that the data provided by the application servers in the server group stays consistent.

How shared storage is implemented:

Disk arrays, or a cluster file system (e.g. OCFS2).

LVS is a mechanism in the Linux system similar to iptables, and its rules are defined with a command-line tool (ipvsadm) similar to the iptables command.

LVS works in the kernel space of the Linux system and hooks into the INPUT chain of iptables. When a client request arrives on the INPUT chain, it is checked against the LVS rules: if it is a request for the local host, it is passed up to user space; if it is found to be destined for a cluster service, the request is sent to the POSTROUTING chain and on to a backend application server, which responds to the user's request.

Note: As mentioned above, LVS actually works on the iptables INPUT and POSTROUTING chains, so iptables rules and LVS cannot be used simultaneously on this system.

The composition of LVS:

ipvsadm: the command-line tool used to manage cluster services; it works in user space on the Linux system.

ipvs: the kernel module that implements LVS; it works in kernel space (it is the framework, and ipvsadm adds the rules that drive the ipvs functionality).

Note: In kernels earlier than Linux 2.4.23 the module is not present by default; you need to patch the kernel manually and compile the module into it to use this functionality.
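As a quick check on a modern system (the commands below assume a typical CentOS/RHEL layout; adjust paths to your distribution), you can verify that the ipvs module is available and loaded:

lsmod | grep ip_vs || modprobe ip_vs       # load the ipvs kernel module if it is not already loaded
grep -i ip_vs /boot/config-$(uname -r)     # confirm the running kernel was built with IPVS support
cat /proc/net/ip_vs                        # the IPVS virtual server table appears once the module is loaded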

Types of LVS:

LVS-NAT mode, LVS-DR mode, LVS-TUN mode

NAT (Network Address Translation)

Principle: The LVS server rewrites the destination address in the IP header of the packet sent by the user to the address of a backend real server and forwards the request to that application server. When the application server's response passes back through the LVS server, the LVS server rewrites the source address to the VIP address configured on its interface.

NAT mode features:

All requests and responses from users must pass through the LVS server.
Cluster nodes and the Director must be on the same IP network.
RIPs are usually private addresses, used only for communication between cluster nodes.
The Director sits between the clients and the Real Servers and handles all inbound and outbound traffic.
Real Servers must point their gateway to the DIP.
Port mapping is supported.
Real Servers can run any OS.
The LVS server must have two network cards.
In larger deployments, the Director easily becomes the system bottleneck.

DR (Direct Routing)

DR mode works at the data link layer. Its principle: the LVS server and the application servers use the same IP address (the VIP) to serve externally, but only the LVS server responds to ARP requests; all application servers stay silent for ARP requests for that IP. The gateway therefore directs all traffic for the VIP to the LVS server. When the LVS server receives a user request packet, it splits the traffic by IP according to the scheduling algorithm, rewrites the destination MAC address accordingly, and sends the packet to the corresponding backend application server.

Note: Because the LVS server modifies layer 2 packets, the LVS server and the application servers must be in the same broadcast domain.

DR mode features:

Cluster nodes and the Director must be on the same physical network.
RIPs can be public addresses, which makes remote management and monitoring convenient.
The Director handles only inbound requests; response packets are sent by the Real Servers directly to the clients.
Real Servers must not point their gateway to the DIP.
Port mapping is not supported.

Note: In DR mode, the LVS server is only responsible for receiving user requests, splitting traffic by IP according to the scheduling algorithm and forwarding it by direct routing; the response packets are handled by the real servers themselves.

DR mode has the best performance of the three modes. Its only drawback is that the LVS server and the backend application servers must be in the same broadcast domain, so the cluster cannot span networks.

TUN (IP tunnel mode)

In TUN mode, LVS re-encapsulates the TCP/IP request and forwards it to the target application server, and the target application server responds to the user directly. The LVS router and the real servers are connected by IP tunneling, so they can sit in different networks.

Note: Because the application server needs to unpack the packets sent by LVS, the application server must also support the IP tunnel protocol (a kernel networking option).
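As an illustrative sketch only (the VIP and interface names below are assumptions, not taken from this article), bringing up a TUN-mode real server typically looks like this:

vip=10.0.0.10                                       # illustrative VIP
modprobe ipip                                       # load the IP-over-IP tunnel module
ifconfig tunl0 $vip netmask 255.255.255.255 up      # bind the VIP to the tunnel interface
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore   # stay silent for ARP requests for the VIP
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter    # accept decapsulated packets destined for the VIP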

TUN mode features:

Cluster nodes can span the Internet.
RIPs must be public addresses.
The Director handles only inbound requests; response packets are sent by the Real Servers directly to the clients.
Real Server gateways must not point to the Director.
Only an OS that supports tunneling can be used as a Real Server.
Port mapping is not supported.

The eight scheduling algorithms of LVS load balancing:

rr, wrr, lc, wlc, lblc, lblcr, dh, sh

Round Robin (rr):

This algorithm distributes user requests to the backend application servers in turn, treating all real servers equally without counting the actual connections or load on any particular server.

Weighted Round Robin (wrr):

Based on the different load capacities of the application servers, this algorithm assigns them different weights; application servers with stronger processing power get larger weights and therefore serve more user requests.

Least Connections (lc):

This algorithm assigns user requests to the application server with the fewest active connections.

Weighted Least Connections (wlc):

Based on the different load capacities of the application servers, this algorithm assigns them different weights; an application server with a large weight and few connections is given priority when user requests are assigned.

Locality-Based Least Connections (lblc):

This is a load balancing algorithm that works on the destination IP address and is mainly used in cache cluster systems. Based on the destination IP of the user's request, it finds the application server most recently used for that address; if that server is not overloaded, the request is sent to it. If that server is unavailable or heavily loaded, the least-connections algorithm is used to choose a target application server.

Locality-Based Least Connections with Replication (lblcr)

This algorithm also balances load based on the destination IP address and is mainly used in cache cluster systems. The difference from lblc is that lblcr maintains a mapping from a destination IP address to a set of servers, while lblc maintains a mapping from a destination IP address to a single application server.

Destination Hashing (dh)

This algorithm uses the destination address of the user's request as the hash key and tries to find the corresponding application server in a statically assigned hash table. If that application server is available and not overloaded, the user's request is sent to it; otherwise nothing is returned.

Source Hashing (sh)

This algorithm uses the source address of the request as the hash key and tries to find the corresponding application server in a statically assigned hash table. If that application server is available and not overloaded, the user's request is sent to it; otherwise nothing is returned.
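As a small sketch (the addresses below are illustrative assumptions), source hashing, or alternatively the -p persistence option, can be used when a given client should keep hitting the same application server:

ipvsadm -A -t 10.0.0.10:80 -s sh              # source-hash scheduling: same client IP, same real server
ipvsadm -A -t 10.0.0.10:443 -s rr -p 300      # alternative: any scheduler plus 300 seconds of persistence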

LVS IP address naming conventions:

Director's IP (DIP) address: the middle layer; depending on the mode it receives and forwards user requests, and it is the address configured on the Director Server for communicating with the internal network.
Virtual IP (VIP) address: the address that provides the service to the outside world.
Real IP (RIP) address: the Real Server IP, i.e. the address of a backend host providing the application service.
Client computer's IP (CIP) address: the client's address.

The ipvsadm command in detail:

ipvsadm: the command-line tool used to manage cluster services; it drives the ipvs module in the system kernel.

Basic usage of the ipvsadm command:

-A: add a virtual server (VIP) record to the kernel's virtual server table
-E: edit a VIP record in the kernel's virtual server table
-D: delete a VIP record from the kernel's virtual server table
-C: clear all VIP records from the kernel's virtual server table
-S: save the virtual server rules
-R: restore the virtual server rules
-a: add an application server address to a virtual server entry
-e: edit an application server address record in a virtual server entry
-d: delete an application server address record from a virtual server entry
-L/-l: list the kernel's virtual server table
-Z: reset the virtual server counters in the kernel to zero
-t service-address: the virtual server uses a TCP service
-u service-address: the virtual server uses a UDP service
-s scheduler: specify the scheduling algorithm
-p timeout: persistence timeout on an application server, in seconds
-r service-address: specify the address of an application server
-g: set the LVS forwarding mode to direct routing (DR, the default)
-i: set the LVS forwarding mode to IP tunnel mode
-m: set the LVS forwarding mode to NAT (address translation)
-w: set the weight of an application server
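Putting a few of these options together, a minimal sketch (the addresses and the save path are illustrative assumptions) of defining a NAT-mode cluster service looks like this:

ipvsadm -C                                            # clear any existing virtual server table
ipvsadm -A -t 10.0.0.10:80 -s rr                      # TCP virtual service on the VIP, round-robin scheduling
ipvsadm -a -t 10.0.0.10:80 -r 10.0.0.11:80 -m -w 1    # add two NAT-mode real servers with equal weight
ipvsadm -a -t 10.0.0.10:80 -r 10.0.0.12:80 -m -w 1
ipvsadm -Ln                                           # list the resulting table
ipvsadm -S > /etc/sysconfig/ipvsadm                   # save the rules (path varies by distribution)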

Common open source load balancing software: Nginx, LVS, Keepalived
Commercial hardware load balancing devices: F5, NetScaler
1. LB and LVS introduction
An LB cluster is short for load balance cluster, that is, a load balancing cluster;
LVS is an open source software project that implements load balancing clusters;
The LVS architecture can be logically divided into the scheduling layer (Director), the server cluster layer (Real Server) and the shared storage layer.


LVS has three modes of operation:
NAT (the scheduler rewrites the destination IP of the request from the VIP to the IP of a real server; returned packets also pass through the scheduler, which rewrites the source address back to the VIP)
TUN (the scheduler encapsulates the request packet and sends it over an IP tunnel to a backend real server, and the real server returns the data directly to the client without passing through the scheduler)
DR (the scheduler rewrites the destination MAC address of the request packet to the MAC address of a real server, and the response returns to the client without passing through the scheduler)

LVS scheduling algorithms: Round Robin (rr), Weighted Round Robin (wrr), Least Connections (lc), Weighted Least Connections (wlc), and so on;

2. LVS/NAT configuration
Preparation:
Prepare three machines with a clean CentOS 6.6 system; the Director machine needs two network cards;
Of the three servers, one acts as the director and two as real servers.

The director has a public network IP, 192.168.22.11, and an intranet IP, 192.168.11.11.
The two real servers have only intranet IPs, 192.168.11.100 and 192.168.11.101, and their intranet gateway must be set to the director's intranet IP, 192.168.11.11:
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.11.100
GATEWAY=192.168.11.11

After changing the gateway you need to restart the network card, bringing it down and then up in a single command; if you only run ifdown, the SSH session will be interrupted:

# ifdown eth1 && ifup eth1
Install ipvsadm on the director: yum install -y ipvsadm

The two real servers install Nginx; the EPEL extension repository must be installed first.
yum install -y epel-release
yum install -y nginx
After the installation completes, start Nginx: /etc/init.d/nginx start
Change the hostnames of the three servers to dr, rs1 and rs2.
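Optionally, before adding any LVS rules it can help to confirm from the director that both real servers answer on the intranet (a simple sanity check, not part of the original steps):

curl -s http://192.168.11.100/ | head -n 1
curl -s http://192.168.11.101/ | head -n 1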


On the director, vi /usr/local/sbin/lvs_nat.sh and add the following content:
#!/bin/bash

echo 1 > /proc/sys/net/ipv4/ip_forward

echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects

iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.11.0/24 -j MASQUERADE

IPVSADM='/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.22.11:80 -s wlc
$IPVSADM -a -t 192.168.22.11:80 -r 192.168.11.100:80 -m -w 2
$IPVSADM -a -t 192.168.22.11:80 -r 192.168.11.101:80 -m -w 1

Running this script directly completes the LVS/NAT configuration:

/bin/bash /usr/local/sbin/lvs_nat.sh
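The script enables packet forwarding only for the running kernel; to make it survive a reboot you would typically also persist it (the sysctl.conf path below is the usual CentOS 6 location, stated here as an assumption):

cat /proc/sys/net/ipv4/ip_forward                     # should print 1 after the script has run
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf    # persist the setting across reboots
sysctl -p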

On dr, view the nat table in iptables:
[root@dr ~]# iptables -t nat -nvL
Chain POSTROUTING (policy ACCEPT 1 packets, 124 bytes)
 pkts bytes target       prot opt in  out  source            destination
    0     0 MASQUERADE   all  --  *   *    192.168.11.0/24   0.0.0.0/0
ipvsadm -ln views the ipvsadm rules.

Opening 192.168.11.100 and 192.168.11.101 in a browser shows the Nginx welcome page.

Modify the HTML files on rs1 and rs2 so they can be distinguished:
[root@rs1 ~]# cat /usr/share/nginx/html/index.html
rs1rs1rs1
[root@rs2 ~]# cat /usr/share/nginx/html/index.html
rs2rs2rs2

Test the content of the two machines through a browser:
opening 192.168.22.11 in a browser displays the HTML content of rs1 or rs2.

With the scheduling rule set to wlc and a weight of 2 for rs1, test it.

Test with curl from another Linux machine; rs1 appears twice for every one rs2, alternating back and forth, which shows it is OK:
# curl 192.168.22.11
rs1rs1rs1
# curl 192.168.22.11
rs1rs1rs1
# curl 192.168.22.11
rs2rs2rs2

Running ipvsadm -ln on the dr machine shows that the connection counts keep the same ratio as the weights:
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.22.11:80 wlc
  -> 192.168.11.100:80            Masq    2      0          26
  -> 192.168.11.101:80            Masq    1      0

3. LVS/DR configuration
In DR mode, the director is only responsible for distributing requests and carries only inbound traffic, so throughput can be very large; the real servers deliver data directly to users, so security is somewhat reduced.
The machines in a DR setup need public IPs; the virtual IP must be configured on every machine. Users send requests to the virtual IP, and the responses are returned by the real servers chosen in turn.
Three machines; each machine only needs one IP configured. The VIP appears when the scripts are executed and does not need to be set manually.
Director (eth1: 192.168.11.11, VIP eth1:0: 192.168.11.110)
Real Server1 (eth1: 192.168.11.100, VIP lo:0: 192.168.11.110)
Real Server2 (eth1: 192.168.11.101, VIP lo:0: 192.168.11.110)

On the director, vim /usr/local/sbin/lvs_dr.sh and add the following content:
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/sbin/ipvsadm
vip=192.168.11.110
rs1=192.168.11.100
rs2=192.168.11.101
ifconfig eth1:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev eth1:0
$ipv -C
$ipv -A -t $vip:80 -s rr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

On both real servers: vim /usr/local/sbin/lvs_dr_rs.sh
#!/bin/bash
vip=192.168.11.110
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

Then execute on the director: bash /usr/local/sbin/lvs_dr.sh
And on the two real servers: bash /usr/local/sbin/lvs_dr_rs.sh

After execution, ifconfig shows the virtual IP address: the director shows eth1:0, and rs1/rs2 show lo:0.
eth1:0    Link encap:Ethernet  HWaddr 00:0c:29:70:4e:58
          inet addr:192.168.11.110  Bcast:192.168.11.110  Mask:255.255.255.255
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:18 Base address:0x2080

lo:0      Link encap:Local Loopback
          inet addr:192.168.11.110  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:65536  Metric:1

ipvsadm -ln lists the rules:
[root@dr ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.11.110:80 rr
  -> 192.168.11.100:80            Route   1      0          3
  -> 192.168.11.101:80            Route   1      0          3

Start a separate Linux machine to test; the browser test shows no obvious caching.
Testing with curl 192.168.11.110, each real server appears once in turn, so the rr polling rule is OK:
# curl 192.168.11.110
rs1rs1rs1
# curl 192.168.11.110
rs2rs2rs2
# curl 192.168.11.110
rs1rs1rs1
# curl 192.168.11.110
rs2rs2rs2

Change the scheduling algorithm to wrr with a weight of 2 and execute the script again; it reports that the file already exists. This is because /usr/local/sbin/lvs_dr.sh has already brought up eth1:0, so you need to add "ifconfig eth1:0 down" to the script, after which the error no longer appears:
$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 2
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

[root@dr ~]# bash /usr/local/sbin/lvs_dr.sh
SIOCADDRT: File exists

If one of the real servers goes down, it will still be polled, so requests to it will hang for a while.
To simulate this, stop Nginx on rs2: /etc/init.d/nginx stop
Testing with curl, requests are still sent to rs2, but it keeps reporting that it cannot connect to the host:
# curl 192.168.11.110
rs1rs1rs1
# curl 192.168.11.110
rs1rs1rs1
# curl 192.168.11.110
curl: (7) couldn't connect to host

LVS itself does not remove the dead real server, so it needs to be combined with keepalived.
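As a hedged sketch of what that combination could look like for this DR setup (the VRRP parameters and check timeouts below are illustrative assumptions, not taken from this article), /etc/keepalived/keepalived.conf on the director would contain roughly:

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51        # illustrative value
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.11.110
    }
}

virtual_server 192.168.11.110 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    protocol TCP
    real_server 192.168.11.100 80 {
        weight 2
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 192.168.11.101 80 {
        weight 1
        TCP_CHECK { connect_timeout 3 }
    }
}

With something like this in place, keepalived both manages the VIP and removes a real server from the IPVS table when its TCP check fails, which addresses the problem shown above.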

LVS/DR + keepalived Build load Balancer cluster http://www.linuxidc.com/Linux/2015-06/118647.htm

Lvs+keepalived for four-layer load and high-availability http://www.linuxidc.com/Linux/2015-02/112695.htm

Lvs+keepalived high-availability load-balanced cluster architecture experiment http://www.linuxidc.com/Linux/2015-01/112560.htm

Heartbeat+lvs building a high-availability load-balancing cluster http://www.linuxidc.com/Linux/2014-09/106964.htm

Build LVS Load balance test environment http://www.linuxidc.com/Linux/2014-09/106636.htm

A pressure test report for LVs http://www.linuxidc.com/Linux/2015-03/114422.htm
