LVS Load Balancing cluster


Types of clusters
1. Load balancing cluster: aims to improve the application system's responsiveness, handle as many requests as possible, and keep latency low, achieving high concurrency and high overall performance. For example, with reverse-proxy load distribution, the LB cluster relies on the master node's distribution algorithm to spread client requests across multiple server nodes, relieving the load pressure on the whole system.
2. High-availability cluster: aims to improve the reliability of the system and reduce interruption time as much as possible, ensuring continuity of service and achieving the fault-tolerant effect of high availability (HA). For example, failover, active/standby and multi-level hot standby are all high-availability clustering technologies.
3. High-performance computing cluster: aims to improve the CPU processing speed of the application system and expand its hardware resources and analysis capability, achieving high-performance computing (HPC) power comparable to mainframes and supercomputers. For example, cloud computing and grid computing can also be regarded as high-performance computing. HPC clusters rely on distributed and parallel computing, pooling the CPU, memory and other resources of multiple servers through dedicated hardware and software to achieve computing power that only mainframes and supercomputers would otherwise have.

Tiered structure for load balancing

1. First tier, load scheduler: this is the only access entrance to the whole cluster system. Externally it uses the VIP (virtual IP) address common to all servers, also called the cluster IP address. Usually primary and backup schedulers are configured for hot standby; when the primary scheduler fails, it is smoothly replaced by the backup scheduler to ensure high availability.
2. Second tier, server pool: the application services provided by the cluster (such as FTP and HTTP) are carried by the server pool, in which each node has its own RIP (real IP) address and only handles client requests distributed by the scheduler. When a node temporarily fails, the fault-tolerance mechanism of the load scheduler isolates it; after the error is cleared, the node is re-added to the server pool.
3. Third tier, shared storage: provides stable, consistent file access services for all nodes in the server pool, ensuring the consistency of the whole cluster. In a Linux/UNIX environment, shared storage can be a NAS device or a dedicated server that provides NFS (Network File System) sharing services.

Load balancing working modes
The load scheduling technology of a cluster can distribute requests based on IP, port, content, and so on; IP-based load scheduling is the most efficient. The three most common IP-based load balancing modes are the following.

1. Address translation: referred to as NAT mode, similar to a firewall's private-network structure. The load scheduler acts as the gateway for all server nodes, serving both as the access entrance for clients and as the exit through which the nodes respond to clients. The server nodes use private IP addresses and reside on the same physical network as the load scheduler, which makes this mode more secure than the other two.
2. IP tunnel: referred to as TUN mode, using an open network structure. The load scheduler serves only as the clients' access entrance, and each node replies to clients directly over its own Internet connection rather than through the load scheduler. The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.
3. Direct routing: referred to as DR mode, using a semi-open network structure similar to TUN mode, except that the nodes are not scattered around; they reside on the same physical network as the scheduler. The load scheduler and the node servers are connected through the local network and do not need dedicated IP tunnels.
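For orientation, the three working modes correspond to different forwarding-method options when a real server is added with ipvsadm. The lines below are only a side-by-side sketch reusing the example addresses from later in this article, not a step to run as-is; in practice a single forwarding method is chosen per cluster:
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.30:80 -m -w 1    ## -m: NAT mode (masquerading)
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.30:80 -g -w 1    ## -g: DR mode (direct routing)
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.30:80 -i -w 1    ## -i: TUN mode (IP tunneling)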

LVS Virtual Server
LVS is now part of the Linux kernel and is compiled as the ip_vs module by default, which can be loaded automatically when needed. On CentOS 7 you can manually load the ip_vs module and view the version information of the ip_vs module in the current system as follows.
[root@localhost ~]# modprobe ip_vs
[root@localhost ~]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
1. Load scheduling algorithms of LVS: four are commonly used.
1) Round robin (rr): distributes the received access requests sequentially to each node in the cluster, treating every server equally, regardless of the server's actual number of connections and system load.
2) Weighted round robin (wrr): distributes received access requests according to the processing capacity of the real servers. The scheduler can automatically query the load of each node and dynamically adjust its weight, so that servers with greater processing capacity take on more of the traffic.
3) Least connections (lc): distributes requests based on the number of connections established by each real server, directing received access requests to the node with the fewest connections. If all server nodes have similar performance, this approach balances the load better.
4) Weighted least connections (wlc): when there are large performance differences between server nodes, the weight can be adjusted automatically for the real servers, and nodes with higher weights take a larger share of the active connection load.
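As a small aside beyond the original steps, the scheduler of an already-created virtual service can be switched between these algorithms with the edit option of ipvsadm; the VIP below is the example address used later in this article:
[root@localhost ~]# ipvsadm -E -t 192.168.10.20:80 -s wlc    ## switch the existing virtual service to weighted least connections
[root@localhost ~]# ipvsadm -E -t 192.168.10.20:80 -s rr     ## switch it back to round robin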
2. Using the ipvsadm management tool
ipvsadm is the LVS cluster management tool. It is used on the load scheduler to add, remove, and view cluster entries by calling the ip_vs module, and to check the cluster's health status. On CentOS 7 it has to be installed manually (package ipvsadm.x86_64 0:1.27-7.el7).
[root@localhost ~]# mount /dev/cdrom /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
[root@localhost ~]# rm -rf /etc/yum.repos.d/*
[root@localhost ~]# vi /etc/yum.repos.d/centos.repo
[local]
name=local
baseurl=file:///mnt/
enabled=1
gpgcheck=0
:wq
[root@localhost ~]# yum -y install ipvsadm
The management of an LVS cluster mainly includes creating a virtual server, adding server nodes, viewing cluster node status, deleting server nodes, and saving the load distribution policy. The following shows how to perform each of these operations with the ipvsadm command.
1) Create a virtual server
If the cluster's VIP address is 192.168.10.20 and load distribution is to be provided for TCP port 80 using the round robin scheduling algorithm, the corresponding ipvsadm command is as follows. For the load scheduler, the virtual IP must be an IP address actually enabled on the machine.
[root@localhost ~]# ipvsadm -A -t 192.168.10.20:80 -s rr
In the above command, option -A means add a virtual server;
option -t specifies the virtual IP address and port;
option -s specifies the load scheduling algorithm: rr (round robin), wrr (weighted round robin), lc (least connections), wlc (weighted least connections).
2) Add server nodes
Add four server nodes, with IP addresses 192.168.10.30, 40, 50 and 60, to the virtual server 192.168.10.20. The commands are as follows.
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.30:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.40:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.50:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.60:80 -m -w 1
In the above commands, option -a means add a real server;
option -t specifies the virtual IP address and TCP port;
option -r specifies the RIP address and TCP port;
option -m specifies NAT cluster mode (-g for DR mode, -i for TUN mode);
option -w sets the weight (a weight of 0 means the node is paused).
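As a hedged example beyond the original steps, an existing node can be paused and resumed by editing its weight in place with option -e, which modifies a real server that has already been added:
[root@localhost ~]# ipvsadm -e -t 192.168.10.20:80 -r 192.168.10.30:80 -m -w 0    ## weight 0: stop sending new requests to this node
[root@localhost ~]# ipvsadm -e -t 192.168.10.20:80 -r 192.168.10.30:80 -m -w 1    ## restore the original weight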
3) View cluster node status
[root@localhost ~]# ipvsadm -ln    ## view server node status
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  192.168.10.20:80 rr
  -> 192.168.10.30:80      Masq    1      0          0
  -> 192.168.10.40:80      Masq    1      0          0
  -> 192.168.10.50:80      Masq    1      0          0
  -> 192.168.10.60:80      Masq    1      0          0
In the above output, Masq in the Forward column stands for masquerading and indicates that the cluster mode is NAT; if the column showed Route instead, the cluster mode would be DR.
4) Delete server nodes
To remove an individual node from the server pool, use option -d. The target object must be specified, including the node address and the virtual IP address.
For example, the following command deletes node 192.168.10.30 from the LVS cluster 192.168.10.20.
[root@localhost ~]# ipvsadm -d -r 192.168.10.30:80 -t 192.168.10.20:80
[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  192.168.10.20:80 rr
  -> 192.168.10.40:80      Masq    1      0          0
  -> 192.168.10.50:80      Masq    1      0          0
  -> 192.168.10.60:80      Masq    1      0          0
To delete the entire virtual server, use option -D and specify the virtual IP address; no node needs to be specified.
For example:
[root@localhost ~]# ipvsadm -D -t 192.168.10.20:80
[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn

5) Save the load distribution policy
Use the export/import tools ipvsadm-save (save the LVS policy) and ipvsadm-restore (restore the LVS policy). The policy can also be quickly cleared and rebuilt.
[root@localhost ~]# ipvsadm -A -t 192.168.10.20:80 -s rr
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.60:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.50:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.40:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.20:80 -r 192.168.10.30:80 -m -w 1
[root@localhost ~]# ipvsadm-save > /etc/sysconfig/ipvsadm    ## save the policy
[root@localhost ~]# cat /etc/sysconfig/ipvsadm               ## view the saved result
-A -t localhost.localdomain:http -s rr
-a -t localhost.localdomain:http -r 192.168.10.30:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.10.40:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.10.50:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.10.60:http -m -w 1
[root@localhost ~]# systemctl stop ipvsadm     ## stop the service (clears the policy)
[root@localhost ~]# systemctl start ipvsadm    ## start the service (rebuilds the rules)
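A previously saved policy can also be re-imported by hand with ipvsadm-restore, which the text mentions above; the following sketch assumes the file saved in the step before:
[root@localhost ~]# ipvsadm -C                                 ## clear the current policy first
[root@localhost ~]# ipvsadm-restore < /etc/sysconfig/ipvsadm   ## reload the rules saved earlier
[root@localhost ~]# ipvsadm -ln                                ## confirm the virtual server and nodes are back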

Shared storage server: NFS
NFS is a network file system protocol based on TCP/IP transport. Using the NFS protocol, clients can access shared resources on a remote server as if they were accessing a local directory.
1. Publishing shared resources with NFS
The NFS service relies on the RPC (Remote Procedure Call) mechanism to complete the mapping from remote to local, so the following two packages are needed:
nfs-utils: for NFS share publishing and access
rpcbind: for RPC support
The server operated on is 192.168.10.70.
1) Install the nfs-utils and rpcbind packages and set them to start at boot.
[root@localhost ~]# mount /dev/cdrom /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
[root@localhost ~]# rm -rf /etc/yum.repos.d/*
[root@localhost ~]# vi /etc/yum.repos.d/centos.repo
[local]
name=local
baseurl=file:///mnt/
enabled=1
gpgcheck=0
:wq

[root@localhost ~]# yum -y install nfs-utils rpcbind
[root@localhost ~]# systemctl enable nfs
[root@localhost ~]# systemctl enable rpcbind
2) Set up the shared directory
The configuration file for NFS is /etc/exports; by default the file is empty (nothing is shared).
The format of the exports file is "directory location  client address(permission options)".
[root@localhost ~]# mkdir -p /opt/wwwroot/    ## create the shared path
[root@localhost ~]# vi /etc/exports           ## edit the configuration file
/opt/wwwroot 192.168.10.0/24(rw,sync,no_root_squash)
:wq
The client address can be a hostname, an IP address or an IP network segment, and the wildcard characters * and ? are allowed. In the permission options, rw means read-write is allowed (ro is read-only), sync means synchronous writes, and no_root_squash means that when a client accesses the share as root it is given local root privileges (the default, root_squash, maps root to the nfsnobody user).
If the same shared directory is to be shared with different clients, with different permissions for each, separate the client(permission options) entries with spaces, as in the following example:
[root@localhost ~]# vi /etc/exports
/opt/wwwroot 192.168.10.30(ro) 192.168.10.40(rw)
:wq
3) Start the NFS service programs
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# systemctl start nfs
[root@localhost ~]# netstat -anpt | grep rpc
tcp   0  0 0.0.0.0:20048   0.0.0.0:*   LISTEN   1592/rpc.mountd
tcp   0  0 0.0.0.0:52291   0.0.0.0:*   LISTEN   1590/rpc.statd
tcp6  0  0 :::20048        :::*        LISTEN   1592/rpc.mountd
tcp6  0  0 :::46238        :::*        LISTEN   1590/rpc.statd
4) View the NFS shared directories published locally
[root@localhost ~]# showmount -e
Export list for localhost.localdomain:
/opt/wwwroot 192.168.10.0/24
2. Accessing NFS shared resources from the clients
The goal of the NFS protocol is to provide a network file system, so access to an NFS share is mounted with the mount command, with file system type nfs. It can be mounted manually or mounted automatically at boot via the fstab configuration file. For the stability of the cluster system, a dedicated network connection is recommended.
The servers operated on are the node servers 192.168.10.30/40/50/60.
1) Install the rpcbind and nfs-utils packages and set them to start at boot.
Install the packages on each of the four node servers.
[root@localhost ~]# mount /dev/cdrom /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
[root@localhost ~]# rm -rf /etc/yum.repos.d/*

[root@localhost ~]# vi /etc/yum.repos.d/centos.repo
[local]
name=local
baseurl=file:///mnt/
enabled=1
gpgcheck=0
:wq

[root@localhost ~]# yum -y install nfs-utils rpcbind
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# systemctl start rpcbind
With the nfs-utils package installed, you can also view the directories shared by a specified NFS server from the client.
For example:
[root@localhost ~]# showmount -e 192.168.10.70
Export list for 192.168.10.70:
/opt/wwwroot 192.168.10.0/24
2) Manually mount the NFS shared directory
Perform the mount operation as root, mounting the /opt/wwwroot directory shared by the NFS server to the local directory /var/www/html.
[root@localhost ~]# mkdir -p /var/www/html
[root@localhost ~]# mount 192.168.10.70:/opt/wwwroot /var/www/html
[root@localhost ~]# tail -1 /etc/mtab    ## confirm the mount result
192.168.10.70:/opt/wwwroot /var/www/html nfs4 rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.10.30,local_lock=none,addr=192.168.10.70 0 0
[root@localhost ~]# vi /var/www/html/index.html
:wq

View the shared path on the NFS server 192.168.10.70:
[root@localhost ~]# ls /opt/wwwroot/
index.html    ## seeing the web document we just created indicates the share works

3) Automatic mounting via fstab
Modify the fstab configuration file. Note that the file system type is nfs; adding "_netdev" (network device) to the mount parameters is recommended.
[root@localhost ~]# vi /etc/fstab
192.168.10.70:/opt/wwwroot  /var/www/html  nfs  defaults,_netdev  0 0
:wq
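As an optional check not shown in the original, the new fstab entry can be verified without rebooting, assuming the share is not already mounted:
[root@localhost ~]# umount /var/www/html     ## remove the manual mount first, if present
[root@localhost ~]# mount -a                 ## mount everything listed in /etc/fstab
[root@localhost ~]# df -hT /var/www/html     ## confirm the NFS share is mounted again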

Building an LVS load balancing cluster
Address translation mode (LVS-NAT)
1. Prepare the case environment
In a NAT-mode cluster, the LVS load scheduler is the Internet gateway server for all node servers; its external address 10.10.10.10 is also the VIP address of the whole cluster. The LVS scheduler has two network cards, connected to the internal and external networks respectively.

For the LVS load scheduler, route forwarding must be enabled so that the node servers can access the Internet. All node servers and the shared storage server are located in the private network, and their default gateway is the LVS load scheduler's internal address (192.168.10.20).
2. Configure the load scheduler, operating on the 192.168.10.20 server
Restart the server, add a new NIC, and edit its network configuration file.
[root@localhost ~]# cp /etc/sysconfig/network-scripts/ifcfg-ens32 /etc/sysconfig/network-scripts/ifcfg-ens34
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens34
DEVICE=ens34
ONBOOT=yes
IPADDR=10.10.10.10
PREFIX=24
GATEWAY=10.10.10.1
:wq
[root@localhost ~]# systemctl restart network
1) Enable route forwarding
[root@localhost ~]# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
:wq
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
2) Configure the load distribution policy
[root@localhost ~]# ipvsadm -C    ## clear the existing policy
[root@localhost ~]# ipvsadm -A -t 10.10.10.10:80 -s rr
[root@localhost ~]# ipvsadm -a -t 10.10.10.10:80 -r 192.168.10.30:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 10.10.10.10:80 -r 192.168.10.40:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 10.10.10.10:80 -r 192.168.10.50:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 10.10.10.10:80 -r 192.168.10.60:80 -m -w 1
[root@localhost ~]# ipvsadm-save    ## save the policy
-A -t localhost.localdomain:http -s rr
-a -t localhost.localdomain:http -r 192.168.10.30:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.10.40:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.10.50:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.10.60:http -m -w 1
[root@localhost ~]# systemctl enable ipvsadm
3. Node servers, operating on 192.168.10.30/40/50/60
All node servers use the same configuration, including the httpd service port and the web site document content. In practice, the site documents of all nodes can be stored on a shared storage device, eliminating the need for synchronization; during the debugging phase, however, a different page can be used on each node in order to test the load balancing effect.
1) Install httpd and create a test page
[root@localhost ~]# yum -y install nfs-utils rpcbind httpd
[root@localhost ~]# mkdir -p /var/www/html
[root@localhost ~]# mount 192.168.10.70:/opt/wwwroot /var/www/html    ## this step can be skipped when testing the cluster weighting effect
[root@localhost ~]# vi /var/www/html/index.html
:wq
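During debugging, one possible way (a hedged example, not prescribed by the original) to give each node a distinguishable page is to write the node's own address into the document:
[root@localhost ~]# echo "Node 192.168.10.30" > /var/www/html/index.html    ## repeat on each node with its own RIP
With distinct pages, the scheduler's round robin behaviour becomes visible when refreshing the cluster address in a browser.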
2) Enable the httpd service
[root@localhost ~]# systemctl start httpd
[root@localhost ~]# systemctl enable httpd
3) Set the gateway of the four node servers to 192.168.10.20
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens32
....    ## part omitted
IPADDR=192.168.10.30
PREFIX=24
GATEWAY=192.168.10.20
[root@localhost ~]# systemctl restart network
4. Test the LVS cluster
Arrange one or more test machines with their gateway pointing to the LVS external address (10.10.10.10). Accessing http://10.10.10.10 directly with a browser shows the web document provided by a real server; if the four servers' web documents differ, refreshing several times verifies the round robin and weighting effect.
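As a scripted alternative to refreshing the browser by hand (a sketch that assumes each node serves a distinct index.html as above and that the test machine can reach the VIP), repeated requests should cycle through the nodes:
[root@localhost ~]# for i in $(seq 1 8); do curl -s http://10.10.10.10/; done    ## with rr scheduling, the node pages should alternate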
On the LVS load scheduler, the current load distribution can be observed by viewing the node states.
[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  10.10.10.10:80 rr
  -> 192.168.10.30:80      Masq    1      1          0
  -> 192.168.10.40:80      Masq    1      0          1
  -> 192.168.10.50:80      Masq    1      0          1
  -> 192.168.10.60:80      Masq    1      0          1

Direct routing mode (LVS-DR)
1. Prepare the case environment
In a DR-mode cluster, the LVS load scheduler acts as the access entrance of the cluster but not as a gateway; all nodes in the server pool are also connected to the Internet, and the response packets they send to clients do not pass through the LVS load scheduler.
In this way inbound and outbound traffic is handled separately, so both the LVS load scheduler and the nodes must be configured with the VIP address in order to respond to access to the whole cluster. Considering the security of data storage, the shared storage device is placed in the internal private network.

2. Configure the load scheduler
Operate on the 192.168.10.20 load scheduling server.
1) Configure the virtual IP address
Use a virtual interface (ens32:0) to bind the VIP address to the NIC ens32 so that it can respond to cluster access.
The resulting configuration is ens32: 192.168.10.20 and ens32:0: 192.168.10.21.
[root@localhost ~]# cp /etc/sysconfig/network-scripts/ifcfg-ens32 /etc/sysconfig/network-scripts/ifcfg-ens32:0
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens32:0
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens32:0
DEVICE=ens32:0
ONBOOT=yes
IPADDR=192.168.10.21
PREFIX=24
GATEWAY=192.168.10.1
DNS1=202.106.0.20
:wq
[root@localhost ~]# ifup ens32:0
[root@localhost ~]# ifconfig ens32:0
ens32:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.21  netmask 255.255.255.0  broadcast 192.168.10.255
        ether 00:0c:29:2a:25:be  txqueuelen 1000  (Ethernet)
2) Adjust the /proc kernel parameters
For DR cluster mode, since the LVS load scheduler and the nodes need to share the VIP address, the redirect response parameters of the Linux kernel should be turned off.
[root@localhost ~]# vi /etc/sysctl.conf
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens32.send_redirects = 0
:wq
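The text above only edits the file; presumably the settings are then applied in the same way as the route-forwarding change earlier:
[root@localhost ~]# sysctl -p    ## apply the send_redirects settings without rebooting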
3) Configure the load distribution policy
[root@localhost ~]# ipvsadm -C    ## clear the existing policy
[root@localhost ~]# ipvsadm -A -t 192.168.10.21:80 -s rr
[root@localhost ~]# ipvsadm -a -t 192.168.10.21:80 -r 192.168.10.30:80 -g -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.21:80 -r 192.168.10.40:80 -g -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.21:80 -r 192.168.10.50:80 -g -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.10.21:80 -r 192.168.10.60:80 -g -w 1
[root@localhost ~]# ipvsadm-save    ## save the policy
[root@localhost ~]# systemctl enable ipvsadm

3. Configure the node servers
When DR mode is used, the node servers also need to be configured with the VIP address, and the kernel's ARP response parameters must be adjusted to prevent them from updating the MAC address of the VIP and causing conflicts. Apart from that, the web service is configured in much the same way as in the NAT case.
1) Configure the virtual IP address
Each node server also needs to carry the VIP address 192.168.10.21, but this address is used only as the source address of web response packets; the nodes do not listen for client access requests (listening and distribution are handled by the scheduler). Therefore the virtual interface lo:0 is used to carry the VIP address, and a route is added on the local machine so that data destined for the VIP is restricted to the local host, avoiding communication disorder.
[root@localhost ~]# cp /etc/sysconfig/network-scripts/ifcfg-lo /etc/sysconfig/network-scripts/ifcfg-lo:0
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.10.21
NETMASK=255.255.255.255    ## note that the netmask bits are all 1
ONBOOT=yes
NAME=loopback
:wq
[root@localhost ~]# ifup ifcfg-lo:0
[root@localhost ~]# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.10.21  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@localhost ~]# vi /etc/rc.local
/sbin/route add -host 192.168.10.21 dev lo:0    ## just add this line
:wq
[root@localhost ~]# route add -host 192.168.10.21 dev lo:0
2) Adjust the /proc response parameters
[root@localhost ~]# vi /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
:wq
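As above, only the edit is shown; the ARP parameters would presumably be applied with:
[root@localhost ~]# sysctl -p    ## apply the arp_ignore / arp_announce settings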
3) Install httpd and create a test page
[root@localhost ~]# yum -y install nfs-utils rpcbind httpd
[root@localhost ~]# mkdir -p /var/www/html
[root@localhost ~]# mount 192.168.10.70:/opt/wwwroot /var/www/html    ## this step can be skipped when testing the cluster weighting effect
[root@localhost ~]# vi /var/www/html/index.html
:wq
4) Enable the httpd service
[root@localhost ~]# systemctl start httpd
[root@localhost ~]# systemctl enable httpd
Repeat the above configuration on the other node servers

4. Test the LVS cluster
Arrange one or more test machines and access http://192.168.10.21 directly from a browser on the external network; the web document provided by a real server is shown. If the four servers' web documents differ, refreshing several times verifies the round robin and weighting effect.
On the LVS load scheduler, the current load distribution can be observed by viewing the node states.
[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  192.168.10.21:80 rr
  -> 192.168.10.30:80      Route   1      1          0
  -> 192.168.10.40:80      Route   1      0          1
  -> 192.168.10.50:80      Route   1      0          1
  -> 192.168.10.60:80      Route   1      0          1
