CentOS-LVS Load Balancing Cluster

I. Cluster Technology Overview

1. Cluster type

Whatever its type, a cluster consists of at least two node servers and presents only a single access entry point to the outside, so that it behaves like one large computer. According to the goal of the cluster, there are three types.

Load balancing cluster: improves the application system's response capability, handles as many access requests as possible with low latency, and achieves high concurrency and high overall load capacity. For example, "DNS round robin", "application-layer switching", and "reverse proxy" can all be used to build load balancing clusters.

High-availability cluster: improves the reliability of the application system, minimizes interruption time, and ensures service continuity, achieving fault tolerance through high availability. For example, "failover", "dual-machine hot standby", and "multi-machine hot standby" all belong to high-availability clustering.

High-performance computing cluster: increases the CPU computing speed of the application system and expands its hardware resources and analysis capabilities, obtaining computing power comparable to mainframes and supercomputers. For example, cloud computing is a type of high-performance computing cluster.

2. Layered structure of a load balancing cluster

A typical load balancing cluster is composed of the following three layers.

Layer 1: load scheduler. This is the only entry point for accessing the entire cluster system, and it uses the VIP (virtual IP) address shared by all servers, also known as the cluster IP address. Usually a master and a backup scheduler are configured for hot standby; when the master scheduler fails, it is smoothly replaced by the backup scheduler to ensure high availability.

Layer 2: server pool. The application services provided by the cluster are carried by the server pool. Each node has an independent RIP (real IP) address and only processes the client requests distributed to it by the scheduler.

Layer 3: shared storage, which provides stable and consistent file access services for all nodes in the server pool, ensuring the uniformity of the entire cluster. In Linux, shared storage can be a NAS device or a dedicated server that provides NFS (Network File System) sharing.

The typical topology of a load balancing cluster is shown in the accompanying figure (not reproduced here).

3. Load Balancing Mode

Cluster load scheduling can distribute requests based on IP address, port, or content; IP-based load balancing is the most efficient. IP-based load balancing commonly uses one of three modes: network address translation (NAT), IP tunneling (TUN), and direct routing (DR).

Address translation: the NAT mode resembles a firewall's private network structure. The load scheduler acts as the gateway for all server nodes, serving both as the access entry for clients and as the egress through which each node responds to clients. The server nodes use private IP addresses and reside on the same physical network as the load scheduler; security is better than in the other two modes.

IP tunneling: the TUN mode uses an open network structure. The load scheduler serves only as the clients' access entry; each node responds to clients directly through its own Internet connection, without passing reply traffic back through the load scheduler. Server nodes are scattered across different locations on the Internet, have independent public IP addresses, and generally communicate with the load scheduler through dedicated IP tunnels.

Direct routing: the DR mode uses a semi-open network structure, similar to TUN, except that the nodes are not scattered across regions but are located on the same physical network as the scheduler. The load scheduler communicates with the node servers over the local network, so no dedicated IP tunnel is required.

II. LVS Virtual Server Overview

Linux Virtual Server (LVS) is a load balancing project developed for the Linux kernel. LVS is now part of the Linux kernel, built as the ip_vs module, which can be loaded automatically when needed.
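
As a quick sanity check (added here, not part of the original article), the standard lsmod and /proc interfaces can confirm that the ip_vs module is present on the scheduler:

[root@localhost /]# lsmod | grep ip_vs          // confirm the module is loaded
[root@localhost /]# cat /proc/net/ip_vs         // shows the IPVS version and current rule table (only exists once the module is loaded)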

1. Load Scheduling Algorithm of LVS

Round Robin (rr): requests are evenly distributed to each node in the cluster in sequence, regardless of the actual number of connections and system load of the server.

Weighted Round Robin (wrr): distributes access requests in turn according to the processing capability (weight) of each real server. The scheduler can automatically query the load of each node and dynamically adjust its weight.

Least Connections (lc): allocates requests based on the number of established connections on each real server; new requests go preferentially to the node with the fewest connections.

Weighted Least Connections (wlc): when the performance of the server nodes differs greatly, the weights can be adjusted automatically for each real server; nodes with higher weights carry a larger share of the active connections. (A short example of selecting an algorithm follows this list.)
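
As a brief illustration (not from the original text), the algorithm is chosen with the "-s" option when a virtual server is created, and an existing virtual server can be switched to another algorithm with "ipvsadm -E"; the service 172.16.16.172:80 used here is the example created later in this article:

[root@localhost /]# ipvsadm -E -t 172.16.16.172:80 -s wlc     // switch the existing virtual service from rr to weighted least connections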

2. Load the LVS kernel module and install the ipvsadm management tool.

Ipvsadm is an LVS cluster management tool used on the load scheduler. It calls the ip_vs module to add or delete server nodes and view the running status of the cluster.

[root@localhost /]# modprobe ip_vs
[root@localhost /]# rpm -ivh /media/Packages/ipvsadm-1.25-9.el6.i686.rpm

3. Use ipvsadm to manage LVS Clusters

LVS cluster management includes creating virtual servers, adding server nodes, viewing the status of cluster nodes, deleting server nodes, and saving the load distribution policy.

1) create a virtual server

Assume the cluster's VIP address is 172.16.16.172, the load balancing service is provided on TCP port 80, and the scheduling algorithm is round robin; the command syntax is shown below. For the load scheduler, the VIP must be an IP address actually configured on the local machine (the scheduler).

[root@localhost /]# ipvsadm -A -t 172.16.16.172:80 -s rr

In the preceding operation, "-A" indicates adding A virtual server. "-t" indicates the VIP address and TCP port, "-s" is used to specify the load scheduling algorithm-round robin (rr), Weighted Round Robin (wrr), least connections (lc), and weighted least connections (wlc ).

2) Add a server node

Add four server nodes to the virtual server 172.16.16.172, with IP addresses 192.168.7.21-192.168.7.24. The corresponding ipvsadm command syntax is shown below. To keep client connections persistent, add the "-p 60" option, where 60 is the persistence timeout in seconds.

[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -m -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.22:80 -m -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.23:80 -m -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.24:80 -m -w 1

In the preceding operation, "-a" adds a real server, "-t" specifies the VIP address and TCP port, "-r" specifies the RIP address and TCP port, "-m" selects the NAT cluster mode ("-g" for DR mode, "-i" for TUN mode), and "-w" sets the node's weight (a weight of 0 suspends the node).

3) view the cluster node status

You can use the "-L" option to view the LVS virtual server in the list. You can specify to view only one VIP address. The option "-n" will display the address, port, and other information in numbers.

[root@localhost /]# ipvsadm -L -n      // view the node status
[root@localhost /]# ipvsadm -Lnc       // view the current load connections

4) Delete server nodes

To delete a single node from the server pool, use the "-d" option; the target, including the node address and the virtual IP address, must be specified. To delete an entire virtual server, use "-D" and specify only the virtual IP address; no node needs to be given.

[root@localhost /]# ipvsadm -d -r 192.168.7.24:80 -t 172.16.16.172:80
[root@localhost /]# ipvsadm -D -t 172.16.16.172:80      // delete the entire virtual server

5) Save the load distribution policy

Use the export/import tools ipvsadm-save and ipvsadm-restore to save and restore LVS policies; the operation is similar to exporting and importing iptables rules.

[root@localhost /]# ipvsadm-save > /etc/sysconfig/ipvsadm     // save the policy
[root@localhost /]# service ipvsadm stop                      // stop the service (clears the policy)
[root@localhost /]# service ipvsadm start                     // start the service (reloads the saved policy)
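
The original shows only the save direction; as a hedged addition, the companion ipvsadm-restore tool can reload a saved policy directly from that file:

[root@localhost /]# ipvsadm-restore < /etc/sysconfig/ipvsadm     // reload the saved rules without restarting the service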

III. Configure the NFS Shared Storage Service

NFS is a network file system protocol based on TCP/IP transport; it was originally developed by Sun Microsystems.

1. Use NFS to publish shared resources

The NFS service relies on the RPC (Remote Procedure Call) mechanism to map remote resources to local ones. On RHEL 6 systems, the nfs-utils and rpcbind packages must be installed to provide NFS sharing: the former publishes and accesses NFS shares, and the latter provides the RPC support.

1) install the nfs-utils and rpcbind packages.

[root@localhost /]# yum -y install nfs-utils rpcbind
[root@localhost /]# chkconfig nfs on
[root@localhost /]# chkconfig rpcbind on

2) set the shared directory

The NFS configuration file is "/etc/exports", which is empty by default (nothing is shared). When setting up shared resources in the exports file, each record takes the form "shared directory  client address(permission options)".

[root@localhost /]# vim /etc/exports
/var/www/html 192.168.7.0/24(rw,sync,no_root_squash)

The client address can be a host name, an IP address, a subnet address, or a wildcard pattern ("*", "?"). Among the permission options, rw means read-write, sync means synchronous writes, and no_root_squash means that a client accessing the share as root keeps root privileges instead of being downgraded to the nfsnobody user.

To share the same directory with different clients and assign them different permissions, simply list multiple "client(permission options)" entries, separated by spaces.

[root@localhost /]# vim /etc/exports
/var/www/html 192.168.7.1(ro) 192.168.7.10(rw)
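
A side note not in the original: after editing /etc/exports on a running server, the shares can be re-published with the standard exportfs tool instead of restarting the NFS service:

[root@localhost /]# exportfs -rv     // re-export everything listed in /etc/exports and print the result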

3) Start the NFS shared service program

[root@localhost /]# service rpcbind start
[root@localhost /]# service nfs start
[root@localhost /]# netstat -anpt | grep rpcbind

4) view the NFS shared directory released by the Local Machine

[root@localhost /]# showmount -e

2. Access NFS shared resources on the client

Because the NFS protocol is designed to provide a network file system, NFS shares can be mounted for access with the ordinary mount command, using the file system type nfs.

1) install the rpcbind package and enable the rpcbind service.

To access NFS shared resources normally, the client must have the rpcbind package installed and the rpcbind system service enabled. To use the showmount query tool, it is recommended to install the nfs-utils package as well.

[root@localhost /]# yum -y install rpcbind nfs-utils
[root@localhost /]# chkconfig rpcbind on
[root@localhost /]# service rpcbind start
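
Before mounting, the shares published by the NFS server can be queried from the client with showmount (a step added here for illustration; 192.168.7.250 is the storage server address used in the examples below):

[root@localhost /]# showmount -e 192.168.7.250     // list the directories exported by the NFS server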

2) manually mount the NFS Directory

[root@localhost /]# mount 192.168.7.250:/var/www/html /var/www/html

After mounting, accessing the local "/var/www/html" directory on the client is effectively the same as accessing the "/var/www/html" directory on the NFS server.
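
As a quick check (not in the original article), df or mount confirms that the directory is really an NFS mount:

[root@localhost /]# df -hT /var/www/html      // the filesystem type should be reported as nfs
[root@localhost /]# mount | grep nfs          // lists the active NFS mounts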

3) fstab automatic mounting settings

Modify the "/etc/fstab" configuration file and add the NFS shared directory mounting settings. Note that the file system type is set to nfs. We recommend that you add netdev to the mount parameters. If you add soft, the intr parameter can be used for soft mounting and allows you to discard mounting when the network is interrupted. In this way, the client can automatically mount NFS shared resources after each boot.

[root@localhost /]# vim /etc/fstab
...... // part of the content is omitted
192.168.7.250:/var/www/html    /var/www/html    nfs    defaults,_netdev    0 0
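
The fstab entry can be tested without rebooting; a minimal sketch (not from the original), assuming the share is currently mounted from the manual step above:

[root@localhost /]# umount /var/www/html               // unmount the manual mount
[root@localhost /]# mount -a                           // remount everything listed in /etc/fstab
[root@localhost /]# mount | grep /var/www/html         // confirm the NFS share came back via fstab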

IV. Build LVS Load Balancing Cluster Instances

1. Case 1: Build a Load Balancing Cluster in NAT Mode

In the NAT-mode cluster, the LVS load scheduler is the gateway through which all node servers access the Internet. Its Internet-facing address, 172.16.16.172, also serves as the VIP address of the entire cluster. The LVS scheduler has two network interfaces, connected to the internal and external networks respectively. The topology is shown in the accompanying figure (not reproduced here).

On the LVS load scheduler, iptables must be configured with an SNAT forwarding rule for outbound traffic so that the node servers can reach the Internet. All node servers and the shared storage sit on the private network, and their default gateway is the internal address of the LVS load scheduler (192.168.7.254).

1) Configure SNAT forwarding rules

[root@localhost /]# vim /etc/sysctl.conf
...... // part of the content is omitted
net.ipv4.ip_forward = 1
[root@localhost /]# sysctl -p
[root@localhost /]# iptables -t nat -A POSTROUTING -s 192.168.7.0/24 -o eth0 -j SNAT --to-source 172.16.16.172
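
The forwarding rule can be verified with the standard iptables listing options (a check added here, not in the original):

[root@localhost /]# iptables -t nat -nL POSTROUTING     // the SNAT rule for 192.168.7.0/24 should appear in the output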

2) configure a load balancing policy

[root@localhost /]# service ipvsadm stop                // clear the original policy
[root@localhost /]# ipvsadm -A -t 172.16.16.172:80 -s rr
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.21:80 -m -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.22:80 -m -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.23:80 -m -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 192.168.7.24:80 -m -w 1
[root@localhost /]# service ipvsadm save                // save the policy
[root@localhost /]# chkconfig ipvsadm on

3) configure the node Server

All node servers use the same configuration, including the httpd service port and website document content. In fact, the website documents of each node can be stored on the shared storage device, thus eliminating the need for synchronization.

[root@localhost /]# yum -y install httpd
[root@localhost /]# mount 192.168.7.250:/var/www/html /var/www/html
[root@localhost /]# vim /var/www/html/index.html

4) Test the LVS Cluster

Arrange several test machines and access http://172.16.16.172 from the Internet to view the web page content served by the real servers. If each node serves a different page, different clients may see different content. Use ipvsadm on the scheduler to view the current connection load.

[root@localhost /]# ipvsadm -Ln
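
As a simple illustration added here (not part of the original article), repeatedly fetching the VIP with curl from an external test machine makes the round-robin distribution visible, assuming each node serves a different index.html:

[root@localhost /]# curl http://172.16.16.172     // run several times; the returned page should rotate between the nodes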

2. Case 2: Build a Load Balancing Cluster in DR Mode

In the DR-mode cluster, the LVS load scheduler serves as the access entry to the cluster but not as the gateway. All nodes in the server pool are connected to the Internet, and the web response packets sent to clients do not need to pass back through the LVS load scheduler.

Because outbound traffic is handled separately in this way, the LVS load scheduler and all node servers must each be configured with the VIP address so that they can respond to access directed at the whole cluster. For data security, the shared storage device is placed on the private network.

1) Configure the virtual IP address (VIP) of the scheduler

Bind the VIP address to the virtual interface eth0:0 of the physical adapter eth0 so that it can respond to cluster access. After configuration, eth0 is 172.16.16.173/24 and eth0:0 is 172.16.16.172/24.

[root@localhost /]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-eth0 ifcfg-eth0:0
[root@localhost network-scripts]# vim ifcfg-eth0:0
...... // part of the content is omitted
DEVICE=eth0:0
ONBOOT=yes
IPADDR=172.16.16.172
NETMASK=255.255.255.0
[root@localhost network-scripts]# service network restart
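
A quick check added here (not in the original): ifconfig or ip confirms that the VIP is now bound on the scheduler:

[root@localhost network-scripts]# ifconfig eth0:0       // should show 172.16.16.172 with netmask 255.255.255.0
[root@localhost network-scripts]# ip addr show eth0     // the VIP appears as a secondary address on eth0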

2) adjust/proc response parameters

In the DR cluster mode, the LVS load scheduler and the node servers share the same VIP address. To avoid abnormal ARP resolution on the network, the Linux kernel's redirect responses should be disabled on the scheduler.

[root@localhost /]# vim /etc/sysctl.conf
...... // part of the content is omitted
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.eth0.send_redirects = 0
[root@localhost /]# sysctl -p

3) configure a load balancing policy

[root@localhost /]# service ipvsadm stop
[root@localhost /]# ipvsadm -A -t 172.16.16.172:80 -s rr
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 172.16.16.177:80 -g -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 172.16.16.178:80 -g -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 172.16.16.179:80 -g -w 1
[root@localhost /]# ipvsadm -a -t 172.16.16.172:80 -r 172.16.16.180:80 -g -w 1
[root@localhost /]# service ipvsadm save
[root@localhost /]# chkconfig ipvsadm on

4) Configure the virtual IP address (VIP) of the node servers

Each node server must also be configured with the VIP address 172.16.16.172. However, this address is used only as the source address of outgoing web response packets; the nodes do not need to listen for client access requests on it (requests are listened for and distributed by the scheduler). Therefore, the virtual interface lo:0 is used to carry the VIP, and a local route record is added so that traffic destined for the VIP stays on the local machine, avoiding communication confusion.

[root@localhost /]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@localhost network-scripts]# vim ifcfg-lo:0
...... // part of the content is omitted
DEVICE=lo:0
ONBOOT=yes
IPADDR=172.16.16.172
NETMASK=255.255.255.255
[root@localhost network-scripts]# service network restart
[root@localhost network-scripts]# vim /etc/rc.local
...... // part of the content is omitted
/sbin/route add -host 172.16.16.172 dev lo:0
[root@localhost network-scripts]# route add -host 172.16.16.172 dev lo:0
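
As an added check on each node (not from the original), confirm the loopback alias and the host route:

[root@localhost network-scripts]# ifconfig lo:0                     // should show 172.16.16.172 with netmask 255.255.255.255
[root@localhost network-scripts]# route -n | grep 172.16.16.172     // the host route via lo should be present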

5) adjust/proc response parameters

[root@localhost /]# vim /etc/sysctl.conf
...... // part of the content is omitted
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@localhost /]# sysctl -p
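
The effective values can be read back with sysctl (a verification step added here, not in the original):

[root@localhost /]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce     // both should report the values set above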

6) configure the node Server

[root@localhost /]# yum -y install httpd
[root@localhost /]# mount 192.168.7.250:/var/www/html /var/www/html
[root@localhost /]# vim /var/www/html/index.html

7) Test the LVS Cluster

Arrange several test machines and access http://172.16.16.172 from the Internet to view the web page content served by the real servers. If each node serves a different page, different clients may see different content. Use ipvsadm on the scheduler to view the current connection load.

[root@localhost /]# ipvsadm -Ln

This article is from "Deng Qi's Blog"; please be sure to keep this source: http://dengqi.blog.51cto.com/5685776/1307880
