CentOS 6.4 LVS load balancing with the VS/NAT mechanism (one master and one backup Director provide redundancy)


I. LVS principles

1. LVS stands for Linux Virtual Server. It is an open-source project started by Dr. Wensong Zhang of China. Since Linux kernel 2.6 it has been part of the mainline kernel; on earlier kernels it had to be compiled in manually. LVS is mainly used for multi-server load balancing. It works at the network layer and implements high-performance, highly available server clusters. It is cheap: many low-cost servers can be combined into one powerful virtual server. It is easy to use and configure, offers a variety of load balancing methods, and is stable and reliable: even if one server in the cluster stops working normally, the overall service is not affected. Its scalability is also very good.

2. LVS can be divided into three parts:

Load Balancer: this is the core of LVS. Like the Controller in a website's MVC model, it distributes client requests to the servers in the layer below according to a chosen algorithm, without processing the requests itself. It also monitors the layer below: if a server there stops working normally, it is automatically removed and added back after it recovers. This layer consists of one or more Director Servers.

Server Array: this layer handles the actual services. It can be composed of web servers, mail servers, FTP servers, DNS servers, and so on. Note that a Director Server in the layer above can also double as a Real Server.

Shared Storage: provides shared data to the layer above and keeps that data consistent across the Real Servers.

3. LVS load balancing mechanism

LVS works at the network layer, so it is very efficient compared with other load balancing solutions such as round-robin DNS resolution, application-layer load scheduling, or client-side scheduling. LVS implements load balancing by manipulating IP packets; IPVS is its concrete implementation module. IPVS is installed on the Director Server, and a virtual IP address (VIP) is configured on the Director Server for external access. When a user request addressed to the VIP arrives at the Director Server, the Director Server selects a Real Server according to certain rules, and the response is returned to the client after processing completes. These steps raise some concrete questions, such as how a specific Real Server is selected and how the Real Server's data is returned to the client. IPVS offers three mechanisms:

1. VS/NAT (Virtual Server via Network Address Translation) implements the virtual server with network address translation. When a request arrives, a program on the Director Server rewrites the destination address in the packet (the VIP) to the address of a specific Real Server, rewrites the port to the Real Server's port, and then forwards the packet to the Real Server. After the Real Server processes the request, it must return the response through the Director Server, which rewrites the source address and source port in the packet back to the VIP address and port before finally sending the response out. Clearly both requests and responses must pass through the Director Server, so under heavy traffic the Director Server becomes the bottleneck.

2. VS/TUN (Virtual Server via IP Tunneling) implements the virtual server with IP tunneling. It is basically the same as VS/NAT, except that the Real Server returns data directly to the client without going through the Director Server, which greatly reduces the load on the Director Server.

3. VS/DR (Virtual Server via Direct Routing) implements the virtual server with direct routing. Compared with the previous two methods, VS/DR forwards packets differently: it rewrites the MAC address of the request frame and sends the request to the Real Server, and the Real Server returns the response directly to the client, eliminating the IP tunneling overhead of VS/TUN. This method has the highest performance of the three load scheduling mechanisms, but the Director Server and the Real Servers must each have a network card on the same physical network segment.
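
For reference, each mechanism corresponds to one of ipvsadm's forwarding flags: -m (masquerading) for VS/NAT, -i (ipip tunneling) for VS/TUN, and -g (gatewaying) for VS/DR. As a minimal sketch, the VS/NAT service that keepalived builds for us later in this article could be created by hand like this (illustration only, using the addresses from this article; keepalived normally manages these rules itself):

shell> ipvsadm -A -t 10.10.54.151:80 -s rr # add a TCP virtual service on the VIP, round-robin scheduler
shell> ipvsadm -a -t 10.10.54.151:80 -r 172.16.50.157:80 -m # attach real server 1 in NAT mode
shell> ipvsadm -a -t 10.10.54.151:80 -r 172.16.50.159:80 -m # attach real server 2 in NAT mode
shell> ipvsadm -Ln # list the resulting virtual server table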

II. Host allocation

1. Host allocation

Note that in what follows the 10.10.54.0/24 block is treated as the public network and the 172.16.0.0 range as the private network (both are actually private IP ranges; this is only a test, not a real production environment).
The advantage of the VS/NAT mechanism is that the Real Servers can use private IP addresses; only the Director Server needs a public IP address.

We plan to configure four machines: two Director Servers (master and backup) and two Real Servers.

LVS is installed on the Director Servers and distributes user requests to the Real Servers according to certain rules. The external IP address of the entire LVS cluster is the VIP: 10.10.54.151.
Director Server -- master
eth0: 10.10.54.155/24
eth1: 172.16.50.155/24
VIP: 10.10.54.151

Director Server -- backup
eth0: 10.10.54.156/24
eth1: 172.16.50.156/24
VIP: 10.10.54.151

2. Real Servers
Real Server 1
IP: 172.16.50.157, gateway 172.16.50.254
Real Server 2
IP: 172.16.50.159, gateway 172.16.50.254
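
On CentOS 6, the Real Servers' address and gateway are typically set in the usual ifcfg files. The gateway must be 172.16.50.254, the private-side floating address carried by the Directors (configured as a VIP in keepalived below). A minimal sketch for Real Server 1, assuming its interface is eth0:

shell> cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.50.157
NETMASK=255.255.255.0
GATEWAY=172.16.50.254
shell> service network restart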

3. Working principle:
(1) The user sends a packet to the VIP 10.10.54.151. The packet's source IP is the user's public IP address, and its destination IP is 10.10.54.151.
(2) When the packet arrives at the Director Server, the LVS software on the Director picks a Real Server according to the load balancing algorithm, rewrites the packet's destination IP to that Real Server's private IP, and then sends the packet on.
(3) Because the Director Server's eth1 is on the same network segment as the Real Servers, the Director can deliver this packet successfully.
(4) After the Real Server processes the packet, it can only send the response to its gateway, which is the Director Server.
(5) When the Director Server receives the response, it rewrites the source IP of the response packet to the VIP and then sends it to the user.
(6) This completes one round trip. From the user's point of view, it has been communicating with the VIP the whole time.
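
Once the whole setup below is in place, this flow can be observed directly; a quick sanity-check sketch (run the curl from any client on the 10.10.54.0/24 side):

shell> curl http://10.10.54.151/ # request goes to the VIP
shell> ipvsadm -Lnc # on the Director: each connection shows client -> VIP -> chosen real server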

4. Enable kernel forwarding on both Director Servers
shell> vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
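
Apply the change without rebooting and confirm the flag is set:

shell> sysctl -p
shell> cat /proc/sys/net/ipv4/ip_forward
1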

III. Install LVS on both Director Servers

###### [Master and backup]
1. shell> yum -y install wget libnl* popt* gcc.x86_64 gcc-c++.x86_64 gcc-objc++.x86_64 kernel-devel.x86_64 make popt-static.x86_64

2. Download
shell> wget http://www.keepalived.org/software/keepalived-1.2.9.tar.gz
shell> wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz

3. Compile and install ipvsadm
shell> tar xvf ipvsadm-1.26.tar.gz
shell> cd ipvsadm-1.26
shell> make && make install
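
Before moving on, it is worth confirming that ipvsadm runs and can reach the kernel's IPVS module; listing the (still empty) virtual server table checks both:

shell> ipvsadm -Ln # prints the IPVS version banner and an empty rule table if everything is in place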

shell> yum install -y net-snmp.x86_64 net-snmp-devel.x86_64
shell> tar xvf keepalived-1.2.9.tar.gz
shell> cd keepalived-1.2.9
shell> ./configure --prefix=/usr/local/keepalived --enable-snmp --sysconfdir=/etc
shell> make && make install
shell> cp /usr/local/keepalived/sbin/keepalived /sbin/
shell> cp /usr/local/keepalived/bin/genhash /bin/
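
Because --sysconfdir=/etc was passed to configure, make install also drops a SysV init script under /etc/rc.d/init.d/. Assuming it landed there, it can be wired into CentOS 6 service management:

shell> chmod +x /etc/init.d/keepalived
shell> chkconfig --add keepalived
shell> chkconfig keepalived on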

###### [Master]
4. Configure keepalived
shell> vim /etc/keepalived/keepalived.conf

# keepalived.conf consists of three parts:

global_defs

vrrp_instance

virtual_server

----------------------------------------------------------------

global_defs {
    notification_email {
        lij@ssr.com
    }
    notification_email_from lij@ssr.com
    smtp_server lij@ssr.com
    smtp_connect_timeout 30
    router_id LVS_MASTER2
}

vrrp_instance VI_1 {
    state MASTER                # this node is the master
    interface eth0              # VRRP runs on the eth0 interface
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.54.151/24 dev eth0 label eth0:1       # public-side VIP
        172.16.50.254/24 dev eth1 label eth1:1      # private-side VIP, the real servers' gateway
    }
}

virtual_server 10.10.54.151 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    protocol TCP

    real_server 172.16.50.157 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 172.16.50.159 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

------------------------------------------------------------
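
The backup Director uses an almost identical keepalived.conf; the conventional differences (assumptions here, since the backup's file is not shown in this excerpt) are state BACKUP, a lower priority such as 90, and its own router_id. After the configuration is in place, start keepalived on both Directors and confirm that the master holds the VIPs and has built the virtual server table:

shell> service keepalived start
shell> ip addr show eth0 | grep 10.10.54.151 # the master should carry the public VIP on eth0:1
shell> ipvsadm -Ln # both real servers should appear under 10.10.54.151:80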
