LVS + ipvsadm + keepalived Load Balancer Installation and Deployment

Source: Internet
Author: User
Tags: install, openssl

LVS Chinese site http://zh.linuxvirtualserver.org/

I. Introduction to LVS

LVS is short for Linux Virtual Server: a virtual server cluster system built on Linux.

LVS is open source software, started in May 1998 by Dr. Wensong Zhang, then a graduate student at the National University of Defense Technology, to provide simple load balancing on the Linux platform.

Purpose:

Implement a high-performance, highly available server using cluster technology and the Linux operatingating system, with:

Excellent scalability

Excellent reliability

Good manageability

II. Technical Principles

The LVS cluster uses IP load balancing and content-based request distribution. The scheduler has a high throughput rate and distributes requests evenly across the servers; it also automatically screens out failed servers, so that a group of servers behaves as a single high-performance, highly available virtual server. The structure of the whole cluster is transparent to clients, and neither client nor server programs need modification. To achieve this, transparency, scalability, high availability, and manageability must all be considered at design time.


The LVS cluster adopts three-layer structure and its main components are:

1) Load scheduler (Director Server): at the very front of the cluster, one or more load schedulers distribute client requests to a group of servers, while clients see the service as coming from a single IP address (the virtual IP address).

2) Server pool: the group of servers (real servers) that actually execute client requests, running services such as Web, mail, FTP, and DNS.

3) Shared storage: a storage area shared by the server pool, which makes it easy for all servers in the pool to hold the same content and provide the same service.

III. Characteristics of the LVS Cluster

1. IP Load Balancing Technology

Load balancing can be implemented in many ways: DNS round robin, client-side scheduling, scheduling based on application-layer load, and scheduling based on IP address. Of these, IP load balancing is the most efficient.

The IP load balancing technology of LVS is implemented by the IPVS module, the core software of an LVS cluster system. Its main function is:

IPVS is installed on the Director Server, where a virtual IP address is created; users must access the service through this virtual IP, generally called the LVS VIP (Virtual IP). Requests first reach the load scheduler through the VIP, and the scheduler then picks a service node from the real server list to answer each request.

When a user request arrives at the load scheduler, how the scheduler forwards it to the real server that provides the service, and how the real server returns data to the user, are the key techniques implemented by IPVS. IPVS provides three load balancing mechanisms: NAT, TUN, and DR, detailed below.

CIP (client's IP address): the public IP used by the client.

VIP (virtual IP address): the address on the Director used to provide service to clients.

RIP (real IP address): the IP used by a cluster node (a back-end server that actually serves requests).

DIP (Director's IP address): the address the Director uses to communicate with the real servers (RIPs).

a) VS/NAT (Virtual Server via Network Address Translation)

The virtual server is implemented with NAT. When a user request reaches the scheduler, the scheduler rewrites the destination address of the request packet (the virtual IP) to the address of the selected real server, changes the destination port to that server's port, and forwards the request to the selected real server. When the real server has processed the request and returns data to the user, the reply again passes through the load scheduler, which rewrites the source address and source port back to the virtual IP and its port before sending the data on to the user, completing the scheduling cycle.

As can be seen, in NAT mode both request and response packets must be rewritten by the Director Server, so as the number of user requests grows, the scheduler's processing capacity becomes a bottleneck.

Principle Brief:

Destination address translation: the Director directs every client request to a back-end real server according to the request and the scheduling algorithm.

Packet Address translation process:

Characteristics:

Director and real servers must be in the same network segment;

RIP is generally a private address, used only for communication between cluster nodes;

The Director handles every request and response between clients and real servers, so it carries a heavy load;

Every real server's gateway must point to the DIP so that responses to clients flow back through the Director;

The Director can remap network ports, i.e. the front end can use standard ports while the back end uses non-standard ones;

Back-end real servers can run any operating system;

The Director may become a system bottleneck.
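As a concrete illustration, the NAT flow above can be expressed as ipvsadm commands. This is only a sketch with hypothetical addresses (192.168.1.100 as the VIP, 10.0.0.11 and 10.0.0.12 as RIPs); by default it prints the commands for review, and setting DRY_RUN=0 would execute them (root required):

```shell
#!/bin/sh
# Sketch of an LVS-NAT service on the Director; all addresses are hypothetical.
VIP=192.168.1.100      # virtual IP that clients connect to (assumed)
RIP1=10.0.0.11         # real servers on the private segment (assumed)
RIP2=10.0.0.12
DRY_RUN=${DRY_RUN:-1}  # default: print the commands instead of executing them

run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_lvs_nat() {
  run ipvsadm -C                                       # clear existing rules
  run ipvsadm -A -t "$VIP:80" -s wrr                   # add virtual service, weighted round robin
  run ipvsadm -a -t "$VIP:80" -r "$RIP1:8080" -m -w 2  # -m = NAT mode; back-end port may differ
  run ipvsadm -a -t "$VIP:80" -r "$RIP2:8080" -m -w 1
  run sysctl -w net.ipv4.ip_forward=1                  # NAT mode needs IP forwarding on the Director
}
setup_lvs_nat
```

Note how NAT mode is the only one of the three where front-end and back-end ports can differ, matching the port-remapping characteristic above.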

b) VS/TUN (Virtual Server via IP Tunneling)

The virtual server is implemented with IP tunneling. Connection scheduling and management are the same as in VS/NAT, but packet forwarding differs: in VS/TUN mode the scheduler uses an IP tunnel to forward user requests to a real server, and that real server answers the user directly without going back through the front-end scheduler. There is also no requirement on the geographic location of the real servers: they can be in the same network segment as the Director Server or on an independent network. Because the scheduler only processes incoming requests, the throughput of the cluster system is greatly improved.

Packet Transfer process:

Characteristics:

The network structure is like DR, except that the Director and real servers can be in different networks, enabling remote disaster tolerance. Forwarding from DIP to RIP is tunnel-based: an outer IP header (source DIP, destination RIP) is additionally encapsulated around the original packet.

Director and real servers therefore need not be in the same physical network;

RIP must not be a private address;

The Director only processes incoming packets;

The real server returns packets directly to the client, so its default gateway cannot be the DIP and must be the address of a router on the public network;

The Director cannot do port remapping;

Only operating systems that support the IP tunneling protocol can act as real servers.
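On the real-server side, TUN mode requires loading the ipip module and binding the VIP to the tunnel device. A minimal sketch, assuming a hypothetical VIP; by default it prints the commands, and DRY_RUN=0 would apply them as root:

```shell
#!/bin/sh
# Sketch of VS/TUN real-server configuration; the VIP is hypothetical.
VIP=192.168.1.100
DRY_RUN=${DRY_RUN:-1}  # default: print commands instead of executing them
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

tun_realserver() {
  run modprobe ipip                                # load the IP-in-IP tunnel module
  run ip addr add "$VIP/32" dev tunl0              # bind the VIP to the tunnel device
  run ip link set tunl0 up
  run sysctl -w net.ipv4.conf.tunl0.rp_filter=0    # relax reverse-path filtering on the tunnel
  run sysctl -w net.ipv4.conf.all.rp_filter=0
}
tun_realserver
```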

c) VS/DR (Virtual Server via Direct Routing)

The virtual server is implemented with direct routing. Connection scheduling and management are the same as in VS/NAT and VS/TUN, but packet forwarding differs: VS/DR rewrites the MAC address of the request packet and sends it to the real server, and the real server returns the response directly to the client, eliminating the IP tunneling overhead of VS/TUN.

This gives the best performance of the three load scheduling mechanisms, but it requires that the Director Server and the real servers each have a NIC attached to the same physical network segment.

Packet Transfer process:

Characteristics:

The VIP must be configured on each real server as a hidden address: it is used only as the source address when answering client requests and for nothing else.

Cluster nodes and the Director must be in the same network;

RIP need not be a private address;

The Director only processes incoming requests;

Real servers must not use the DIP as their gateway; they use a router on the public network instead;

The Director cannot do port remapping;

Most operating systems can be used as real servers, except Windows;

LVS-DR mode can handle more requests than LVS-NAT.

DR is the mode most commonly used in production environments. Advantages:

RIP is a public address, so administrators can connect to the real servers remotely to check their working status;

If the Director goes down, service can continue by modifying the DNS A record to point at a RIP.
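In DR mode each real server must carry the VIP as a hidden address on its loopback interface and must not answer ARP queries for it, as the characteristics above require. A minimal sketch with a hypothetical VIP; by default it prints the commands, and DRY_RUN=0 would apply them as root:

```shell
#!/bin/sh
# Sketch of VS/DR real-server configuration: hidden VIP plus ARP suppression.
VIP=192.168.1.100      # hypothetical VIP
DRY_RUN=${DRY_RUN:-1}  # default: print commands instead of executing them
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

dr_realserver() {
  run sysctl -w net.ipv4.conf.lo.arp_ignore=1     # reply to ARP only for addresses configured
  run sysctl -w net.ipv4.conf.all.arp_ignore=1    # on the interface that received the query
  run sysctl -w net.ipv4.conf.lo.arp_announce=2   # use the best local source address
  run sysctl -w net.ipv4.conf.all.arp_announce=2  # when sending ARP requests
  run ip addr add "$VIP/32" dev lo label lo:0     # hidden VIP on loopback
  run ip route add "$VIP" dev lo                  # route the VIP locally
}
dr_realserver
```

With these settings only the Director answers ARP for the VIP, so clients' packets reach the Director first while the real servers can still use the VIP as the source address of their direct replies.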

2. Load Scheduling algorithm

As mentioned above, the load scheduler dynamically selects a real server to answer each user request based on the load of each server. How is that dynamic selection implemented? That is the job of the load scheduling algorithm. According to different service requirements and server configurations, IPVS implements the following eight scheduling algorithms. The four most commonly used are described in detail here; the remaining four are only sketched, so please refer to other material for details.

a) RR round robin scheduling (Round Robin)

Round robin scheduling is also called 1:1 scheduling: the scheduler distributes external requests in turn, one by one, to each real server in the cluster. The algorithm treats every real server equally, regardless of the server's actual load or connection state.

b) WRR weighted round robin scheduling (Weighted Round Robin)

Weighted round robin dispatches requests according to the differing processing capacities of the real servers. Each real server can be given a different weight: a more capable server gets a higher weight and a less capable one a lower weight, so that the more capable servers handle more traffic and server resources are used fully and rationally. The scheduler can also query the load of each real server automatically and adjust its weight dynamically.
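To make the weight proportionality concrete, here is a small, hypothetical shell simulation: server A with weight 2 and server B with weight 1. This naive slot-expansion scheduler is only an illustration, not the smoother algorithm IPVS actually implements:

```shell
#!/bin/sh
# Naive weighted-round-robin illustration: expand each server into weight-many
# slots, then hand out requests by cycling through the slots.
servers="A:2 B:1"   # hypothetical servers and weights
slots=""
for s in $servers; do
  name=${s%%:*}; w=${s##*:}
  i=0
  while [ "$i" -lt "$w" ]; do slots="$slots $name"; i=$((i+1)); done
done

order=""
n=0
while [ "$n" -lt 6 ]; do   # dispatch six requests
  set -- $slots            # positional parameters = slot list (A A B)
  idx=$(( n % $# + 1 ))
  eval "pick=\${$idx}"
  order="$order$pick"
  n=$((n+1))
done
echo "$order"              # A receives twice as many requests as B
```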

c) LC least connection scheduling (Least Connections)

The least connection algorithm dynamically dispatches each request to the server with the fewest established connections. If the real servers in the cluster have similar performance, this algorithm balances load well.

d) WLC weighted least connection scheduling (Weighted Least Connections)

Weighted least connection scheduling is a superset of least connection scheduling. Each service node's weight represents its processing capacity (the default weight is 1), and the system administrator can adjust the weights dynamically. When allocating new connections, the scheduler tries to keep each node's number of established connections proportional to its weight.

e) WRR weighted round robin (Weighted Round-Robin): each real server is assigned a weight; the larger the weight, the more requests it receives.

f) DH destination hashing (Destination Hashing): requests for the same destination IP address are dispatched to the same real server (the destination address is kept unchanged).

g) SH source hashing (Source Hashing): requests from the same source IP go to the same real server, so the Director can ensure that response packets pass back through the same router or firewall the request came through.

h) SED shortest expected delay (Shortest Expected Delay): like WLC, but only active connections are counted.

IV. Installation and Deployment

1. Basic Environment

# yum install openssl openssl-devel openssh-clients gcc libnl* popt*

Linux kernels from version 2.6 on support the LVS feature by default.

You can check whether the kernel already includes the IPVS modules for LVS with the following command:

# modprobe -l | grep ipvs
/lib/modules/2.6.9-42.elsmp/kernel/net/ipv4/ipvs/ip_vs_rr.ko
/lib/modules/2.6.9-42.elsmp/kernel/net/ipv4/ipvs/ip_vs_sh.ko

If you see output similar to the above, the system kernel already supports the IPVS module, and you can install the IPVS management software.
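On modern kernels `modprobe -l` no longer exists, so here is a portable alternative check. The function below reports one of three states ("loaded", "available", or "missing") and works without root:

```shell
#!/bin/sh
# Report the IPVS module status: "loaded" (in the running kernel), "available"
# (present on disk but not loaded), or "missing".
check_ipvs() {
  if lsmod 2>/dev/null | grep -q '^ip_vs'; then
    echo loaded
  elif modinfo ip_vs >/dev/null 2>&1; then
    echo available
  else
    echo missing
  fi
}
check_ipvs
```

"available" is usually enough: the ip_vs module is loaded automatically the first time ipvsadm configures a service.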

2. Install the ipvsadm management software from source on the Director Server

Download the IPVS management software ipvsadm from http://www.linuxvirtualserver.org/software/ipvs.html

# tar zxvf ipvsadm-1.24.tar.gz
# cd ipvsadm-1.24
# make && make install
# ipvsadm --help

Modify the startup script /etc/rc.d/init.d/ipvsadm, then start the LVS service:

# ipvsadm

Common ipvsadm syntax and options:

ipvsadm -A|E -t|u|f virtual-service-address:port [-s scheduler] [-p [timeout]] [-M netmask]
ipvsadm -D -t|u|f virtual-service-address
ipvsadm -C
ipvsadm -R
ipvsadm -S [-n]
ipvsadm -a|e -t|u|f virtual-service-address:port -r real-server-address:port [-g|i|m] [-w weight]
ipvsadm -d -t|u|f virtual-service-address -r real-server-address
ipvsadm -L|l [options]
ipvsadm -Z [-t|u|f virtual-service-address]
ipvsadm --set tcp tcpfin udp
ipvsadm -h

-A --add-service      add a new virtual server record to the kernel's virtual server table, i.e. create a new virtual server
-E --edit-service     edit a virtual server record in the kernel's virtual server table
-D --delete-service   delete a virtual server record from the kernel's virtual server table
-C --clear            clear all records in the kernel's virtual server table
-R --restore          restore virtual server rules
-S --save             save the virtual server rules, in a format readable by the -R option
-a --add-server       add a new real server record to a virtual server, i.e. add a new real server
-e --edit-server      edit a real server record within a virtual server record
-d --delete-server    delete a real server record from a virtual server record
-L|-l --list          display the kernel's virtual server table
-L --timeout          display the "tcp tcpfin udp" timeout values, e.g. ipvsadm -L --timeout
-L --daemon           display the status of the synchronization daemon, e.g. ipvsadm -L --daemon
-L --stats            display statistics, e.g. ipvsadm -L --stats
-L --rate             display rate information, e.g. ipvsadm -L --rate
-L --sort             sort the output by virtual server and real server, e.g. ipvsadm -L --sort
-Z --zero             zero the counters in the virtual server table
--set tcp tcpfin udp  set the connection timeout values
-t                    the virtual server provides a TCP service; followed by [virtual-service-address:port] or [real-server-ip:port]
-u                    the virtual server provides a UDP service; followed by [virtual-service-address:port] or [real-server-ip:port]
-f fwmark             the service is identified by an iptables firewall mark
-s scheduler          the scheduling algorithm LVS uses, one of rr|wrr|lc|wlc|lblc|lblcr|dh|sh; the default is wlc
-p timeout            persistence on a real server: multiple requests from the same user are handled by the same real server; typically used for dynamic content; the default timeout is 300 seconds, e.g. -p 600 keeps a session for 600 seconds
-r                    the IP address of a real server; followed by [real-server-ip:port]
-g --gatewaying       direct routing mode DR (the default LVS mode)
-i --ipip             tunnel mode TUN
-m --masquerading     NAT mode
-w --weight           the weight of the real server
-c --connection       display current LVS connections, e.g. ipvsadm -Lnc

3. Install keepalived

# yum install daemon
# tar zxvf keepalived-1.2.13.tar.gz
# cd keepalived-1.2.13
# ./configure
# make && make install
# cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/    # startup script
# mkdir -pv /etc/keepalived
# cp /usr/local/sbin/keepalived /usr/sbin/

Edit the master configuration file /etc/keepalived/keepalived.conf, then start the service:

# service keepalived start

A running keepalived has three processes: the parent process (memory management and child process supervision), the VRRP child process, and the healthchecker child process.

Enable start at boot:

# chkconfig --add keepalived
# chkconfig keepalived on

The master configuration file keepalived.conf in detail:

#  Description
#  VIP           101.251.96.136   10.10.10.229
#  real server1  101.251.96.141   10.10.10.201
#  real server2  101.251.96.139   10.10.10.229

! Configuration File for keepalived

# Global definitions
global_defs {
   notification_email {
      [email protected]       # alarm mail addresses, one per line; the local sendmail
      [email protected]       # service must be running for mail alerts to work
   }
   notification_email_from [email protected]   # sender address of alert mail
   smtp_server 127.0.0.1          # SMTP server address
   smtp_connect_timeout 30        # SMTP connection timeout
   router_id LVS_DEVEL            # identity of this keepalived server; appears in mail headers
}

vrrp_instance VI_1 {              # external (public) network instance
   state MASTER                   # MASTER = primary server, BACKUP = standby
   interface eth1                 # interface that HA monitoring runs on
   virtual_router_id 1            # VRRP virtual router identifier; MASTER and BACKUP of the
                                  # same vrrp_instance must use the same value
   priority 101                   # the higher the number, the higher the priority; within one
                                  # vrrp_instance the MASTER's priority must exceed the BACKUP's
   advert_int 1                   # MASTER/BACKUP synchronization-check interval, in seconds
   authentication {               # authentication type and password
      auth_type PASS              # PASS or AH
      auth_pass xde.146_5%DJYP    # MASTER and BACKUP must use the same password to communicate
   }
   virtual_ipaddress {            # one or more virtual IP addresses, one per line
      101.251.96.136              # public virtual IP
   }
}

# Virtual server definition
virtual_server 101.251.96.136 80 {   # VIP and service port, separated by a space
   delay_loop 2                  # health-check interval, in seconds
   lb_algo wrr                   # scheduling algorithm, here weighted round robin
   lb_kind DR                    # forwarding mode: NAT, TUN or DR
   persistence_timeout 0         # session persistence, in seconds; useful for dynamic pages and
                                 # session sharing in a cluster. A user's requests keep going to
                                 # the same node until the persistence time expires; it is an
                                 # idle timeout, so as long as the user keeps operating within
                                 # that window the session stays on the same node.
   protocol TCP                  # forwarding protocol: TCP or UDP
   nat_mask 255.255.255.240
   gateway 101.251.96.129

   real_server 101.251.96.141 80 {   # service node 1: real IP and port, space separated
      weight 1                   # higher weight = larger share of the load; give capable
                                 # servers a higher weight and weaker servers a lower one
      TCP_CHECK {                # real server health check, values in seconds
         connect_timeout 3       # treat no response within 3 seconds as a failure
         nb_get_retry 3          # number of retries
         delay_before_retry 3    # interval between retries
      }
   }
   real_server 101.251.96.139 80 {   # service node 2
      weight 1
      TCP_CHECK {
         connect_timeout 3
         nb_get_retry 3
         delay_before_retry 3
      }
   }
}

# The following defines the intranet VIP and its real servers; the structure mirrors
# the sections above, with the IP addresses changed accordingly.
vrrp_instance VI_2 {             # internal network instance
   state BACKUP
   interface eth0
   virtual_router_id 1
   priority 100
   advert_int 1
   authentication {
      auth_type PASS
      auth_pass xde.146_5%djyp
   }
   virtual_ipaddress {
      10.10.10.229
   }
}

virtual_server 10.10.10.229 80 {
   delay_loop 2
   lb_algo wrr
   lb_kind DR
   nat_mask 255.255.255.0
   gateway 10.10.10.1
   persistence_timeout 0
   protocol TCP

   real_server 10.10.10.201 80 {
      weight 1
      TCP_CHECK {
         connect_timeout 3
         nb_get_retry 3
         delay_before_retry 3
      }
   }
   real_server 10.10.10.209 80 {
      weight 1
      TCP_CHECK {
         connect_timeout 3
         nb_get_retry 3
         delay_before_retry 3
      }
   }
}

When writing keepalived.conf, pay special attention to its syntax: keepalived does not validate the configuration file at startup, and it will even start without any configuration file at all, so you must make sure the configuration is correct yourself.

By default, keepalived looks for /etc/keepalived/keepalived.conf at startup; if your configuration file is placed elsewhere, use the "keepalived -f" parameter to specify its path.

Once keepalived.conf is written, copy it to the same path on the standby Director Server and make two simple modifications:

Change "state MASTER" to "state BACKUP"

Change the priority (101 on the master) to a smaller value, for example "priority 80"

Finally, the cluster's real server nodes must also be configured to work with the Director Server and to suppress ARP for the VIP; the script for this was covered in an earlier article and is not repeated here.
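Once keepalived is running on both Directors, a few verification commands are worth running. This is a sketch; eth1 is the assumed public interface from the sample configuration, and by default the commands are only printed (set DRY_RUN=0 to execute them as root):

```shell
#!/bin/sh
# Post-deployment checks on the MASTER Director.
DRY_RUN=${DRY_RUN:-1}  # default: print commands instead of executing them
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

verify_lvs() {
  run ipvsadm -Ln             # virtual services and their real servers
  run ipvsadm -Ln --stats     # per-service traffic counters
  run ipvsadm -Lnc            # current connection table
  run ip addr show dev eth1   # the VIP should appear here on the MASTER
}
verify_lvs
```

Stopping keepalived on the master and re-running the last command on the backup is a simple way to confirm that the VIP fails over.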

This article is from "Walker--->" blog, please be sure to keep this source http://liumissyou.blog.51cto.com/4828343/1775079

