Keepalived+LVS+HTTPD Load Balancing


I have recently been researching load balancing, and am currently studying the Keepalived+LVS model.

1. Software Introduction

Keepalived: as the name implies, it keeps services alive. It is often used to build high-availability setups and to prevent a single point of failure in core business equipment. Keepalived is primarily used for health checking of the real servers and for failover between the load-balancer master host and the backup host.

Single point of failure: a failure at one point in the company's business flow that makes the entire system architecture unavailable. Single points of failure often occur in databases, core business systems, and so on. Our solution is to apply high-availability load balancing to those core business systems.

LVS: Linux Virtual Server, a virtual server cluster system for Linux. It currently provides three load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and ten scheduling algorithms (RR, WRR, LC, WLC, LBLC, LBLCR, DH, SH, SED, NQ).
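As a rough illustration of the rr (round-robin) algorithm used later in this article: each incoming request is simply handed to the next real server in turn. The sketch below is only a user-space simulation for understanding, not LVS itself (the real scheduling happens inside the kernel IPVS module); it uses the two web-server IPs from this experiment.

```shell
#!/bin/bash
# Simulate round-robin (rr) scheduling over the two real servers
# from this experiment. Illustration only -- not how LVS is invoked.
servers=(10.68.4.198 10.68.4.248)
assigned=()
for req in 0 1 2 3 4 5; do
    target=${servers[$((req % 2))]}   # pick the next server in turn
    assigned+=("$target")
    echo "request $((req + 1)) -> $target"
done
```

With two equal-weight servers, wrr (weighted round-robin) would behave the same; unequal weights would skew the rotation toward the heavier server.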

2. Experimental Topology Diagram

In this experiment, we used four servers: two were used to build Keepalived+LVS, and the other two were web servers providing service to the outside world.

Five IP addresses were used in this experiment:

master: 10.68.4.201    backup: 10.68.4.58

web1: 10.68.4.198    web2: 10.68.4.248

virtual IP: 10.68.4.199

3. Introduction to the Topology

The Keepalived master and backup communicate with each other via the VRRP protocol over multicast. The master host accepts requests and forwards them to the backend real servers; the backup host only receives VRRP advertisements and does not forward requests. When the backup host stops receiving advertisements from the master, it sends out its own VRRP advertisements and broadcasts gratuitous ARP messages, declaring itself master. If another host's advertisement carries a higher priority than its own, it remains backup; the machine with the highest priority becomes the new master and takes over the original master's work.

Both Keepalived machines monitor the backend real servers, but only the master forwards external requests to them; the backup does no forwarding.
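The VRRP election rule described above amounts to highest-advertised-priority-wins. A minimal sketch, assuming the priority 150 that the master configuration below uses; the value 100 for the backup is an assumed example, since this article does not give the backup's exact priority:

```shell
#!/bin/bash
# Sketch of the VRRP election rule: the host advertising the highest
# priority becomes master. Priorities here are illustrative (150 matches
# the master config in this article; 100 for the backup is assumed).
declare -A priority=( [10.68.4.201]=150 [10.68.4.58]=100 )
master=""
best=0
for host in "${!priority[@]}"; do
    if (( priority[$host] > best )); then
        best=${priority[$host]}
        master=$host
    fi
done
echo "master: $master (priority $best)"
```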

4. Keepalived Installation

For Keepalived installation, refer to this blog: http://my.oschina.net/zyc1016/blog/138574?p=2#comments

For Keepalived configuration, refer to this blog: http://bbs.nanjimao.com/thread-845-1-1.html

If the following error occurs during installation:

checking for IPVS syncd support ... yes
checking for kernel macvlan support ... no
checking whether SO_MARK is declared ... no
configure: error: No SO_MARK declaration in headers

then you just need to add the --disable-fwmark parameter when running configure. Finally, make sure the following items in the configure summary are all yes:

Use IPVS Framework : Yes
IPVS sync daemon support : Yes
IPVS use libnl : Yes
Use VRRP Framework : Yes
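As a convenience, the summary check can be automated. A minimal sketch that scans a saved copy of the configure summary (the file path /tmp/keepalived_summary.txt is a hypothetical example) and flags any feature that came out "No":

```shell
#!/bin/bash
# Scan a saved configure summary (hypothetical path) and flag any
# required feature reported as "No".
cat > /tmp/keepalived_summary.txt <<'EOF'
Use IPVS Framework : Yes
IPVS sync daemon support : Yes
IPVS use libnl : Yes
Use VRRP Framework : Yes
EOF
if grep -qi ': *no' /tmp/keepalived_summary.txt; then
    result="missing required support; re-run configure"
else
    result="all required features enabled"
fi
echo "$result"
```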

! Configuration file for keepalived
global_defs {
    notification_email {
    }
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {            # define a VRRP group; the group name must be unique
    state MASTER                # this host is the keepalived master
    interface eth0              # interface to monitor
    virtual_router_id 58        # virtual router ID 58; must be unique, and determines the multicast MAC address
    priority 150                # priority of this node; the master's must be higher than the backup's
    advert_int 1                # advertisement interval; the default is 1 second
    authentication {
        auth_type PASS          # authentication method: password
        auth_pass 1111          # password; must match the backup's
    }
    virtual_ipaddress {         # the virtual IP that will later provide service
        10.68.4.199
    }
}

virtual_server 10.68.4.199 80 { # virtual server; same IP as above
    delay_loop 2                # interval between health-check polls of the real servers
    lb_algo rr                  # LVS scheduling algorithm
    lb_kind DR                  # LVS cluster mode
    nat_mask 255.255.255.0
    persistence_timeout 50      # session persistence timeout, 50 s
    protocol TCP                # whether health checks use TCP or UDP

    real_server 10.68.4.248 80 {   # backend real server 1
        weight 100              # weight of this machine; 0 means no requests are forwarded to it until it recovers
        TCP_CHECK {             # health-check settings
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 10.68.4.198 80 {   # backend real server 2
        weight 100              # weight of this machine; 0 means no requests are forwarded to it until it recovers
        TCP_CHECK {             # health-check settings
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

The configuration of the backup host is the same as above; only the following needs to be modified:

state BACKUP    # this host is the keepalived backup, monitoring the master
priority        # set this node's priority to a value lower than the master's

Next, test Keepalived failover.

First, start keepalived on both 10.68.4.201 and 10.68.4.58 at the same time, and observe the master and backup hosts.

You can see that the virtual IP is now bound to 4.201 (the master). Then stop the keepalived service on 4.201.

By looking at the logs on 4.58, you can see that 4.58 has declared itself master, and the virtual IP has drifted to the new master machine.

With the above configuration, Keepalived's high-availability features have been implemented.

5. Real Server Configuration

In this example, I installed httpd on 4.198 and 4.248 to simulate two web servers. Then, to configure the virtual IP (VIP) on both servers, I wrote a script.

#!/bin/bash
# description: configure lo:0 with the VIP and suppress ARP replies
#              on an LVS-DR real server
WEB_VIP=10.68.4.199   # the virtual IP (VIP)
. /etc/rc.d/init.d/functions

case "$1" in
start)
    ifconfig lo:0 $WEB_VIP netmask 255.255.255.255 broadcast $WEB_VIP
    /sbin/route add -host $WEB_VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/eth0/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/default/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/default/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "realserver start ok"
    ;;
stop)
    ifconfig lo:0 down
    route del $WEB_VIP >/dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/eth0/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/default/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/default/arp_announce
    echo "realserver stopped"
    ;;
status)
    # status of the LVS-DR real server
    islothere=`/sbin/ifconfig lo:0 | grep $WEB_VIP`
    isrothere=`netstat -rn | grep "lo:0" | grep $WEB_VIP`
    if [ ! "$islothere" -o ! "$isrothere" ]; then
        # either the route or the lo:0 device was not found
        echo "LVS-DR real server stopped."
    else
        echo "LVS-DR running."
    fi
    ;;
*)
    # invalid entry
    echo "$0: usage: $0 {start|status|stop}"
    exit 1
    ;;
esac
exit 0

Execute this script on 4.198 and 4.248, respectively. At this point, we can see the same information on the two keepalived hosts.

Now, when we access 10.68.4.199 through a web page and query the real-server order with ipvsadm, we can see that 4.198 is accessed first.

When we stop the httpd service on 4.198, we first see that both keepalived hosts remove 4.198 from the pool. Querying ipvsadm again now shows only one web server, and page access goes to 4.248.

And when we bring 4.198 back up, keepalived automatically adds the server back to the pool.

From the above test, the load balancing function of LVS has been realized.

6. Problems I have encountered

The page did not display anything, and I just could not find where the problem was. Later I saw on a forum that another user had encountered the same problem: it is caused by a formatting error in the keepalived.conf file. I went back and checked, and found that I had missed a "}". Keepalived does not check the syntax of the keepalived.conf configuration file when it starts, so be extra careful when writing the configuration.
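Since keepalived does not validate the configuration at startup, a crude brace-count check can catch a missing "}" before restarting the service. A minimal sketch; the sample fragment below (written to a hypothetical test path) deliberately omits one closing brace to show the failure case:

```shell
#!/bin/bash
# Crude sanity check: count "{" against "}" in a keepalived.conf.
# The sample fragment is deliberately missing one closing brace.
cat > /tmp/keepalived_test.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    virtual_ipaddress {
        10.68.4.199
}
EOF
open=$(grep -o '{' /tmp/keepalived_test.conf | wc -l)
close=$(grep -o '}' /tmp/keepalived_test.conf | wc -l)
if [ "$open" -eq "$close" ]; then
    echo "braces balanced ($open pairs)"
else
    echo "brace mismatch: $open '{' vs $close '}'"
fi
```

A mismatch here does not prove the file is otherwise valid, but it catches the exact class of error described above.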

