LVS + Keepalived + RabbitMQ High-Availability Load Balancing


Our team is about to refactor a project and evaluated several RabbitMQ deployment scenarios; this post focuses on the one the project will use. RabbitMQ itself is not explained in detail here; see the RabbitMQ Chinese manual for specifics. Here is the architecture diagram:

[Figure: RabbitMQ high-availability load balancing architecture]


The front end uses Keepalived + LVS for high-availability load balancing, and RabbitMQ HA (mirrored) queues replicate the message queues across nodes. Two nodes are built, both disk nodes (all node state is consistent and the nodes are fully equal), so as long as any one node is working, the RabbitMQ cluster can serve clients. A task-processing process watches each RabbitMQ node (each node has its own task-processing module deployed alongside it). Each task-processing module only handles the tasks accepted by its local rabbitmq-server; the two modules are functionally identical but independent of each other. When one RabbitMQ node goes down, its task-processing process stops without affecting the other node.


This example environment:

rabbitmq1: 192.168.1.121, hostname: initiator

rabbitmq2: 192.168.1.114, hostname: mygateway

Keepalived (MASTER) + LVS: 192.168.1.121

Keepalived (BACKUP) + LVS: 192.168.1.114

VIP: 192.168.1.120


Configure the RabbitMQ cluster:

Modify the /etc/hosts file on both nodes so that the nodes can reach each other by hostname:

[root@initiator ~]# cat /etc/hosts
192.168.1.121 initiator
192.168.1.114 mygateway

Install RabbitMQ on both nodes:

yum -y install rabbitmq-server

Configure the RabbitMQ Erlang cookie: set the contents of /var/lib/rabbitmq/.erlang.cookie to the same value on both nodes. Note that the file permissions must be 400 and the owner rabbitmq:rabbitmq.

[root@initiator ~]# cat /var/lib/rabbitmq/.erlang.cookie
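One way to sync the cookie (before rabbitmq-server is started) is to copy it from one node to the other; a minimal sketch, assuming root SSH access from initiator to mygateway:

[root@initiator ~]# scp /var/lib/rabbitmq/.erlang.cookie mygateway:/var/lib/rabbitmq/.erlang.cookie
[root@initiator ~]# ssh mygateway 'chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie; chmod 400 /var/lib/rabbitmq/.erlang.cookie'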

Start RabbitMQ on both nodes:

[root@initiator ~]# rabbitmq-server -detached
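You can confirm that each broker came up with rabbitmqctl's status subcommand:

[root@initiator ~]# rabbitmqctl status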

Run the following on one of the nodes to form the RabbitMQ cluster (here mygateway joins initiator):


[root@mygateway ~]# rabbitmqctl stop_app
[root@mygateway ~]# rabbitmqctl join_cluster --ram rabbit@initiator   # join as a RAM-type cluster node
[root@mygateway ~]# rabbitmqctl start_app

Check the RabbitMQ cluster status:

[root@initiator ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@initiator ...
[{nodes,[{disc,[rabbit@initiator,rabbit@mygateway]}]},
 {running_nodes,[rabbit@initiator,rabbit@mygateway]},
 {partitions,[]}]
...done.

Set the mirrored-queue policy (run on either node):

[root@initiator ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'

This makes every queue a mirrored queue: queues are replicated to each node, and the state of each node stays consistent.
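To confirm the policy was applied, you can list the policies on either node (the exact output format varies between RabbitMQ versions):

[root@initiator ~]# rabbitmqctl list_policies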

The RabbitMQ configuration is now complete. You can execute commands on either node to check that the two nodes are consistent:

[root@initiator ~]# rabbitmqctl add_vhost testcluster
Creating vhost "testcluster" ...
...done.
[root@initiator ~]# rabbitmqctl list_vhosts
Listing vhosts ...
/
testcluster
...done.
[root@mygateway ~]# rabbitmqctl list_vhosts
Listing vhosts ...
/
testcluster
...done.

As you can see, the vhost created on one node is replicated to the other node, and the two node states stay consistent. For more command-line operations, see rabbitmqctl --help.
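You can also query one node from the other without logging in to it, using rabbitmqctl's -n flag to address a node by name:

[root@initiator ~]# rabbitmqctl -n rabbit@mygateway list_vhosts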


Configure Keepalived + LVS:


LVS (Linux Virtual Server) is Linux's virtualized server implementation. It has been accepted into the Linux kernel; the kernel module is named ip_vs. Since kernel 2.6 the IP_VS code has been integrated into the mainline kernel, so as long as the kernel was compiled with IPVS support, Linux can run LVS. Kernel versions from 2.4.23 onward also include the IP_VS code; on older kernels you must manually patch IP_VS into the kernel source and recompile before you can use LVS. Users cannot manipulate the kernel module directly, so ip_vs comes with a user-space companion program, ipvsadm, which users run to configure LVS.
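A quick sanity check that the running kernel has IPVS support is to load the module and look for it:

[root@initiator ~]# modprobe ip_vs
[root@initiator ~]# lsmod | grep ip_vs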

Keepalived is a mechanism built specifically to give LVS high availability: with a master and a backup LVS, if the master fails, the VIP and the LVS service are transferred to the backup. Its high availability is based mainly on the VRRP protocol. VRRP is an "election" protocol that dynamically assigns the responsibility for a virtual router to one of the routers in the same VRRP group, eliminating the single point of failure of a static routing configuration. If a VRRP device uses the virtual router's IP address as a real interface address, that device is called the IP address owner; while the owner is available, it normally becomes the master.
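Once Keepalived is up (configured below), you can observe the master's VRRP advertisements on the wire, assuming tcpdump is installed:

[root@initiator ~]# tcpdump -i eth1 -n vrrp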

In this example LVS is paired with Keepalived, so the LVS side is relatively simple: adding real servers is handled in the Keepalived configuration file. The real servers themselves only need to bind the virtual IP; the script:

[root@initiator ~]# cat /etc/rc.d/init.d/realserver.sh
#!/bin/bash
# description: config realserver lo and apply noarp
SNS_VIP=192.168.1.120
. /etc/rc.d/init.d/functions
case "$1" in
start)
    ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP
    /sbin/route add -host $SNS_VIP dev lo:0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p >/dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $SNS_VIP >/dev/null 2>&1
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0

Keep the script identical on both nodes, then run realserver.sh:

[root@initiator ~]# chmod u+x /etc/rc.d/init.d/realserver.sh
[root@initiator ~]# /etc/rc.d/init.d/realserver.sh start

Check that the VIP is successfully bound to lo:

[root@initiator ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.1.120/32 brd 192.168.1.120 scope global lo:0
    inet6 ::1/128 scope host

Perform all of the above operations on both nodes.
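You can also confirm that the ARP suppression settings from the script took effect:

[root@initiator ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2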

Install Keepalived on both nodes:

[root@initiator ~]# yum -y install keepalived

Modify the Keepalived configuration file on the master node:

[root@initiator ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   !smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   persistence_timeout 7200
}

!vrrp_script chk_port {                    # define the check script; commented out here with "!"
!   script "/opt/checkrabbitmq.sh"         # path to the script
!   interval 2                             # check interval
!   weight 2
!}

vrrp_instance VI_1 {
    state MASTER                  # change to BACKUP on the backup node
    interface eth1                # change to the local NIC name
    virtual_router_id 52
    priority 100                  # on the backup node, set this lower than 100, as appropriate
    advert_int 1
    authentication {
        auth_type PASS            # must match on the backup node
        auth_pass 1111            # must match on the backup node
    }
!   track_script {
!       chk_port                  # call the check script
!   }
    virtual_ipaddress {
        192.168.1.120/32 dev eth1 label eth1:0   # bind the VIP to the NIC
    }
}

virtual_server 192.168.1.120 5672 {       # virtual IP instance
    delay_loop 6
    lb_algo rr                    # LVS scheduling algorithm; here round robin
    lb_kind DR                    # LVS mode; here DR, i.e. direct routing
    #nat_mask 255.255.255.0
    #persistence_timeout 50
    protocol TCP
    real_server 192.168.1.121 5672 {      # back-end real server: IP and port of a host actually running RabbitMQ
        weight 2
        TCP_CHECK {               # RabbitMQ port check
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 5672
        }
    }
    real_server 192.168.1.114 5672 {      # the second RabbitMQ node
        weight 2
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 5672
        }
    }
}

Start Keepalived on both nodes:

[root@initiator ~]# /etc/init.d/keepalived start

On the master node, check whether the VIP has been bound to the eth1 NIC:

[root@initiator ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 192.168.1.120/32 brd 192.168.1.120 scope global lo:0
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:27:b0:80 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.121/24 brd 192.168.1.255 scope global eth1
    inet 192.168.1.120/32 scope global eth1:0
    inet6 fe80::250:56ff:fe27:b080/64 scope link
       valid_lft forever preferred_lft forever

As you can see, the VIP has been successfully bound to eth1 (as eth1:0).

View the Keepalived logs and the LVS status:

[root@initiator ~]# tail /var/log/messages
[root@initiator ~]# ipvsadm -Ln    # view LVS status; if the tool is missing, yum -y install ipvsadm

Actions to perform after the RabbitMQ health check succeeds or fails can be added in the real_server section:

notify_up $PATH/script.sh      # script executed when the service is detected as up; can send a mail alert, etc.
notify_down $PATH/script.sh    # script executed when the service is detected as down, e.g. mail "RabbitMQ at this IP went down"
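As a minimal sketch of such a script (the path /opt/notify_down.sh, the recipient address, and the availability of the mail command are all assumptions, not from the original setup):

[root@initiator ~]# cat /opt/notify_down.sh
#!/bin/bash
# hypothetical notify_down script: mail an alert when the health check fails
echo "RabbitMQ real server failed its health check on $(hostname) at $(date)" \
  | mail -s "RabbitMQ DOWN (seen from $(hostname))" ops@example.com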

In real-world deployments, the backup takes over the resources when the master dies. But when the master recovers, by default it preempts the resources, becomes master again, and rebinds the VIP to itself; client connections may be interrupted at that moment. In production you therefore usually disable preemption (nopreempt): a node that comes back to life does not take the master role back and keeps running as the standby. However, nopreempt can only be set when state is BACKUP, so set state to BACKUP on both nodes and let the priorities (one high, one low) decide which becomes master.

Make these simple changes to the Keepalived configuration:

state BACKUP           # change MASTER to BACKUP on both nodes
virtual_router_id 60   # change the default 51 to 60 on both master and backup
priority 100           # priority (1-254); set the other node to 90; the backup must be lower than the master
nopreempt              # do not preempt resources: a node that comes back up will not take master back; not needed on the backup node
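A quick way to verify the failover behavior (a sketch; run the stop on whichever node currently holds the VIP):

[root@initiator ~]# /etc/init.d/keepalived stop       # simulate a master failure
[root@mygateway ~]# ip add | grep 192.168.1.120       # the VIP should now appear on the other node
[root@initiator ~]# /etc/init.d/keepalived start      # with nopreempt, the VIP stays on mygateway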

Clients connect to 192.168.1.120:5672 and can perform queue operations exactly as with a single RabbitMQ node; LVS round-robins client connections across the two real servers. In a production environment, set the Keepalived, RabbitMQ, and LVS realserver scripts to start at boot.
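From a client machine outside the cluster (in DR mode, tests from the director or the real servers themselves can be misleading), a quick connectivity check, assuming nc is installed:

nc -zv 192.168.1.120 5672    # repeat a few times and watch ipvsadm -Ln on the director: the connection counters should alternate between the two real servers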

This article is from the "Diannaowa" blog; please keep this source when reposting: http://diannaowa.blog.51cto.com/3219919/1671623

