Use ldirectord to monitor backend RS Health Status and implement LVS Scheduling


Ldirectord function description:

If ipvsadm is defined directly as a resource in a high-availability service and is used only to generate the ipvs rules, the resulting LVS cannot monitor the health of the backend RealServers. The ldirectord program shipped with heartbeat, however, can monitor the health status of the backend RealServers and, at the same time, use the ipvs functionality in the kernel to start the LVS service from ipvsadm rules and schedule requests to the backend RealServers: when a request reaches the front-end director, it is forwarded to a RealServer. The defined ipvsadm rules are saved in a configuration file, so the ipvsadm service itself does not need to be started; the ldirectord resource agent brings up the ipvs service. In addition, the front-end director is configured with a VIP address, which is managed as a high-availability resource so that the director role can fail over between nodes.


Lab Host:

Director1: 172.16.103.1

Director2: 172.16.103.2

Realserver1: 172.16.103.3

Realserver2: 172.16.103.4


Procedure:

1. Install heartbeat on the two front-end hosts and synchronize their time. The two directors must resolve each other's host names through the /etc/hosts file and communicate over SSH with public-key authentication. For details on these configurations, see the previous blog post.
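For reference, a minimal sketch of these prerequisites; the host names node1/node2 and the time server are assumptions and not taken from this post:

# echo "172.16.103.1 node1.cluster.com node1" >> /etc/hosts     (on both directors)
# echo "172.16.103.2 node2.cluster.com node2" >> /etc/hosts
# ntpdate pool.ntp.org                                          (or your own time server)
# ssh-keygen -t rsa -P ''                                       (then repeat from the other node)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2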

2. Install ldirectord on both directors:

# yum install heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm

After installation, ldirectord's resource agent script is placed in heartbeat's ha.d/resource.d directory, and a sample configuration file is also provided. These files are the configuration interface used to make the ipvs rules highly available and to define the backend RealServers.

# rpm -ql heartbeat-ldirectord
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/heartbeat-ldirectord-2.1.4
/usr/share/doc/heartbeat-ldirectord-2.1.4/COPYING
/usr/share/doc/heartbeat-ldirectord-2.1.4/README
/usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz

Copy the sample configuration file of ldirectord to the /etc/ha.d directory:

# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d

Edit the ldirectord configuration file to define the IP addresses of the backend RealServers and the LVS forwarding mode to use; gate means the DR model. A fallback server is also defined here: when the backend RealServers fail, it displays a message telling users that the service is currently unavailable. The request and receive entries are used to check the health status of the backend RealServers, so you must provide the named file in the site root of each RealServer with its content set to OK, which allows the front-end ldirectord service to detect whether a backend RealServer is online (see the sketch after the configuration below).

# cd /etc/ha.d
# vim ldirectord.cf
# Sample for an http virtual service
virtual=172.16.103.50:80
        real=172.16.103.3:80 gate
        real=172.16.103.4:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        request="index.html"
        receive="OK"
        virtualhost=some.domain.com.au
        scheduler=rr
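Each RealServer therefore needs to serve the request file with the expected content, and each director needs a local page for the 127.0.0.1 fallback. A minimal sketch, assuming the default Apache document root /var/www/html:

# echo "OK" > /var/www/html/index.html                          (on each RealServer)
# echo "Site under maintenance" > /var/www/html/index.html      (on each director, served by the fallback)
# service httpd start                                           (on every host above)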

In addition, you must first configure the ipvs rules with the ipvsadm command and then save them to ipvsadm's default rule file (/etc/sysconfig/ipvsadm) for ldirectord to use.

# ipvsadm -A -t 172.16.103.50:80 -s rr       # set the VIP address to 172.16.103.50
# ipvsadm -a -t 172.16.103.50:80 -r 172.16.103.3 -g
# ipvsadm -a -t 172.16.103.50:80 -r 172.16.103.4 -g
# service ipvsadm save
# service ipvsadm stop
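service ipvsadm save writes the rules to /etc/sysconfig/ipvsadm in ipvsadm-save format; the saved file should look roughly like this (an illustration, not copied from the original post):

-A -t 172.16.103.50:80 -s rr
-a -t 172.16.103.50:80 -r 172.16.103.3:80 -g -w 1
-a -t 172.16.103.50:80 -r 172.16.103.4:80 -g -w 1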

Start ldirectord on one of the high-availability nodes to test whether the ipvs rules take effect:

# service ldirectord start
Starting ldirectord... success
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.103.50:80 rr
  -> 172.16.103.3:80              Route   1      0          0
  -> 172.16.103.4:80              Route   1      0          0
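If the rules do not appear as above, ldirectord's log file is the first place to look; assuming the logfile setting from the sample configuration is kept:

# tail /var/log/ldirectord.log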

After the rules take effect, you can stop the httpd service on the backend RealServers and check whether the front-end ldirectord switches to the fallback server.

Stop the httpd service on the RealServers:

# service httpd stop

Check the ipvs rules on the director:

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.103.50:80 rr
  -> 127.0.0.1:80                 Local   1      0          0
  -> 172.16.103.3:80              Route   0      0          0
  -> 172.16.103.4:80              Route   0      0          0
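A request to the VIP from any client should now be answered by the fallback server; a quick check (assuming curl is installed):

# curl http://172.16.103.50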

After the rule file and the resource agent configuration are ready, copy the two files to the other high-availability node:

# scp /etc/ha.d/ldirectord.cf node2:/etc/ha.d
# scp /etc/sysconfig/ipvsadm node2:/etc/sysconfig

After the replication is complete, use the same method on the other node to test whether the ldirectord service can be used normally.

3. Configure the heartbeat haresources file to define the resources, resource agents, and preferred node of the high-availability cluster:

node2.cluster.com 172.16.103.50/16/eth0/172.16.103.255 ldirectord::/etc/ha.d/ldirectord.cf
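heartbeat expects ha.cf, authkeys, and haresources to be identical on both nodes, so this file also has to be copied over, for example:

# scp /etc/ha.d/haresources node2:/etc/ha.d/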

Start the heartbeat service on both nodes:

# service heartbeat start
# ssh node2 'service heartbeat start'

The specific configuration of the RealServers under the DR model is not covered here; refer to the previous DR model blog post.
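For completeness, the usual DR-model RealServer setup binds the VIP to the loopback interface and suppresses ARP replies for it; a typical sketch (standard LVS-DR configuration, not taken from this post):

# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
# ifconfig lo:0 172.16.103.50 netmask 255.255.255.255 broadcast 172.16.103.50 up
# route add -host 172.16.103.50 dev lo:0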

Enter the defined VIP address, 172.16.103.50, in a browser. The result is as follows:

(Screenshot of the browser accessing 172.16.103.50: http://s3.51cto.com/wyfs02/M01/49/1B/wKiom1QPHqfQpw4lAADHKLOGe04324.jpg)

On the high-availability node where the resources are running, you can see the following:

# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E1:37:51
          inet addr:172.16.103.2  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fee1:3751/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:184223 errors:0 dropped:0 overruns:0 frame:0
          TX packets:125070 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:110659582 (105.5 MiB)  TX bytes:21467745 (20.4 MiB)
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:E1:37:51
          inet addr:172.16.103.50  Bcast:172.16.103.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.103.50:80 rr
  -> 172.16.103.3:80              Route   1      0          0
  -> 172.16.103.4:80              Route   1      0          0

If the backend RealServers all fail at the same time, you can see the fallback server take over in the ipvs rules:

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.103.50:80 rr
  -> 127.0.0.1:80                 Local   1      0          0
  -> 172.16.103.3:80              Route   0      0          1
  -> 172.16.103.4:80              Route   0      0          1

In this way, the front end is a highly available LVS director pair, and the back end is a simple httpd load-balancing cluster.
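To confirm that the director itself fails over, you can stop heartbeat on the active node and check that the VIP and the ipvs rules move to the other node; a simple manual test (not shown in the original post):

# service heartbeat stop            (on the currently active director)
# ssh node2 'ipvsadm -L -n'         (the rules should now be active on the standby node)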

