Heartbeat + ldirectord + lvsnat

Ldirectord can be used to manage lvs: it periodically checks the backend realservers, removes a realserver from the lvs table when it detects a fault, and adds it back automatically once the fault is resolved. Let's take a look.

Lvs (director) end:

Node1
eth0: 192.168.3.124
eth0:0: 192.168.8.124 (for heartbeat)
vip: 192.168.3.233

Node2
eth0: 192.168.3.126
eth0:0: 192.168.8.126 (for heartbeat)
vip: 192.168.3.233

Realserver end:

Web1
eth0: 192.168.3.128
eth0:0: 192.168.8.128
vip: 192.168.3.233

Web2
eth0: 192.168.3.129
eth0:0: 192.168.8.129
vip: 192.168.3.233

1. We recommend configuring lvs first; once it works without problems, configure the remaining parts.

Configure lvs nat mode on node1 and node2; node1 is used as the example.

Note: first create a vip on node1 by hand for lvs testing. If the test passes, delete it and let heartbeat manage the vip instead, which reduces the chance of errors.

Node1 lvs configuration:

ifconfig eth0:1 192.168.3.233 netmask 255.255.255.0
ipvsadm -A -t 192.168.3.233:80 -s wrr
ipvsadm -a -t 192.168.3.233:80 -r 192.168.8.128:80 -m
ipvsadm -a -t 192.168.3.233:80 -r 192.168.8.129:80 -m
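
One step the walkthrough leaves implicit: in NAT mode the director must forward packets between the client-facing and realserver-facing networks, so IP forwarding has to be enabled on node1 and node2 (standard Linux sysctl, nothing specific to this setup):

echo 1 > /proc/sys/net/ipv4/ip_forward
# make it survive a reboot:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p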

Configuration on realservers web1 and web2:

ip route add 192.168.3.0/24 via 192.168.3.233

Note: this directs the realservers' traffic for the client network back through the vip 192.168.3.233, so replies pass through the director.
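
To confirm the route took effect on each realserver, a quick check with plain iproute2 (the line in the comment assumes the command above succeeded):

ip route show | grep 192.168.3.0
# should include: 192.168.3.0/24 via 192.168.3.233 dev eth0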

Test after configuration:

[root@usvr-126 ha.d]# curl 192.168.3.233/1.html
hello, 192.168.3.128
[root@usvr-126 ha.d]# curl 192.168.3.233/1.html
hello, 192.168.3.129

The alternating responses above show that lvs is configured correctly. Tear down the test setup (see the two commands below) and perform the same configuration on node2.
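
Since heartbeat and ldirectord will recreate the virtual service themselves, a minimal teardown, using the same tools as the setup:

ifdown eth0:1   # drop the test vip
ipvsadm -C      # clear the virtual server table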

2. Configure ldirectord

rpm -ql heartbeat-ldirectord
cp /usr/share/doc/ldirectord-1.0.4/ldirectord.cf /etc/ha.d
vim /etc/ha.d/ldirectord.cf

# Global Directives
checktimeout=20         # seconds without a response before a real server is considered failed
checkinterval=10        # interval between two ldirectord checks
fallback=127.0.0.1:80   # where web requests are redirected when no real server node can work
autoreload=yes          # if yes, the configuration is reloaded automatically whenever this file changes
logfile="/var/log/ldirectord.log"   # path of the ldirectord log file
quiescent=no            # no: a node that fails to answer within checktimeout is removed from the
                        #     LVS routing table outright; existing client connections are broken and
                        #     LVS drops all connection tracking records and persistence templates.
                        # yes: a failed real server has its weight set to 0 so no new connections
                        #     reach it, but it is not cleared from the LVS routing table, and
                        #     connection tracking records and persistence templates are kept on
                        #     the director.

# Sample for an http virtual service
virtual=192.168.3.233:80            # virtual IP and port; the lines that follow it must be indented
        real=192.168.8.128:80 masq  # real server address/port and LVS forwarding method:
                                    # gate = DR mode, ipip = TUN mode, masq = NAT mode
        real=192.168.8.129:80 masq
        fallback=127.0.0.1:80 masq
        service=http                # service type to balance; http here
        request="ipvsadm.html"      # page ldirectord requests on each real server to verify the
                                    # service is healthy; make sure the page is reachable, otherwise
                                    # ldirectord will wrongly conclude the node has failed
        receive="test OK"           # string expected in the response to the request above
        scheduler=wrr               # scheduling algorithm; wrr here
        protocol=tcp                # protocol type; LVS supports tcp and udp
        checktype=negotiate         # ldirectord check type; the default is negotiate
        checkport=80                # port to monitor
        # virtualhost=www.gaojf.com # virtual host name of the virtual server

Create the check page ipvsadm.html on web1 and web2:

echo "test OK" > ipvsadm.html

After configuring ldirectord on node1, copy it to node2.
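
For example, with scp (node2's address taken from the topology above):

scp /etc/ha.d/ldirectord.cf 192.168.3.126:/etc/ha.d/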

3. Configure heartbeat

For the heartbeat configuration, see the previous blog post "heartbeat implements nginx hot backup". Here you only need to change the /etc/ha.d/haresources resources; nothing else needs to be touched. As follows:

Node1 /etc/ha.cf:

logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth0 192.168.8.126
auto_failback on
node usvr-124.cityre.cn
node usvr-126.cityre.cn
ping 192.168.3.1
respawn hacluster /usr/lib64/heartbeat/ipfail

Node2 /etc/ha.cf:

logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth0 192.168.8.124
auto_failback on
node usvr-124.cityre.cn
node usvr-126.cityre.cn
ping 192.168.3.1
respawn hacluster /usr/lib64/heartbeat/ipfail

Node1 and node2 /etc/ha.d/haresources:

usvr-124.cityre.cn IPaddr::192.168.3.233/24/eth0 ldirectord::ldirectord.cf

Note: each node must be able to resolve the other's host name, so add both hosts to /etc/hosts on node1 and node2:

192.168.3.124 usvr-124.cityre.cn
192.168.3.126 usvr-126.cityre.cn

Note: the vip here is our lvs vip, which is what lets the vip drift between the two directors.
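
Before handing the resources to heartbeat, the two resource scripts can be exercised by hand; heartbeat R1 looks them up in /etc/ha.d/resource.d, and the invocations below are a sanity-check sketch based on the usual <script> <argument> start|stop convention:

/etc/ha.d/resource.d/IPaddr 192.168.3.233/24/eth0 start
/etc/ha.d/resource.d/ldirectord ldirectord.cf start
# stop both again before starting heartbeat:
/etc/ha.d/resource.d/ldirectord ldirectord.cf stop
/etc/ha.d/resource.d/IPaddr 192.168.3.233/24/eth0 stop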

4. Test

After heartbeat is configured, start it: service heartbeat start
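
Heartbeat's progress can be followed in the log file configured in ha.cf, which helps while waiting out the long initdead period:

tail -f /var/log/ha-log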

1. Check whether the vip is up.

[root@usvr-124 ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:89:a2:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.124/24 brd 192.168.3.255 scope global eth0
    inet 192.168.8.124/24 brd 192.168.8.255 scope global eth0:0
    inet6 fe80::250:56ff:fe89:a216/64 scope link
       valid_lft forever preferred_lft forever

Before heartbeat has acquired the resources there is no vip yet; once it is up, the same command shows the vip as a secondary address:

[root@usvr-124 ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:89:a2:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.124/24 brd 192.168.3.255 scope global eth0
    inet 192.168.8.124/24 brd 192.168.8.255 scope global eth0:0
    inet 192.168.3.233/24 brd 192.168.3.255 scope global secondary eth0
    inet6 fe80::250:56ff:fe89:a216/64 scope link
       valid_lft forever preferred_lft forever

2. View the lvs table

[root@usvr-124 ha.d]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.3.233:80 wrr
  -> 192.168.8.128:80             Masq    1      0          0
  -> 192.168.8.129:80             Masq    1      0          0

3. Stop nginx on web1 and view the lvs table after about 20 seconds (the checktimeout set above):

[root@usvr-124 ha.d]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.3.233:80 wrr
  -> 192.168.8.129:80             Masq    1      0          0

192.168.8.128:80 (web1) has been removed from the lvs table, which shows that ldirectord is working.
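
To watch removals and re-additions happen live instead of re-running the command by hand, poll the table (standard watch usage):

watch -n 1 'ipvsadm -L -n'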

4. Restart nginx on web1 and view the lvs table.

[root@usvr-124 ha.d]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.3.233:80 wrr
  -> 192.168.8.128:80             Masq    1      0          0
  -> 192.168.8.129:80             Masq    1      0          0

Web1 (192.168.8.128:80) is added back to the lvs table.

5. Stop nginx on both web1 and web2 and view the lvs table.

[root@usvr-124 nginx1.6]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.3.233:80 wrr
  -> 127.0.0.1:80                 Local   1      0          0

When all lvs realserver nodes are down, the fallback 127.0.0.1:80 on node1 takes over. Accessing the vip with

curl 192.168.3.233/1.html

now displays "the page is being maintained", which improves the user experience.
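
For the fallback to actually serve that text, node1 and node2 each need a local web server answering on 127.0.0.1:80 with a maintenance page; a minimal sketch, assuming nginx and its default document root (both the path and the wording are placeholders):

echo "the page is being maintained" > /usr/share/nginx/html/1.html
service nginx start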

6. If node1 goes down, node2 takes over the vip

Run service heartbeat stop on node1, then view the ip addresses on node2:

[root@usvr-126 ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:89:91:a0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.126/24 brd 192.168.3.255 scope global eth0
    inet 192.168.8.126/24 brd 192.168.8.255 scope global eth0:0
    inet 192.168.3.233/24 brd 192.168.3.255 scope global secondary eth0
    inet6 fe80::250:56ff:fe89:91a0/64 scope link
       valid_lft forever preferred_lft forever

7. View the lvs table on node2

[root@usvr-126 ha.d]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.3.233:80 wrr
  -> 192.168.8.128:80             Masq    1      0          0
  -> 192.168.8.129:80             Masq    1      0          0

Summary:

1. Lvs does not run on node1 and node2 at the same time: when the vip is on node1, lvs is stopped on node2; when the vip is on node2, lvs is stopped on node1.

2. Ldirectord does not have to be paired with heartbeat; it can also be used on its own to manage lvs, that is, without adding ldirectord to the heartbeat resource file.
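
Running it standalone is just a matter of invoking the daemon with the configuration file; a sketch, assuming the file still lives in /etc/ha.d:

ldirectord /etc/ha.d/ldirectord.cf start
ldirectord /etc/ha.d/ldirectord.cf status
ldirectord /etc/ha.d/ldirectord.cf stop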
