How to Make LVS and RealServer work on the same machine?


We have a simple, inexpensive LVS-DR setup: two servers run the database service, and client requests are directed either to the local machine or to the other server. LVS-DR is pure layer-4 scheduling, which is my favorite method. This time, I want to run the LVS director directly on these two servers rather than on separate machines.

Therefore, keepalived is used for the configuration, with a Master/Backup pair running on these same two machines. On paper, this is a perfect solution.
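For concreteness, here is a minimal sketch of such a Master/Backup pair in keepalived (the interface name eth0, the virtual_router_id, the priorities, and the VIP 192.168.1.250 are illustrative assumptions, not values from this setup):

vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the second machine
    interface eth0               # assumed interface name
    virtual_router_id 51         # any value, identical on both machines
    priority 150                 # use a lower value, e.g. 100, on the backup
    advert_int 1
    virtual_ipaddress {
        192.168.1.250            # hypothetical VIP
    }
}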

However, after I asked a colleague to help with the configuration, it never worked properly. The following is a simple architecture diagram and the configuration output:

The structure is as follows:

  ip_vs() balances on VIP:port

            CIP             CIP
             |               |
             v               v
            VIP             VIP
             |               |
    CIP -> MAC of eth0 on backup    (normal packet, DR-forwarded by the active director)
    MAC of eth0 on active <- CIP    (spurious packet, bounced back by the backup)
             |               |
             -----------------
             |               |
         eth0 VIP        eth0 VIP
         ________        ________
        |        |      |        |
        | active |      | backup |
        |________|      |________|

The configuration output is as follows. You can see the difference from an ordinary setup: one of the realservers is marked Local.

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
FWM  6 rr
  -> 192.168.1.233:3306           Local   1      0          0
  -> 192.168.1.213:3306           Route   1      0          0
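For reference, a table like this could be created by hand roughly as follows (a sketch only; in this article keepalived generates the rules, and the firewall mark 6 is taken from the output above):

# Virtual service keyed on firewall mark 6, round-robin scheduling
ipvsadm -A -f 6 -s rr
# This machine itself is one realserver; IPVS shows a local address as "Local"
ipvsadm -a -f 6 -r 192.168.1.233:3306 -g -w 1
# The other machine, reached via direct routing (-g), shows as "Route"
ipvsadm -a -f 6 -r 192.168.1.213:3306 -g -w 1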

The configuration on the two machines is like this (^-^ with only small differences, as seen above). In this architecture, things work normally only if I stop keepalived on the backup; otherwise connections to one of the databases are interrupted and hang, and a packet capture shows a large number of packets being forwarded back and forth continuously.

Let's analyze it in detail.

  1. The client sends a connection request to the specified port of the VIP.
  2. The director currently holding the VIP selects one of the two realservers for the request: it delivers the packet either to the local node (localnode) or to eth0 on the backup machine, addressed by MAC (the backup is also a RealServer). As in normal LVS, these packets should end up with the program listening on this VIP.
  3. If the packet is sent to the eth0 interface of the backup director, it is not received directly by the program listening on the specified port, because the packet first passes through ip_vs().
  4. The backup's ip_vs() balances the packet again: with a 50% chance it is delivered locally, a standard response packet is generated for the client, and from the client's point of view LVS works normally. But we want all of these packets to be handed straight to the listening daemon; we do not want ip_vs() to forward them again.
  5. The other 50% of the packets are forwarded straight back to eth0/VIP on the master LVS (the tcpdump sketch after this list shows how to observe this).
  6. We do not want packets to be bounced from the backup LVS back to the master LVS.
  7. Therefore, a packet arriving on eth0 for the VIP should be processed by ip_vs() only if it was not sent by the other LVS director.
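To observe the loop described in steps 5 and 6, capture on either director with link-level headers and watch the source MAC of packets addressed to the VIP. A diagnostic sketch ($VIP stands for your virtual IP; eth0 is the assumed interface):

# -e prints the Ethernet header, so the source MAC of each packet is visible;
# looping packets show the other director's MAC as their source.
tcpdump -e -n -i eth0 host $VIP and tcp port 3306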

Simply put: the client sends a packet to the VIP. While our director1 (master) holds the VIP, LVS receives the packet and load-balances it according to the keepalived configuration: director1 uses the LVS-DR mechanism to route the packet either to itself or to director2 (backup).

In this example keepalived is used, and director2 is the backup server for the VIP. By default, keepalived configures the ipvsadm rules on the backup immediately as well, to achieve faster failover. As a result, all the same LVS rules also exist on the backup host director2.

Here is the problem. With round-robin (rr) scheduling, director1 (master) forwards roughly 50% of the packets to port 3306 on director2 (backup). Because director2 carries the same LVS-DR rules, it load-balances those packets once more and sends about half of them back to director1. An endless loop results.

Over time, not only do the servers fail to process connections normally, they can also be dragged down entirely as packets bounce endlessly between the two directors and the backend.

Solution: mark every packet entering eth0 that is addressed to VIP:port and whose source MAC is not the other LVS director, and have LVS balance on that fwmark instead of on the VIP and port. A packet that reaches VIP:port from the other director's MAC address (LVS-DR forwarding) carries no mark, so it is not load-balanced again but is handed directly to the listening daemon for processing. In practice, we use iptables to set the mark on incoming traffic, then configure keepalived to process only the traffic carrying that mark, instead of binding to the VIP and port as before.
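The rules below need the peer director's MAC address and the VIP/port as variables. One way to fill them in, as a sketch (the VIP value is hypothetical; the awk field assumes the usual ip neigh output format):

# On director1: learn director2's MAC from the neighbour (ARP) cache
VIP=192.168.1.250                      # hypothetical VIP, substitute your own
VPORT=3306
ping -c 1 192.168.1.213 > /dev/null    # make sure a neighbour entry exists
MAC_Director2=$(ip neigh show 192.168.1.213 | awk '{print $5}')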

The iptables configuration is as follows:

Each machine serves as the LVS-DR director and as a database backend at the same time, so we must make sure that any given packet is load-balanced by only one director.

In this case, we set on director1 ($MAC_Director2 is the source MAC address, as seen on director1, of packets sent from director2):

iptables -t mangle -I PREROUTING -d $VIP -p tcp -m tcp --dport $VPORT -m mac  ! --mac-source $MAC_Director2 -j MARK --set-mark 0x3

On the backup keepalived server, director2, we set:

iptables -t mangle -I PREROUTING -d $VIP -p tcp -m tcp --dport $VPORT -m mac  ! --mac-source $MAC_Director1 -j MARK --set-mark 0x4
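On either machine, you can confirm that client traffic is being marked (and that packets arriving from the peer director are not) by watching the packet counters in the mangle table, for example:

# A growing "pkts" counter on the MARK rule means client packets are being tagged
iptables -t mangle -L PREROUTING -n -v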

Then reference the two marks in the keepalived configuration:

Director1: virtual_server fwmark 3 { ... }
Director2: virtual_server fwmark 4 { ... }

In fact, the complete mark-based keepalived configuration is as follows:

virtual_server fwmark 4 {
    delay_loop 10
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.1.213 3306 {
        weight 1
        MISC_CHECK {
            misc_path "/etc/keepalived/check_slave.pl 192.168.1.213"
            misc_dynamic
        }
    }
    real_server 192.168.1.233 3306 {
        weight 1
        MISC_CHECK {
            misc_path "/etc/keepalived/check_slave.pl 192.168.1.233"
            misc_dynamic
        }
    }
}
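The block above is director2's (fwmark 4). Following the rules earlier, director1's configuration differs only in the virtual_server line, roughly:

virtual_server fwmark 3 {
    # same body as above: delay_loop 10, lb_algo rr, lb_kind DR, protocol TCP,
    # and the same two real_server blocks with their MISC_CHECKs
}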

References: http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.cluster_friendly_applications.html
