Server Load Balancer Cluster Solution (I): LVS-DR


LVS's working mechanism and scheduling algorithms were covered in my previous articles.

The working mechanism of LVS is similar to that of iptables: part of it works in user space (ipvsadm) and part in kernel space (ipvs).

User space: used to define the load-balancing services and policies, for example persistent balancing of connections to TCP port 80, or balancing of connections to TCP port 3306 -- whatever rules your needs call for.

Kernel space: used to forward packets according to the rules defined in user space.

In a load-balancer architecture, all roles share an extra IP address, the VIP. When a client sends a request to the VIP, the request must reach the Director first rather than a backend realserver directly; otherwise the load-balancing architecture is meaningless. Therefore, when an ARP request for the VIP arrives, only the Director may answer with its MAC address to the client or to the routing device on the network. The Director then forwards the request to a backend realserver according to the defined scheduling algorithm and the user-defined load rules. If a backend realserver answered instead while the client was establishing a connection to the VIP, the client would record a mapping between the VIP and that realserver's MAC in its ARP table and use it for all future communication. From that moment the client effectively sees only that one realserver and is unaware of the others. To avoid this situation, there are four solutions, depending on the actual circumstances:

1. Configure the realservers not to respond to ARP requests for the VIP;
2. Hide the VIP on the realservers so that ARP requests for it never reach them;
3. Use a transparent proxy or an fwmark (firewall mark);
4. Disable ARP on the realservers' interfaces.

After Linux kernel 2.4.26, two new flags for tuning the ARP stack were introduced:

arp_announce: the restriction level applied when announcing the local IP address in ARP messages. Values:
0 (default): use any local address for the announcement;
1: when the machine has multiple IP addresses, prefer an address on the same subnet as the target;
2: always use an address on the same subnet for the announcement.

arp_ignore: the modes used to respond to ARP broadcast requests from others. Values:
0 (default): reply for any local IP address, on any interface;
1: reply only if the target IP address is configured on the interface that received the request.
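On an LVS-DR realserver these two flags are normally combined as follows. This is a config sketch (requires root) using the standard sysctl keys; it is equivalent to the echo-into-/proc lines used in the realserver script later in this article:

```
# Suppress ARP for the VIP on a realserver: reply only when the target IP
# sits on the receiving interface, and always announce with an address
# from the same subnet.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
```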


With the above understood, the LVS-DR model can be set up.

Environment:
System: RHEL5
DIP: 172.23.136.139
RS1: 172.23.136.149
RS2: 172.23.136.148
VIP: 172.23.136.150


(1) Director configuration

1. In the early days, using ipvsadm for load balancing required recompiling the kernel. Current Red Hat kernels, however, ship with ipvs support built in.

# Check whether the kernel supports ipvs
modprobe ip_vs
cat /proc/net/ip_vs

2. Install ipvsadm:

yum install ipvsadm -y

and start it:

service ipvsadm start

The first startup reports a "No such file or directory" error. Like iptables, ipvsadm lets you save the configured rules to a file, which is re-read when the system or the service restarts so that the rules stay permanently in effect. On the first start no rules have been defined yet, so the rule file does not exist and the service reports an error when it tries to read it.

3. Create the following director script and run it with the start argument:
 
 
    #!/bin/bash
    # LVS script for VS/DR
    . /etc/rc.d/init.d/functions
    VIP=172.23.136.150
    RIP1=172.23.136.149
    RIP2=172.23.136.148
    PORT=80

    case "$1" in
    start)

      /sbin/ifconfig eth0:1 $VIP broadcast $VIP netmask 255.255.255.255 up
      /sbin/route add -host $VIP dev eth0:1

    # Since this is the Director we must be able to forward packets
      echo 1 > /proc/sys/net/ipv4/ip_forward

    # Clear all iptables rules.
      /sbin/iptables -F

    # Reset iptables counters.
      /sbin/iptables -Z

    # Clear all ipvsadm rules/services.
      /sbin/ipvsadm -C

    # Add an IP virtual service for VIP 172.23.136.150 port 80.
    # We use weighted least-connection (wlc), a weighted, dynamic
    # scheduling method suitable for production.
      /sbin/ipvsadm -A -t $VIP:80 -s wlc

    # Now direct packets for this VIP to
    # the real server IP (RIP) inside the cluster
      /sbin/ipvsadm -a -t $VIP:80 -r $RIP1 -g -w 1
      /sbin/ipvsadm -a -t $VIP:80 -r $RIP2 -g -w 2

      /bin/touch /var/lock/subsys/ipvsadm &> /dev/null
    ;;

    stop)
    # Stop forwarding packets
      echo 0 > /proc/sys/net/ipv4/ip_forward

    # Reset ipvsadm
      /sbin/ipvsadm -C

    # Bring down the VIP interface
      /sbin/ifconfig eth0:1 down
      /sbin/route del $VIP

      /bin/rm -f /var/lock/subsys/ipvsadm

      echo "ipvs is stopped..."
    ;;

    status)
      if [ ! -e /var/lock/subsys/ipvsadm ]; then
        echo "ipvsadm is stopped ..."
      else
        echo "ipvs is running ..."
        ipvsadm -L -n
      fi
    ;;
    *)
      echo "Usage: $0 {start|stop|status}"
    ;;
    esac


(2) RealServer configuration
Run the following script on each realserver:

 
 
    #!/bin/bash
    # Script to start LVS DR real server.
    # description: LVS DR real server
    . /etc/rc.d/init.d/functions

    VIP=172.23.136.150
    host=`/bin/hostname`

    case "$1" in
    start)
            # Start LVS-DR real server on this machine.
            /sbin/ifconfig lo down
            /sbin/ifconfig lo up
            echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
            echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
            echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
            echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

            /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
            /sbin/route add -host $VIP dev lo:0

    ;;
    stop)

            # Stop LVS-DR real server loopback device(s).
            /sbin/ifconfig lo:0 down
            echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
            echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
            echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
            echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce

    ;;
    status)

            # Status of LVS-DR real server.
            islothere=`/sbin/ifconfig lo:0 | grep $VIP`
            isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
            if [ ! "$islothere" -o ! "$isrothere" ]; then
                # Either the route or the lo:0 device
                # was not found.
                echo "LVS-DR real server Stopped."
            else
                echo "LVS-DR real server Running."
            fi
    ;;
    *)
            # Invalid entry.
            echo "$0: Usage: $0 {start|status|stop}"
            exit 1
    ;;
    esac

 

 
(3) Test

Access 172.23.136.150 and verify that load balancing is working properly.

 

I have now configured phpMyAdmin on both backend realservers, with the same database account and password on each. If you access 172.23.136.150/phpmyadmin at this point, you will find that you can never stay logged in, because the ipvsadm rules were defined without persistence.
The Director therefore also needs "connection tracking": requests from the same client must always be sent to the realserver that was assigned on its first request. ipvs maintains an internal hash table that records, for each client, the realserver chosen for the first request together with a timeout. If the timeout expires while the connection is still open, the defined persistence time is automatically extended. Each subsequent request is looked up in this table and dispatched to the realserver recorded in the entry, which keeps the whole session intact.
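The table lookup described above can be sketched in a few lines of shell. This is a conceptual illustration only, not the kernel implementation; the alternating choice for new clients and all variable names are made up for the sketch:

```shell
#!/bin/sh
# Conceptual sketch of ipvs persistence (NOT the kernel implementation):
# the first request from a client IP is scheduled to a realserver and the
# choice is remembered; later requests reuse the remembered entry.
RS1=172.23.136.149
RS2=172.23.136.148
table=""                       # "client=realserver" entries, space separated
next=$RS1                      # realserver to hand to the next new client
schedule() {
  client=$1
  for entry in $table; do      # look the client up in the table first
    case $entry in
      "$client="*) picked=${entry#*=}; return ;;
    esac
  done
  picked=$next                 # new client: schedule it and remember it
  table="$table $client=$picked"
  if [ "$next" = "$RS1" ]; then next=$RS2; else next=$RS1; fi
}
schedule 172.23.136.93;  echo "$picked"   # new client  -> 172.23.136.149
schedule 172.23.136.100; echo "$picked"   # new client  -> 172.23.136.148
schedule 172.23.136.93;  echo "$picked"   # same client -> same realserver
```

The real table also carries the persistence timeout (the `-p 360` below); this sketch omits expiry entirely.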

The persistent connection types of LVS are as follows:

1. PCC (persistent client connection): the rule is defined with port 0, which stands for all ports, so every request a client sends to the VIP, on any port, is dispatched to the same backend realserver.
2. PPC (persistent port connection): persistence applies only to the specified VIP port forwarded to the backend realservers.
3. Netfilter marked packets: persistence based on a firewall mark, mainly used to tie several ports or protocols together. For example, on an e-commerce site a product is selected over port 80 and the checkout then jumps to port 443.
4. Persistent FTP connections: FTP's active and passive modes require special handling; this type is rarely used.

 

(1) PCC
Persistence of any type only needs to be configured on the Director; the realservers need no changes, because only the Director's request-dispatching behavior is involved.

ipvsadm -C
ipvsadm -A -t 172.23.136.150:0 -p 360
ipvsadm -a -t 172.23.136.150:0 -r 172.23.136.148
ipvsadm -a -t 172.23.136.150:0 -r 172.23.136.149

Run the above on top of the earlier configuration.

(2) PPC
ipvsadm -C
ipvsadm -A -t 172.23.136.150:80 -p 360
ipvsadm -a -t 172.23.136.150:80 -r 172.23.136.148
ipvsadm -a -t 172.23.136.150:80 -r 172.23.136.149

(3) Persistent firewall marking
Persistent firewall marks must be used in combination with iptables.

iptables -t mangle -A PREROUTING -i eth0 -p tcp -d 172.23.136.150 -m multiport --dports 80,443 -j MARK --set-mark 1

### Tag packets arriving on eth0 for 172.23.136.150 ports 80 and 443 with firewall mark 1

ipvsadm -C
ipvsadm -A -f 1 -p 360
ipvsadm -a -f 1 -r 172.23.136.148
ipvsadm -a -f 1 -r 172.23.136.149
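The effect of the MARK rule can be read as a simple port-to-mark mapping. The helper below is hypothetical, written only to illustrate the grouping: because 80 and 443 share mark 1, ipvs sees them as one service (`-f 1`) and persistence spans both ports.

```shell
#!/bin/sh
# Hypothetical illustration of the iptables MARK rule above: ports 80 and
# 443 map to firewall mark 1, so ipvs schedules them as a single service.
mark_for_port() {
  case "$1" in
    80|443) echo 1 ;;   # grouped under the one persistent ipvs service
    *)      echo 0 ;;   # unmarked traffic is not matched by the -f 1 rule
  esac
}
mark_for_port 80    # -> 1
mark_for_port 443   # -> 1
mark_for_port 22    # -> 0
```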

(4) Persistent FTP connection
First, understand how FTP works: port 21 is the control port and port 20 the data port (active mode).
In passive mode the server picks a random port between 1024 and 65000 as the data port, so we need to limit the range of passive-mode response ports.
Configure the FTP software you use, vsftpd or pure-ftpd, to set the passive port range, then tag the ports with iptables.
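With vsftpd, for example, the passive range can be pinned in /etc/vsftpd/vsftpd.conf. The upper bound here matches the 12000 in the iptables rule below; the lower bound 10000 is an assumed example value, since the source does not state one:

```
# /etc/vsftpd/vsftpd.conf -- passive-mode data port range
# (10000 is an assumed example lower bound)
pasv_enable=YES
pasv_min_port=10000
pasv_max_port=12000
```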
iptables -t mangle -A PREROUTING -i eth0 -p tcp -d 192.168.2.100 --dport 21 -j MARK --set-mark 1
iptables -t mangle -A PREROUTING -i eth0 -p tcp -d 192.168.2.100 --dport <pasv-min>:12000 -j MARK --set-mark 1
### <pasv-min> is the lower bound of the passive port range configured in the FTP server
ipvsadm -C
ipvsadm -A -f 1 -p 360
ipvsadm -a -f 1 -r 172.23.136.148
ipvsadm -a -f 1 -r 172.23.136.149

Summary:
Access 172.23.136.150/phpmyadmin again. Now you can log on normally, because the connection is persistently dispatched to the same backend realserver and the whole session stays intact.

 

Check the connection allocation status. You can find that requests from the same client are directed to the same realserver on the backend.

ipvsadm -Lcn
IPVS connection entries
pro expire state       source                virtual               destination
TCP 0:55   FIN_WAIT    172.23.136.93:56944   172.23.136.150:80     172.23.136.149:80
TCP 0:56   FIN_WAIT    172.23.136.93:56947   172.23.136.150:80     172.23.136.149:80
TCP 0:39   FIN_WAIT    172.23.136.93:56928   172.23.136.150:80     172.23.136.149:80
TCP 0:56   FIN_WAIT    172.23.136.93:56946   172.23.136.150:80     172.23.136.149:80
TCP 0:39   FIN_WAIT    172.23.136.93:56930   172.23.136.150:80     172.23.136.149:80
TCP 0:55   FIN_WAIT    172.23.136.93:56943   172.23.136.150:80     172.23.136.149:80
TCP 0:55   FIN_WAIT    172.23.136.93:56945   172.23.136.150:80     172.23.136.149:80
TCP 0:47   FIN_WAIT    172.23.136.93:56933   172.23.136.150:80     172.23.136.149:80
TCP 0:39   FIN_WAIT    172.23.136.93:56931   172.23.136.150:80     172.23.136.149:80
TCP 14:57  ESTABLISHED 172.23.136.93:56948   172.23.136.150:80     172.23.136.149:80
TCP 5:50   NONE        172.23.136.93:0       172.23.136.150:80     172.23.136.149:80
TCP 0:39   FIN_WAIT    172.23.136.93:56932   172.23.136.150:80     172.23.136.149:80

 

 

 

 

This article is from the My --- Dream blog. Please keep this link: http://grass51.blog.51cto.com/4356355/982583
