"Turn" Lvs/nginx how to deal with session problems

Source: Internet
Author: User

Original address: http://network.51cto.com/art/201005/200279.htm

Sessions are maintained by setting the persistence value on the load balancer.

The "51cto.com exclusive feature" Business system architecture is:

Option 1: Nginx (master) + Keepalived + Nginx (backup) + 3-node web cluster + MySQL (master-slave) + EMC CLARiiON CX4 storage
Option 2: LVS (master) + Keepalived + LVS (backup) + 3-node web cluster + MySQL (master-slave) + EMC CLARiiON CX4 storage

The operating system is 64-bit RHEL 5.4/CentOS 5.4; the servers are HP 360 G6 and HP 580 G5 machines; and the front-most firewall of the business system is a USG5000 + WAF-T3-500 (defending against DDoS, phishing, injection attacks, etc.).

In option 1, where Nginx is the load balancer, ip_hash is used in place of the default round-robin (rr) mode: requests from a given client IP are hashed to the same back-end web server, which avoids losing the session and so solves the session problem. However, the ip_hash directive cannot guarantee an even load across the back ends; some servers receive more requests and some fewer, so true load balancing is lost. Our solution is to write users' login session data to the back-end MySQL database, which we also implemented in the CMS system with good results. Later I proposed a compromise: if the Nginx load balancer's concurrent connection count (the Active connections figure on its NginxStatus page) exceeds 2000, write sessions to the MySQL database; at lower concurrency, ip_hash works quite well.
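
For reference, the Active connections figure comes from Nginx's stub_status page. A minimal sketch of exposing it (assuming ngx_http_stub_status_module is compiled in; the port and path are illustrative):

    server {
        listen 8080;
        location /nginx_status {
            stub_status on;      # prints Active connections and request counters
            access_log  off;
            allow 127.0.0.1;     # keep the counters private to local monitoring
            deny  all;
        }
    }

A monitoring script can then poll it, e.g. curl -s http://127.0.0.1:8080/nginx_status, and switch the session strategy when the first line reports more than 2000 active connections.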

Additionally, testing showed that when the ip_hash directive is added to an upstream block, requests are not automatically redirected away from a back-end server once it goes down, so the following form is recommended (max_fails=0 disables Nginx's accounting of failed attempts for that server):

    upstream njzq.com {
        ip_hash;
        server 172.16.94.216:9000 max_fails=0;
        server 172.16.94.217:9000 max_fails=0;
        server 172.16.94.218:9000 max_fails=0;
    }

In option 2, LVS uses ipvsadm's -p (persistence) option; the persistence hold time is in seconds, and I generally set it to 120. This option is useful for dynamic websites: when a user logs in to the site remotely, session persistence lets the user's requests be forwarded to the same application server. On the first visit, the load balancer sends the user's request to some real server, which returns the login page; that first access is complete. The user then fills in the username and password and submits, and here the problem can appear: the login fails, because without session persistence the load balancer may forward this second request to a different server. So after the setting, is the client permanently pinned to the same real server, or does it switch to another real physical server after 120 seconds? I ran the following experiment: a single LVS director at 192.168.1.102 with VIP 192.168.1.188, and two back-end web servers, 192.168.1.103 and 192.168.1.104.

On the LVS director, run the script below; the two real servers run a corresponding script that binds the VIP 192.168.1.188. The director and the real physical servers use lvs_dr.sh and real.sh respectively.

    #!/bin/bash
    # website director VIP
    SNS_VIP=192.168.1.188
    SNS_RIP1=192.168.1.103
    SNS_RIP2=192.168.1.104
    . /etc/rc.d/init.d/functions
    logger $0 called with $1
    case "$1" in
    start)
        # bind the VIP on eth0:0 and tune IPVS connection timeouts
        /sbin/ipvsadm --set 30 5 60
        /sbin/ifconfig eth0:0 $SNS_VIP broadcast $SNS_VIP netmask 255.255.255.255 up
        /sbin/route add -host $SNS_VIP dev eth0:0
        # virtual service with wlc scheduling and 120 s persistence (-p 120)
        /sbin/ipvsadm -A -t $SNS_VIP:80 -s wlc -p 120
        /sbin/ipvsadm -a -t $SNS_VIP:80 -r $SNS_RIP1:80 -g -w 1
        /sbin/ipvsadm -a -t $SNS_VIP:80 -r $SNS_RIP2:80 -g -w 1
        touch /var/lock/subsys/ipvsadm >/dev/null 2>&1
        ;;
    stop)
        /sbin/ipvsadm -C
        /sbin/ipvsadm -Z
        ifconfig eth0:0 down
        route del $SNS_VIP
        rm -rf /var/lock/subsys/ipvsadm >/dev/null 2>&1
        echo "ipvsadm stopped"
        ;;
    status)
        if [ ! -e /var/lock/subsys/ipvsadm ]; then
            echo "ipvsadm stopped"
            exit 1
        else
            echo "ipvsadm OK"
        fi
        ;;
    *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
    esac
    exit 0
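
With the script in place, a quick smoke test on the director might look like this (a sketch; paths assume the script is in the current directory):

    ./lvs_dr.sh start
    /sbin/ipvsadm -L -n    # the virtual service should list both real servers and "persistent 120"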

The two real (physical) web servers run the real.sh script:

    #!/bin/bash
    SNS_VIP=192.168.1.188
    . /etc/rc.d/init.d/functions
    case "$1" in
    start)
        # bind the VIP on lo:0 and suppress ARP replies for it (LVS-DR)
        /sbin/ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP up
        /sbin/route add -host $SNS_VIP dev lo:0
        echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
        sysctl -p >/dev/null 2>&1
        echo "RealServer started OK"
        ;;
    stop)
        ifconfig lo:0 down
        route del $SNS_VIP >/dev/null 2>&1
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
        echo "RealServer stopped"
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
    esac
    exit 0
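
To confirm that only the director answers ARP for the VIP (the point of the arp_ignore/arp_announce settings above), a quick check from another host on the same LAN (a sketch; the interface name eth0 is illustrative):

    arping -I eth0 -c 3 192.168.1.188   # every reply should carry the director's MAC, never a real server's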

It can be observed that when client 192.168.1.100 initiates its first connection request, the LVS load balancer assigns it to the real physical server 192.168.1.104 behind it, and after the three-way handshake the connection state is ESTABLISHED. For quite some time after the TCP connection is terminated, the entry for the real web server remains in FIN_WAIT state, and any new connection that 192.168.1.100 initiates during this period is still directed to 192.168.1.104.
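
This can be watched on the director itself; the IPVS connection table shows both the live connections and the persistence template entries that pin a client to one real server (a minimal sketch):

    watch -n 1 '/sbin/ipvsadm -L -c -n'   # -c lists connection entries with their state and expiry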

Note: "dynamic site" here means PHP logins. If the back end is a cache cluster, you can try removing this persistence option; however, our CDNs all use F5 hardware, so for the moment I have not had a chance to test this.

During project implementation, my colleagues and I are in the habit of dividing the whole system into three layers: the load-balancing layer, the web layer, and the database layer. I have found that everyone likes to speak in terms of "clusters", which I find muddles the concepts; although I know they are referring to the LVS part, I prefer the more precise term "load balancing". A load balancer, such as the Nginx/LVS setups mentioned above, distributes client requests across the back-end server cluster (Apache, Tomcat, Squid, and so on) according to a chosen algorithm. High availability, in contrast, means failover of the front-most load balancer itself: replacing a failed machine with the standby in a very short time (<1 s). The mature highly available load-balancing architectures today are LVS+Keepalived and Nginx+Keepalived (I use Heartbeat mainly in the intranet development environment, not in production). If you insist on calling it a cluster, I suggest saying "Linux cluster", so that everyone knows an LVS environment is meant. If anything above is misstated or misconfigured, please inform the 51CTO editors or the author, Fuqin Cooking Wine ([email protected]), and we will correct it immediately so as not to mislead readers.

"51cto.com exclusive feature, non-authorized declined reprint, the cooperation media reprint please indicate the original author and source!" 】

"Turn" Lvs/nginx how to deal with session problems

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.