Linux load balancer configuration: building an LVS + keepalived load balancer with Nginx forwarding on CentOS 7


I recently wanted to set up load balancing on virtual machines, but the information online is fragmented and often not detailed enough for newcomers. Having finally gotten it working, I am writing down the specific steps so you can read and study them.

This experiment requires Nginx. If it is not installed, please refer to:

Linux installation of Nginx: http://www.cnblogs.com/aspirant/p/6714548.html

(1) Our system is CentOS 7. We need four machines: two director machines (master and backup) and two real servers (realserver1 and realserver2). The system architecture diagram is as follows:



- System used: CentOS 7
- Real web server (RS1): 192.168.137.5
- Real web server (RS2): 192.168.137.6
- Master load balancer server: 192.168.137.101
- Backup load balancer server: 192.168.137.102
- Virtual IP exposed to the outside: 192.168.137.100

As the architecture shows, from the user's point of view everything goes through 192.168.137.100; however the system is designed internally, the availability of this IP must be guaranteed.

From an architect's perspective:

Users access 192.168.137.100, the VIP (virtual IP); they do not care how the machines behind it coordinate. We use 192.168.137.101 as the master machine and configure HA (high availability) with keepalived: if the master distribution machine goes down, keepalived automatically fails over to the backup machine. This is the HA configuration, which ensures that even if the master crashes, forwarding is not affected.

The master forwards user requests to the real machines, RS1 and RS2, according to a rotation mechanism; if RS1 goes down, the master automatically forwards everything to RS2.

We configure keepalived on both load-balancing machines to make the distribution layer highly available.

The IP configuration of the system is as follows:

Server name       IP address         Virtual device name   Virtual IP
Director Server   192.168.137.101    ens33:0               192.168.137.100
Backup Server     192.168.137.102    ens33:0               192.168.137.100
Real Server1      192.168.137.5      lo:0                  192.168.137.100
Real Server2      192.168.137.6      lo:0                  192.168.137.100

2. Installation and configuration of keepalived

keepalived must be installed and configured on both load balancer servers.

2.1 Installing Keepalived
$ yum -y install keepalived
2.2 Configuring Keepalived
$ vim /etc/keepalived/keepalived.conf

The configuration information is as follows

! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]              # alert email addresses; you can set several, one per line
     [email protected]              # requires the local sendmail service to be enabled
     [email protected]
   }
   notification_email_from [email protected]   # sender address for alert mail
   smtp_server 127.0.0.1           # SMTP server address
   smtp_connect_timeout 30         # SMTP connection timeout
   router_id LVS_DEVEL             # identifier of this keepalived server; shown in the mail subject
}

vrrp_instance VI_1 {
    state MASTER                   # keepalived role: MASTER on the primary server, BACKUP on the standby
    interface ens33                # interface used for HA monitoring
    virtual_router_id 51           # virtual router ID, a number uniquely identifying this VRRP instance;
                                   # must be identical on MASTER and BACKUP of the same vrrp_instance
    priority 100                   # the higher the number, the higher the priority; within the same
                                   # vrrp_instance the MASTER priority must be greater than the BACKUP's
    advert_int 1                   # interval of the sync checks between MASTER and BACKUP, in seconds
    authentication {               # authentication type and password
        auth_type PASS             # authentication type: PASS or AH
        auth_pass 1111             # password; MASTER and BACKUP must use the same one to communicate
    }
    virtual_ipaddress {            # virtual IP addresses; you can set several, one per line
        192.168.137.100
    }
}

virtual_server 192.168.137.100 80 {   # virtual server: VIP and service port, separated by a space
    delay_loop 6                   # health-check interval, in seconds
    lb_algo rr                     # scheduling algorithm; rr is round robin (polling)
    lb_kind DR                     # LVS forwarding mode: NAT, TUN or DR
    nat_mask 255.255.255.0
    persistence_timeout 0          # session persistence time, in seconds. Useful for dynamic pages and
                                   # session sharing in a cluster: a user's requests keep going to the
                                   # same node until the persistence time expires. Note that this is an
                                   # inactivity timeout: if the user does nothing for, say, 50 seconds,
                                   # the next request may be dispatched to another node, but a user who
                                   # keeps working on the page is not limited to 50 seconds.
    protocol TCP                   # forwarding protocol: TCP or UDP

    real_server 192.168.137.5 80 {    # service node 1: real server IP and port, separated by a space
        weight 1                   # node weight; the larger the number, the higher the weight. Give
                                   # high-performance servers higher weights and weaker servers lower
                                   # ones to distribute the load sensibly.
        TCP_CHECK {                # realserver health-check settings
            connect_timeout 3      # 3-second no-response timeout
            nb_get_retry 3         # number of retries
            delay_before_retry 3   # interval between retries, in seconds
            connect_port 8066
        }
    }
    real_server 192.168.137.6 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 8066
        }
    }
}

  

Things to watch out for:
    • interface ens33: ens33 is the name of my network card. To find yours, look at /etc/sysconfig/network-scripts/ifcfg-e (press Tab to complete); the name may differ between systems.

You can also run ifconfig; typically there are two interfaces, ens33 and the loopback device lo. On the distribution machines (master and backup) we add an ens33:0 alias, while on the real machines (realserver1 and realserver2) we add an lo:0 alias; you will see this difference below.
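A quick way to confirm the interface name before editing the config (a sketch; `ip` comes with the iproute2 package shipped with CentOS 7):

```shell
# List all interfaces; look for the physical NIC name (ens33 here, but it varies)
ip addr
# The per-interface config files are named after the NIC:
ls /etc/sysconfig/network-scripts/ 2>/dev/null | grep '^ifcfg-' || true
```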

    • persistence_timeout 0: connections from the same IP are forwarded to the same realserver within the persistence window, rather than strictly polled. The default is 50 s, so when testing the load balancer it is best to set this to 0 for normal polling, so you can easily observe the rotation.
    • TCP_CHECK {: note that there is a space between TCP_CHECK and {. If you forget it, some RS may not show up later when you inspect with ipvsadm.
    • You can simply copy the configuration above onto your machine, but don't forget to fix the line endings: since we copied it from a Windows machine, and Windows line endings (CRLF) differ from Linux ones (LF), convert the file with dos2unix:
    • (a) yum install dos2unix
    • (b) dos2unix <filename>
    • After the conversion the file is ready to use.
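To see why the conversion matters, here is a tiny demonstration of what dos2unix does (a sketch using `tr`, which strips the carriage returns the same way):

```shell
# A script saved on Windows ends every line with \r\n; the stray \r confuses the shell
printf 'echo hello\r\n' > crlf.sh
tr -d '\r' < crlf.sh > lf.sh    # equivalent of: dos2unix crlf.sh
sh lf.sh                        # now runs cleanly and prints: hello
```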
The keepalived configuration on the standby server is similar: just change state MASTER to BACKUP and set the priority lower than the master's, e.g. 90;

Run the following command on both keepalived nodes to enable IP forwarding:

echo 1 > /proc/sys/net/ipv4/ip_forward
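Writing to /proc takes effect immediately but does not survive a reboot; you can check the current value at any time:

```shell
# 1 means forwarding is on, 0 means off
cat /proc/sys/net/ipv4/ip_forward
```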

2.3 Bind the VIP to lo:0 on both RS and suppress ARP broadcasts

Install Nginx on both RS machines:

Linux installation of Nginx: http://www.cnblogs.com/aspirant/p/6714548.html

Then edit the page Nginx serves; mine is /home/zkpk/nginx-1.8.0/html/index.html.

Modify index.html on RS1 so tests are easy to read, for example make it say "I am RS1";

modify index.html on RS2 the same way, for example "I am RS2";
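For example, the test page can be written like this (the path is the Nginx html directory used in this article; adjust it to your install):

```shell
# On RS1 (use "I am RS2" on RS2)
HTML_DIR=/home/zkpk/nginx-1.8.0/html
mkdir -p "$HTML_DIR"                        # make sure the directory exists
echo "I am RS1" > "$HTML_DIR/index.html"
cat "$HTML_DIR/index.html"                  # prints: I am RS1
```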

Create the following script, realserver.sh, on both RS:

#!/bin/bash
# description: config realserver
VIP=192.168.137.100
. /etc/rc.d/init.d/functions

case "$1" in
start)
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    /sbin/route add -host $VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "realserver start OK"
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP > /dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "realserver stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0

Execute the script on both RS:

sh realserver.sh start
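After the script runs you can verify on each RS that the VIP and the ARP settings are in place (a sketch assuming iproute2; the values shown are what `start` sets):

```shell
ip addr show lo                               # the VIP 192.168.137.100 should appear under lo
cat /proc/sys/net/ipv4/conf/all/arp_ignore    # expected on an RS: 1
cat /proc/sys/net/ipv4/conf/all/arp_announce  # expected on an RS: 2
```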

2.4 Start keepalived

$ service keepalived start

 
Then use the ipvsadm -L command to check whether the VIP has been successfully mapped to the two RS. If something looks wrong, check /var/log/messages for the cause. Note that instead of 192.168.137.100 the output may show a hostname such as localhost; that is normal.
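For reference, a healthy mapping on the master looks roughly like this (a sketch of typical `ipvsadm -L -n` output; your counters will differ):

```
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.137.100:80 rr
  -> 192.168.137.5:80             Route   1      0          0
  -> 192.168.137.6:80             Route   1      0          0
```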

Next you can test the setup: 2.4.1 tests load balancing, and 2.4.2 tests keepalived's health monitoring.

Stop Nginx on RS1; on the master load balancer you can then see that 192.168.137.5 has been removed from the VIP mapping.
(1) Stop RS1's Nginx
Kill the process with pkill -9 nginx

To start Nginx again, run /usr/local/nginx/sbin/nginx;

then ps -ef | grep nginx shows whether Nginx started successfully.

(2) Experiment 1

Open http://192.168.137.100 in a browser to check that everything started successfully; the Nginx page should be displayed.

Experiment 2

Manually stop Nginx on the 192.168.137.5 node (service nginx stop), then access http://192.168.137.100 from the client: the result is still normal, and no request fails.

Experiment 3

Manually restart Nginx on the 192.168.137.5 node (service nginx start), then access http://192.168.137.100 from the client: the result is normal, and requests alternate between the .5 and .6 nodes according to the rr scheduling algorithm.
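The rr algorithm set by lb_algo simply hands successive requests to the real servers in rotation; a toy sketch of the idea in shell (a hypothetical illustration, not the actual IPVS code):

```shell
servers="192.168.137.5 192.168.137.6"   # the two real servers
i=0
for request in req1 req2 req3 req4; do
    set -- $servers                     # load the server list into $1 $2 ...
    shift $(( i % $# ))                 # rotate: skip i mod N entries
    echo "$request -> $1"
    i=$(( i + 1 ))
done
# req1 -> 192.168.137.5
# req2 -> 192.168.137.6
# req3 -> 192.168.137.5
# req4 -> 192.168.137.6
```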

Experiment 4

Test keepalived's HA feature. First run ip addr on the master: you can see the VIP bound on the master node. Then run service keepalived stop on the master: the VIP leaves the master, and running ip addr on the slave shows that the VIP has correctly floated to the slave node. Client access to http://192.168.137.100 still works, which verifies keepalived's HA feature.



Resources:

Configuration by clause:

http://ixdba.blog.51cto.com/2895551/554799

http://blog.csdn.net/u012852986/article/details/52412174

http://www.jb51.net/article/38368.htm

http://www.cnblogs.com/llhua/p/4195330.html

Specific deployment:

http://www.cnblogs.com/liwei0526vip/p/6370103.html

Nginx Start Stop:

http://www.cnblogs.com/codingcloud/p/5095066.html

