Build an LVS Load Balancing environment (keepalived + LVS + nginx)



LVS introduction:


An LVS cluster can run in three configuration modes: DR, TUN, and NAT. It can load-balance WWW, FTP, and mail services. Below we build a load-balancing instance for a WWW service using a DR-based LVS cluster configuration.


Director Server: the core server of LVS. It acts much like a router, holding the routing table that implements the LVS function. User requests are distributed through this table to the application servers (real servers) in the server-group layer. The Director also monitors the real servers: when a real server becomes unavailable it is removed from the LVS routing table, and when it recovers it is added back.


Real Server: one or more web, mail, FTP, DNS, or video servers. The real servers are connected over a LAN or a WAN. In practice, the Director can also double as a real server.


Three Load Balancing Methods of LVS:


NAT: the scheduler rewrites the request's destination address and port to those of the selected real server and forwards the request to it. When the real server returns data, the reply passes back through the Director, which rewrites the source address and port of the packets to the virtual IP address and port before sending the data on to the user, completing the load-scheduling process.

Disadvantage: high load on the scheduler.


TUN: in IP tunneling mode, the scheduler forwards requests to the real servers through an IP tunnel, while the real servers respond to users directly without passing back through the scheduler. The Director and the real servers can be on different networks. In TUN mode the scheduler only processes the incoming request packets, which increases throughput.

Disadvantage: the overhead of IP tunneling.


DR: direct routing is used to implement the virtual server. The Director rewrites the MAC address of the request and sends it to the real server, and the real server responds to the client directly, eliminating the need for a tunnel. Of the three modes, DR performs best.

Disadvantage: the Director and the real servers must be on the same physical network segment.


LVS load-scheduling algorithms:


LVS dynamically selects a real server to respond based on real-server load. IPVS implements eight load-scheduling algorithms; four of them are described here:


RR, round-robin scheduling:

Requests are distributed to the real servers evenly, regardless of load.


WRR, weighted round-robin scheduling:

Real servers are assigned high or low weights, and requests are distributed to them in proportion to those weights.


LC, least-connection scheduling:

Requests are dynamically assigned to the real server with the fewest established connections.


WLC, weighted least-connection scheduling:

Weights are set on the real servers dynamically; when assigning new connection requests, the scheduler tries to keep each real server's number of established connections proportional to its weight.
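To make the round-robin idea concrete, here is a minimal, purely illustrative bash sketch of weighted round-robin selection (the server IPs and 2:1 weighting are hypothetical; real IPVS does this inside the kernel, not in shell — a weight of 2 is expressed here simply by listing the heavier server twice):

```shell
#!/bin/bash
# Illustration only: weighted round-robin (wrr) selection simulated in bash.
# Weight 2 for .253 and weight 1 for .254, expressed by repetition in the pool.
pool=(10.2.16.253 10.2.16.253 10.2.16.254)
idx=0
next_server() {
    echo "${pool[$((idx % ${#pool[@]}))]}"
    idx=$((idx + 1))
}
next_server   # 10.2.16.253
next_server   # 10.2.16.253
next_server   # 10.2.16.254
next_server   # 10.2.16.253 (the cycle repeats)
```

Each call walks the pool in order, so over any three consecutive requests .253 receives two and .254 receives one, mirroring the 2:1 weights.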




Environment Introduction:

This example uses three hosts: one Director Server (scheduling server) and two web real servers.


Real IP address of the DS: 10.2.16.250

VIP: 10.2.16.252

Real IP address of RealServer 1: 10.2.16.253

Real IP address of RealServer 2: 10.2.16.254


Note: this example uses the DR mode of LVS, with RR (round-robin) load scheduling.


Install and configure LVS using keepalived


1. Install keepalived


# tar -zxvf keepalived-1.2.13.tar.gz -C ./

# cd keepalived-1.2.13

# ./configure --sysconf=/etc/ --with-kernel-dir=/usr/src/kernels/2.6.32-358.el6.x86_64/

# make && make install

# ln /usr/local/sbin/keepalived /sbin/

 

2. Install LVS


# yum -y install ipvsadm*

 

Enable route forwarding:


# vim /etc/sysctl.conf

net.ipv4.ip_forward = 1

# sysctl -p
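A quick way to confirm the change took effect is to read the live value back from /proc (any Linux host exposes this file):

```shell
# Read the live forwarding flag; after the sysctl change above it should show 1.
cat /proc/sys/net/ipv4/ip_forward
```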

 

3. Configure keepalived and LVS on the scheduling server.


# cat /etc/keepalived.conf


! Configuration file for keepalived

global_defs {
   notification_email {
      [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id lvs_devel
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0               # physical NIC used by LVS
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {          # LVS VIP
        10.2.16.252
    }
}

virtual_server 10.2.16.252 80 {   # VIP and port of the LVS instance that serves clients
    delay_loop 6                  # health-check interval, in seconds
    lb_algo rr                    # load-scheduling algorithm; rr is round robin
    lb_kind DR                    # LVS forwarding mode: NAT, TUN, or DR
    nat_mask 255.255.255.0
    # persistence_timeout 50      # session persistence time in seconds; useful for session stickiness on dynamic pages
    protocol TCP                  # forwarding protocol type

    real_server 10.2.16.253 80 {  # real IP address and port of RealServer 1
        weight 1                  # weight; a larger value means a higher share of requests
        TCP_CHECK {               # real-server health-check section
            connect_timeout 3     # consider the server down after 3 seconds without a response
            nb_get_retry 3        # number of retries
            delay_before_retry 3  # interval between retries, in seconds
        }
    }
    real_server 10.2.16.254 80 {  # service node 2
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
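The TCP_CHECK section is what keepalived uses to probe each real server. As a rough illustration of the same idea, this hypothetical bash helper performs a timeout-bounded TCP connect (keepalived does this natively and in C; this sketch only shows the principle, using bash's /dev/tcp feature and the coreutils `timeout` command):

```shell
#!/bin/bash
# Illustration of a TCP health probe in the spirit of keepalived's TCP_CHECK.
# Usage: tcp_check <host> <port> <timeout_seconds>
tcp_check() {
    if timeout "$3" bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo "up"
    else
        echo "down"
    fi
}
# Port 65000 on localhost is almost certainly closed, so this prints "down".
tcp_check 127.0.0.1 65000 3
```

In keepalived's real logic, three consecutive failures (nb_get_retry, spaced delay_before_retry seconds apart) remove the real server from the virtual-server table; a later success adds it back.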

 

 

4. Configure real_server


Because DR scheduling is used, the real servers reply to clients directly using the LVS VIP. You therefore need to bring the LVS VIP up on the loopback interface (lo) of each real server so that it can communicate with clients.

1. Here we write a script to implement the VIP function:

# cat /etc/init.d/lvsrs


#!/bin/bash
# description: Start Real Server
VIP=10.2.16.252
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo "Start LVS of Real Server"
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    # The next four lines suppress ARP responses for the VIP, so the real
    # servers do not answer ARP broadcasts for it and confuse the network.
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
stop)
    /sbin/ifconfig lo:0 down
    echo "Close LVS Director server"
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
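For reference, the four echo lines in the start branch can also be persisted across reboots in /etc/sysctl.conf; the fragment below restates the same four values in sysctl syntax (an alternative form, not an additional requirement):

```
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```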

 

2. Start the script:


# service lvsrs start

Start LVS of Real Server

3. View the IP address of the lo:0 virtual interface:


# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:A2:C4:9F
          inet addr:10.2.16.253  Bcast:10.2.16.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fea2:c49f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:365834 errors:0 dropped:0 overruns:0 frame:0
          TX packets:43393 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:33998241 (32.4 MiB)  TX bytes:4007256 (3.8 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1482 (1.4 KiB)  TX bytes:1482 (1.4 KiB)

lo:0      Link encap:Local Loopback
          inet addr:10.2.16.252  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
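If you want to check the VIP binding from a script rather than by eye, a small awk filter over this kind of output works. This is a hypothetical helper, demonstrated against a saved sample of the lo:0 block rather than the live interface:

```shell
#!/bin/bash
# Illustration: extract the address bound on lo:0 from saved ifconfig-style output.
sample='lo:0      Link encap:Local Loopback
          inet addr:10.2.16.252  Mask:255.255.255.255'
echo "$sample" | awk '/inet addr/ { sub(/^addr:/, "", $2); print $2 }'
# prints: 10.2.16.252
```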


4. Ensure normal nginx access

# netstat -anptul

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address      Foreign Address    State    PID/Program name
tcp        0      0 0.0.0.0:80         0.0.0.0:*          LISTEN   1024/nginx


5. Perform the same four steps on real_server2.


6. Start keepalived on the Director:


# service keepalived start

Starting keepalived: [ OK ]


Check whether the keepalived startup log is normal:

# tail -f /var/log/messages

May 24 10:06:57 proxy Keepalived[2767]: Starting Keepalived v1.2.13 (05/24,2014)
May 24 10:06:57 proxy Keepalived[2768]: Starting Healthcheck child process, pid=2770
May 24 10:06:57 proxy Keepalived[2768]: Starting VRRP child process, pid=2771
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP 10.2.16.250 added
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Netlink reflector reports IP 10.2.16.250 added
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP fe80::20c:29ff:fee6:ce1a added
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Registering Kernel netlink reflector
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Registering Kernel netlink command channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Netlink reflector reports IP fe80::20c:29ff:fee6:ce1a added
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering Kernel netlink reflector
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering Kernel netlink command channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Registering gratuitous ARP shared channel
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Opening file '/etc/keepalived.conf'.
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Configuration is using : 63303 Bytes
May 24 10:06:57 proxy Keepalived_vrrp[2771]: Using LinkWatch kernel netlink reflector...
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Opening file '/etc/keepalived.conf'.
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Configuration is using : 14558 Bytes
May 24 10:06:57 proxy Keepalived_vrrp[2771]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd()]
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Using LinkWatch kernel netlink reflector...
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Activating healthchecker for service [10.2.16.253]:80
May 24 10:06:57 proxy Keepalived_healthcheckers[2770]: Activating healthchecker for service [10.2.16.254]:80
May 24 10:06:58 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Transition to MASTER STATE
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Entering MASTER STATE
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) setting protocol VIPs.
May 24 10:06:59 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.2.16.252
May 24 10:06:59 proxy Keepalived_healthcheckers[2770]: Netlink reflector reports IP 10.2.16.252 added
May 24 10:07:04 proxy Keepalived_vrrp[2771]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 10.2.16.252


Everything is normal!

 

7. View the route table of LVS:

 

# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.2.16.252:80 rr
  -> 10.2.16.253:80               Route   1      0          0
  -> 10.2.16.254:80               Route   1      0          0
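For monitoring, you can count how many real servers are currently in the table by filtering the `->` lines. A hypothetical sketch, run here against a saved copy of the output above rather than a live `ipvsadm -ln` call:

```shell
#!/bin/bash
# Illustration: count real servers present in saved `ipvsadm -ln` output.
table='TCP  10.2.16.252:80 rr
  -> 10.2.16.253:80               Route   1      0          0
  -> 10.2.16.254:80               Route   1      0          0'
echo "$table" | awk '/->/ { n++ } END { print n }'
# prints: 2
```

If keepalived removes a failed real server (as in the failover test below, step 9), this count drops accordingly.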


8. Test. Open a browser and go to http://10.2.16.252/


If the pages of both web servers appear in turn, the load balancing is working!



9. Test what happens when one real server's service fails.

(1) Kill the nginx process on 10.2.16.254, then start it again.

(2) view keepalived logs:

# tail -f /var/log/messages

 

May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: TCP connection to [10.2.16.254]:80 failed !!!
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: Removing service [10.2.16.254]:80 from VS [10.2.16.252]:80
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: Remote SMTP server [127.0.0.1]:25 connected.
May 24 10:10:55 proxy Keepalived_healthcheckers[2770]: SMTP alert successfully sent.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: TCP connection to [10.2.16.254]:80 success.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: Adding service [10.2.16.254]:80 to VS [10.2.16.252]:80
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: Remote SMTP server [127.0.0.1]:25 connected.
May 24 10:11:43 proxy Keepalived_healthcheckers[2770]: SMTP alert successfully sent.

 

As you can see, keepalived detects and reacts to the failure very quickly!

 

At this point, the LVS configuration is complete and successful!

 


This article is from the "fate" blog, please be sure to keep this source http://czybl.blog.51cto.com/4283444/1536474
