LVS + Keepalived implements load balancing


LVS is an open-source project started by Dr. Zhang Wensong of the National University of Defense Technology to provide load balancing across servers. Its full name is Linux Virtual Server.

As an enterprise grows, the traffic to its servers grows with it. At that point, there are only two ways to increase server capacity:

1: Upgrade the server's hardware, purchasing ever more expensive machines to meet the rising performance requirements.

2: Increase the number of servers and improve overall performance with a cluster.

For the second approach, LVS is the natural choice for load balancing.

LVS supports three load-balancing modes: NAT (Network Address Translation), DR (Direct Routing), and TUN (IP tunneling).

Mode 1, NAT: this mode balances load through address translation and works at the network layer. All requests pass through the LVS server: each real server's gateway points to the LVS server, and all responses travel back through it as well. The LVS server therefore carries heavy load, and a network bottleneck can appear once the number of real servers approaches about 10. All real servers and the LVS server must be on the same internal network.

Mode 2, DR: this mode works at the data link layer. Only the LVS server answers ARP requests for the virtual IP, so the gateway delivers client requests to it; the LVS server then forwards each request to the real server chosen by the scheduling algorithm, by rewriting the frame's destination MAC address. The real server responds to the client directly, without going back through the LVS server, which greatly increases throughput and relieves the pressure on the LVS server. (The third mode, TUN, behaves similarly but encapsulates requests in IP tunnels, so the real servers need not share a network segment with the LVS server; they, too, respond to clients directly.)
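Keepalived normally programs all of this from its configuration file, but as a minimal sketch of what DR mode means in practice, the same service can be built by hand with the ipvsadm tool. The addresses here (VIP 10.0.0.113, real servers 10.0.0.2 and 10.0.0.3) are hypothetical examples, and the commands require root and the ip_vs kernel module:

```shell
# On the LVS director: create a virtual service on the VIP with round-robin scheduling.
ipvsadm -A -t 10.0.0.113:80 -s rr
# Attach the real servers in DR mode (-g = "gatewaying", i.e. direct routing):
ipvsadm -a -t 10.0.0.113:80 -r 10.0.0.2:80 -g -w 1
ipvsadm -a -t 10.0.0.113:80 -r 10.0.0.3:80 -g -w 1

# On each real server: bind the VIP to the loopback interface and suppress
# ARP replies for it, so that only the director answers ARP for the VIP.
ip addr add 10.0.0.113/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

The arp_ignore/arp_announce sysctls are what makes DR work: without them, every real server would answer ARP for the VIP and steal traffic from the director.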


Keepalived works on top of LVS: driven by its configuration file, keepalived uses its own module to call the ipvsadm interface and configure LVS for load balancing. Keepalived and LVS are different things; each does its own job independently. LVS provides scalability through load balancing, while keepalived performs health checks through its child processes to give LVS high availability (that is, it detects LVS failures, monitors the health of the master and backup servers, and triggers failover between the master and backup LVS).

Keepalived runs as three processes: a VRRP child process, a healthcheck child process, and a parent process that acts as a watchdog.

The VRRP child process handles the VRRP protocol and master/backup communication; the healthcheck child process checks the health of the LVS real servers (for example, TCP and HTTP checks).

The watchdog parent process supervises the two child processes. Keepalived's modules are relatively independent, and together they can implement quite complex behavior.

The configuration module.



Detailed description of the keepalived.conf configuration file (annotated; the IP addresses are the examples from the original, and the contents of the authentication block are filled in as a typical example):

    global_defs {
        router_id 50                # identifier for this master/backup region
    }

    vrrp_instance vrrp_name {
        state MASTER                # this node is the MASTER (use BACKUP on the standby)
        interface eth0              # network interface VRRP runs on
        virtual_router_id 50        # region/group ID; must match on master and backup
        priority 50                 # election priority; the master's is higher than the backup's
        advert_int 2                # interval (seconds) between master/backup heartbeats
        authentication {            # master/backup mutual authentication
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            10.0.0.113              # the LVS virtual IP (VIP)
        }
    }                               # configures the LVS master/backup pair; VRRP uses multicast

    virtual_server 10.0.0.114 80 {
        lb_algo rr                  # scheduling algorithm: round robin
        lb_kind DR                  # load-balancing mode: direct routing
        persistence_timeout 3       # seconds a client's connections stick to one real server
        protocol TCP                # protocol used between LVS and the real servers
        real_server 10.0.0.2 80 {
            weight 1                # scheduling weight of this real server
            TCP_CHECK {
                connect_timeout 3   # health-check connection timeout (seconds)
                connect_port 80     # port to probe
                nb_get_retry 3      # retries before marking the server down
            }
        }                           # configures one real server
    }                               # configures the virtual server for this VIP
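Once the file is in place, here is a sketch of how one might start and verify the setup (paths and service names may differ by distribution, and the `-t` syntax check requires a reasonably recent keepalived):

```shell
# Syntax-check the configuration, then start the service:
keepalived -t -f /etc/keepalived/keepalived.conf
systemctl start keepalived

# Inspect the LVS table that keepalived programmed (via the ipvsadm interface):
ipvsadm -Ln

# Confirm the VIP is bound on the master, and watch VRRP state changes:
ip addr show eth0
journalctl -u keepalived -f
```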

In Linux, the configuration file that maps service names to ports is /etc/services.
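For example, to confirm which port a service name maps to (the exact output format varies slightly by system):

```shell
# Find the line(s) for the "http" service in /etc/services:
grep -w '^http' /etc/services
# getent consults the same services database through NSS:
getent services http
```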

To set the umask, use the umask command, or set it persistently in a configuration file such as /etc/profile.
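A short illustration of how the umask shapes the permissions of newly created files (027 is just an example value):

```shell
# umask removes permission bits from the creation defaults
# (666 for regular files, 777 for directories).
umask                  # print the current mask
umask 027              # files are now created as 640 (rw-r-----), dirs as 750
touch /tmp/umask_demo
ls -l /tmp/umask_demo  # -rw-r----- ...
rm /tmp/umask_demo
```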












For Linux clusters, how should we choose among nginx, LVS, and keepalived to achieve load balancing?

First take a look at what nginx, LVS, and keepalived each actually do. Simply wiring machines together does not make a cluster.

What is the difference between LVS and keepalived?

Haha, this question also puzzled me for a while. I checked a lot of material this morning and found: 1. ipvsadm (LVS) is the load-balancing mechanism itself, and it currently supports eight scheduling algorithms. 2. Besides health checking, keepalived can also configure the load balancing; in practice its health-check functions are widely used while its load-balancing configuration ability is often overlooked. 3. In an LVS + keepalived setup, you can let keepalived configure the load balancing (the ipvsadm tool still needs to be installed, to make cluster management easier).
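To make "scheduling algorithm" concrete, here is a toy shell simulation of rr (round robin), the simplest of those algorithms: requests are handed to the servers strictly in turn. The server addresses are made up for illustration:

```shell
# Simulate LVS round-robin scheduling over three hypothetical real servers.
servers="10.0.0.2 10.0.0.3 10.0.0.4"
set -- $servers        # put the servers into the positional parameters $1..$3
n=$#
for request in 1 2 3 4 5 6; do
  idx=$(( (request - 1) % n + 1 ))   # cycle through the servers in order
  eval "server=\${$idx}"
  echo "request $request -> $server"
done
# request 1 -> 10.0.0.2, request 2 -> 10.0.0.3, ..., request 4 -> 10.0.0.2 again
```

Real LVS schedulers such as wrr (weighted round robin) or wlc (weighted least-connection) refine this by taking server weights and current connection counts into account.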
