Architecture Design: Load Balancing Layer Design (5) -- LVS Single-Node Installation


1. Overview

In the previous article, "Architecture Design: Load Balancing Layer Design (4) -- LVS Principles" (http://blog.csdn.net/yinwenjie/article/details/46845997), we introduced the working modes of LVS and the specific workflow of each mode. In this article, we'll show you how to install a single LVS node. Compared with the previous article, the installation and configuration described here are very simple: once you understand the principles, the practice is straightforward.

You can use VMware virtual machines on your computer and follow the steps described below. We will use two virtual machines: one as the LVS node, and the other, with Nginx installed, as the real server node.

2. LVS-NAT mode installation

2.1. Preparation work -- LVS Server

LVS Server: the LVS server has two network cards.

    • eth0 (192.168.100.10): this NIC sits on a closed intranet; it cannot reach external network resources, and external hosts cannot reach this machine directly through this IP.
    • eth1 (192.168.220.100): this NIC's IP can access the external network and can also be accessed from it. The gateway for eth1 is 192.168.220.1.

The following is the eth0 IP configuration:

[root@lvs ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="00:0C:29:3E:4A:4F"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.100.10"
NETMASK="255.255.255.0"

The following is the eth1 IP configuration:

[root@lvs ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
HWADDR="00:0C:29:3E:4A:59"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.220.100"
NETMASK="255.255.255.0"
GATEWAY="192.168.220.1"

Remember to restart the network service when the configuration is complete:

[root@lvs ~]# service network restart

Ping the gateway to confirm it is reachable (this shows the gateway is working normally):

[root@lvs ~]# ping 192.168.220.1
PING 192.168.220.1 (192.168.220.1) 56(84) bytes of data.
64 bytes from 192.168.220.1: icmp_seq=1 ttl=128 time=0.447 ms
64 bytes from 192.168.220.1: icmp_seq=2 ttl=128 time=0.154 ms

You can also check with the route command:

[root@lvs ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.100.0   *               255.255.255.0   U     1      0        0 eth0
192.168.220.0   *               255.255.255.0   U     1      0        0 eth1
default         192.168.220.1   0.0.0.0         UG    0      0        0 eth1

Note that the routing table has a default route on eth1 pointing to 192.168.220.1; this indicates the routing configuration is correct.

2.2. Preparation work -- Real Server

Real Server: the real server has one network card and sits in the closed intranet environment.

    • eth0 (192.168.100.11): this lets the LVS server and the real server form a relatively closed local area network. Note that, following the NAT principle we introduced, the default gateway of the real server's eth0 is set to the LVS server: 192.168.100.10.

    • Nginx runs on the real server, on port 80, so that we can verify LVS-NAT is working properly in the subsequent steps.

The following is the IP configuration set for the real server's eth0:

[root@realserver ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="00:0C:29:45:04:32"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR=192.168.100.11
NETMASK=255.255.255.0
GATEWAY="192.168.100.10"

Be sure to note that the real server's gateway is set to the LVS server's IP, 192.168.100.10. Next, check whether Nginx is working properly.
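
A quick way to check Nginx without a browser is curl; a minimal check, assuming curl is installed on the LVS server and Nginx serves its default page on port 80:

[root@lvs ~]# curl -I http://192.168.100.11
# an "HTTP/1.1 200 OK" status line confirms Nginx is answering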

Of course, ping the gateway here too to make sure it is reachable:

[root@realserver ~]# ping 192.168.100.10
PING 192.168.100.10 (192.168.100.10) 56(84) bytes of data.
64 bytes from 192.168.100.10: icmp_seq=1 ttl=64 time=0.259 ms
64 bytes from 192.168.100.10: icmp_seq=2 ttl=64 time=0.215 ms
64 bytes from 192.168.100.10: icmp_seq=3 ttl=64 time=0.227 ms

Another way to check is through the route command:

[root@realserver ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.100.0   *               255.255.255.0   U     1      0        0 eth0
default         192.168.100.10  0.0.0.0         UG    0      0        0 eth0

Note that the default route points to 192.168.100.10.
Once the preparation is complete, we can start installing and configuring the LVS.

2.3. Installing and configuring LVS-NAT mode

ipvsadm is the management program for LVS; our LVS configuration is done through it. First, install ipvsadm:

yum -y install ipvsadm

Then start the configuration. First, enable IP forwarding on the LVS machine. Note that IP forwarding is disabled by default, and the setting is lost when the machine restarts:

[root@lvs ~]# echo 1 >> /proc/sys/net/ipv4/ip_forward

Then check that the write succeeded:

[root@lvs ~]# cat /proc/sys/net/ipv4/ip_forward
1

Note that overwriting this file with vi or vim will not work, because the file lives in memory, not on disk; it can only be rewritten with echo.
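
An equivalent switch is available through the sysctl interface, which writes the same kernel parameter; this is standard procfs/sysctl behavior, not anything LVS-specific:

[root@lvs ~]# sysctl -w net.ipv4.ip_forward=1
# to make the setting survive reboots, persist it in /etc/sysctl.conf:
[root@lvs ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@lvs ~]# sysctl -p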

Next, execute the following commands:

[root@lvs ~]# ipvsadm -At 192.168.220.100:80 -s rr
[root@lvs ~]# ipvsadm -at 192.168.220.100:80 -r 192.168.100.11 -m

Let's explain the parameters:

    • -A --add-service: adds a new virtual server record to the kernel's virtual server table; that is, adds a new virtual server.
    • -t --tcp-service service-address: the virtual server provides a TCP service.
    • -s --scheduler scheduler: the scheduling algorithm to use; options: rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq (the scheduling algorithms were covered in detail in the previous article).
    • -r --real-server server-address: the real server [real-server:port].
    • -m --masquerading: sets the LVS working mode to NAT mode.
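
Before testing, you can confirm what the two commands wrote into the kernel's virtual server table; -L lists the table and -n keeps addresses numeric:

[root@lvs ~]# ipvsadm -Ln
# expect one TCP virtual service, 192.168.220.100:80 with scheduler rr,
# and under it the real server 192.168.100.11:80 in Masq (NAT) forwarding mode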

Of course, ipvsadm has many more parameters; a fuller summary is given in the last part of this article. Finally, we test:

From the external network, we can now access the Nginx service on the real server through the LVS server's external IP, 192.168.220.100. The installation and configuration succeeded.
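
From a client on the 192.168.220.0/24 network, the same test can be scripted with curl (a sketch; any HTTP client will do):

curl http://192.168.220.100/
# the page is served by Nginx on 192.168.100.11; LVS rewrites the packets in NAT mode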

2.4. Notes on iptables and restarts

During LVS-NAT configuration, it is recommended to turn off the firewall services on both the LVS and real servers. This avoids unnecessary errors and improves the chance of getting the configuration right the first time. In a normal production environment, however, the LVS firewall should be enabled as the actual situation requires.

Note that the configuration you just made with ipvsadm will be lost after the LVS server restarts, as will the ip_forward setting. Therefore, it is best to put the commands in a script file and reference it from /etc/profile:

[root@lvs ~]# vim /usr/lvsshell.sh

#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipvsadm -C
ipvsadm -At 192.168.220.100:80 -s rr
ipvsadm -at 192.168.220.100:80 -r 192.168.100.11 -m
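
For the script to actually run, it also needs to be executable and referenced somewhere; a minimal sketch following the /etc/profile suggestion above:

[root@lvs ~]# chmod +x /usr/lvsshell.sh
[root@lvs ~]# echo "/usr/lvsshell.sh" >> /etc/profile
# /etc/rc.local is a common alternative if the rules must be up before any login
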
3. LVS-DR mode installation

3.1. Preparation work -- LVS Server

To show you a different way of using LVS, this time we use a VIP instead of two network cards (you can of course still use the two-NIC approach). The VIP approach is the usual working mode of the LVS + Keepalived combination that we will discuss later. The so-called VIP is a virtual IP: an IP that is not permanently bound to a particular network card, but is bound with the ifconfig command, and this binding can change at the "appropriate time":

DIP: 192.168.220.137
VIP: 192.168.220.100

First we look at the IP information on the LVS host before the VIP is set:

[root@lvs ~]# ifconfig
eth1      Link encap:Ethernet  HWaddr 00:0C:29:3E:4A:59
          inet addr:192.168.220.137  Bcast:192.168.220.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe3e:4a59/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2612 errors:0 dropped:0 overruns:0 frame:0
          TX packets:117 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:159165 (155.4 KiB)  TX bytes:8761 (8.5 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:210 errors:0 dropped:0 overruns:0 frame:0
          TX packets:210 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:16944 (16.5 KiB)  TX bytes:16944 (16.5 KiB)

Then we'll set up the VIP information:

[root@lvs ~]# ifconfig eth1:0 192.168.220.100 broadcast 192.168.220.100 netmask 255.255.255.255 up
[root@lvs ~]# route add -host 192.168.220.100 dev eth1:0

The routing table will have new routing information:

[root@lvs ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.220.100 *               255.255.255.255 UH    0      0        0 eth1
192.168.100.0   *               255.255.255.0   U     1      0        0 eth0
192.168.220.0   *               255.255.255.0   U     1      0        0 eth1
default         192.168.220.1   0.0.0.0         UG    0      0        0 eth1

At this point, the VIP can be pinged from an external IP (the Windows command prompt below is running on the VM's host machine):

C:\Users\yinwenjie> ping 192.168.220.100

Pinging 192.168.220.100 with 32 bytes of data:
Reply from 192.168.220.100: bytes=32 time<1ms TTL=64
Reply from 192.168.220.100: bytes=32 time<1ms TTL=64

With that, the preparation of the LVS host for the LVS-DR working mode is complete. Note:

    • During your setup it is best to turn the firewall off first; in a formal production environment, however, the LVS firewall should be enabled.
    • The VIP information will disappear after the LVS host restarts, so it is best to put the VIP-setting commands into a script, as sketched below.
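
A minimal sketch of such a script, reusing the addresses above:

#!/bin/bash
# rebind the VIP to eth1:0 and restore its host route after a reboot
ifconfig eth1:0 192.168.220.100 broadcast 192.168.220.100 netmask 255.255.255.255 up
route add -host 192.168.220.100 dev eth1:0
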
3.2. Preparation work -- Real Server

RIP: 192.168.220.132

Preparing the real server requires that the real server can reach the external network's gateway, and that the packets rewritten by LVS are accepted and processed by the real server (see my previous post on the LVS principles); the latter requires a loopback IP. First, look at the IP information before we start setting up:

[root@realserver ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:FC:91:FC
          inet addr:192.168.220.132  Bcast:192.168.220.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fefc:91fc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2384 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1564 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1551652 (1.4 MiB)  TX bytes:144642 (141.2 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:44 errors:0 dropped:0 overruns:0 frame:0
          TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3361 (3.2 KiB)  TX bytes:3361 (3.2 KiB)

Use the route command to view the original route information:

[root@realserver ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.220.0   *               255.255.255.0   U     1      0        0 eth0
default         192.168.220.1   0.0.0.0         UG    0      0        0 eth0

In addition, Nginx runs on this real server so that we can observe LVS-DR in operation. From the external network, the Nginx page is reachable directly via the IP 192.168.220.132.

Next, we set up the loopback IP on the real server. First, adjust the kernel's ARP behavior so that this machine neither answers nor announces ARP for the VIP; otherwise the router or switch would resolve the MAC address for the IP 192.168.220.100 to this real server (note that these settings will be lost after a restart):

echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignoreecho "2" >/proc/sys/net/ipv4/conf/lo/arp_announceecho "1" >/proc/sys/net/ipv4/conf/all/arp_ignoreecho "2" >/proc/sys/net/ipv4/conf/all/arp_announce

You can then set the loopback IP:

[root@realserver ~]# ifconfig lo:0 192.168.220.100 netmask 255.255.255.255 up
[root@realserver ~]# route add -host 192.168.220.100 dev lo:0

Check for new routing information:

[root@realserver ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.220.100 *               255.255.255.255 UH    0      0        0 lo
192.168.220.0   *               255.255.255.0   U     1      0        0 eth0
default         192.168.220.1   0.0.0.0         UG    0      0        0 eth0

That completes the settings. When you are done, ping an external address (for example, 163.com) to check that the gateway is usable. In LVS-DR mode, the real server returns results directly to the requester, so be sure the gateway works.
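
Like the VIP on the LVS side, the loopback and ARP settings above vanish on reboot, so it is convenient to collect them into a script as well; a minimal sketch with the same addresses:

#!/bin/bash
# suppress ARP replies/announcements for the VIP, then bind it to lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
ifconfig lo:0 192.168.220.100 netmask 255.255.255.255 up
route add -host 192.168.220.100 dev lo:0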

3.3. Installing and configuring LVS-DR mode

After verifying that both the LVS server and the real server are ready, you can set up LVS-DR mode. We won't repeat the installation of the LVS management tool ipvsadm; it was covered in the LVS-NAT section above.

[root@lvs ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@lvs ~]# cat /proc/sys/net/ipv4/ip_forward
1
[root@lvs ~]# ipvsadm -C
[root@lvs ~]# ipvsadm -At 192.168.220.100:81 -s rr
[root@lvs ~]# ipvsadm -at 192.168.220.100:81 -r 192.168.220.132 -g

Let's introduce the parameter that appears here for the first time:

    • -g --gatewaying: sets the LVS working mode to direct routing (DR) mode (this is also the LVS default mode).

The configuration is complete; isn't that simple? Next, from the external network, we can access the Nginx service on the 132 real server through the IP 192.168.220.100.
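
Note that DR mode forwards frames without rewriting ports, so the virtual service on 192.168.220.100:81 assumes Nginx on the real server is also listening on port 81; under that assumption, the test from an external machine looks like:

curl http://192.168.220.100:81/
# the response comes straight from 192.168.220.132; in DR mode replies bypass the LVS node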

4. LVS-TUN mode installation

I won't spend much time on LVS-TUN mode installation. Once you know the difference between DR and TUN from the previous article, the configuration process is almost the same; the difference is that one can cross subnets and the other cannot. For installing and configuring LVS-TUN mode, please refer to the LVS-DR mode above first.
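
For reference, a minimal LVS-TUN sketch using the same addresses as the DR example; the -i flag selects tunnel mode (see the parameter summary below), and the real server terminates the IPIP tunnel on tunl0 instead of using lo:0:

# on the LVS server
ipvsadm -At 192.168.220.100:80 -s rr
ipvsadm -at 192.168.220.100:80 -r 192.168.220.132 -i

# on the real server
modprobe ipip                                              # load the IPIP tunnel module
ifconfig tunl0 192.168.220.100 netmask 255.255.255.255 up  # bind the VIP to the tunnel device
echo "1" > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/tunl0/arp_announce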

5. ipvsadm parameter summary

-A --add-service: adds a new virtual server record to the kernel's virtual server table; that is, adds a new virtual server.

-E --edit-service: edits a virtual server record in the kernel's virtual server table.

-D --delete-service: deletes a virtual server record from the kernel's virtual server table.

-C --clear: clears all records in the kernel's virtual server table.

-R --restore: restores virtual server rules.

-S --save: saves virtual server rules, output in a format readable by the -R option.

-a --add-server: adds a new real server record to a record in the kernel's virtual server table; that is, adds a new real server to a virtual server.

-e --edit-server: edits a real server record in a virtual server record.

-d --delete-server: deletes a real server record in a virtual server record.

-L|-l --list: displays the kernel's virtual server table.

-Z --zero: zeroes the virtual service table counters (clears the current connection counts, etc.).

--set tcp tcpfin udp: sets connection timeout values.

--start-daemon: starts the synchronization daemon. It can be followed by master or backup to indicate whether this LVS router is the master or the backup. Keepalived's VRRP function can also be used for this purpose.

--stop-daemon: stops the synchronization daemon.

-t --tcp-service service-address: the virtual server provides a TCP service [vip:port] or [real-server-ip:port].

-u --udp-service service-address: the virtual server provides a UDP service [vip:port] or [real-server-ip:port].

-f --fwmark-service fwmark: the service is of a type marked by iptables.

-s --scheduler scheduler: the scheduling algorithm to use; options: rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq. The default scheduling algorithm is wlc.

-p --persistent [timeout]: persistent service. This option means that multiple requests from the same client will be handled by the same real server. The default timeout is 300 seconds.

-M --netmask netmask: granularity mask for persistence.

-r --real-server server-address: the real server [real-server:port].

-g --gatewaying: sets the LVS working mode to direct routing (DR) mode (also the default mode).

-i --ipip: sets the LVS working mode to tunnel (TUN) mode.

-m --masquerading: sets the LVS working mode to NAT mode.

-w --weight weight: the real server's weight.

--mcast-interface interface: specifies the multicast synchronization interface.

-c --connection: displays current LVS connections, e.g. ipvsadm -l -c.

--timeout: displays the timeout values for tcp, tcpfin, and udp, e.g. ipvsadm -l --timeout.

--daemon: displays the synchronization daemon status.

--stats: displays statistics.

--rate: displays rate information.

--sort: sorts the virtual server and real server output.

-n --numeric: outputs IP addresses and ports in numeric form.
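
To tie several of these options together, here is an illustrative example that creates a weighted, persistent virtual service and then inspects it (addresses reused from the NAT example above):

ipvsadm -A -t 192.168.220.100:80 -s wrr -p 600              # wrr scheduler, 600-second persistence
ipvsadm -a -t 192.168.220.100:80 -r 192.168.100.11 -m -w 3  # NAT mode, weight 3
ipvsadm -L -n --stats                                       # numeric listing with statistics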

Well, the summary above is what I copied from the Internet.

6. Preview of the next article

Setting up LVS is actually not difficult; it is at best a slightly advanced tool, a specific method for solving a specific problem. The basic knowledge it requires covers the fundamentals of the IP protocol, subnetting, IP mapping, Linux shell scripting, and so on. Those fundamentals are what really unblock you, like clearing the "Ren and Du meridians" in a martial-arts novel.

You will certainly run into practical problems while setting up LVS, especially during your first few configurations. Don't panic and don't be afraid: reason from the symptoms back to the cause, and solve the problems one at a time. In my experience, the problems fall into only a few categories (a quick set of checks for them is sketched after this list):

    • The gateway is unreachable
    • Loopback IP setup issues
    • Firewall issues
    • VIP setup issues
    • Network segment or subnet problems
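
A quick set of checks, one per category; this is a sketch only, so adapt the addresses and interface names to your own setup:

ping -c 3 192.168.220.1        # 1. is the gateway reachable?
ip addr show lo                # 2. is the loopback VIP (lo:0) bound on the real server?
service iptables status        # 3. is a firewall in the way?
ip addr show eth1              # 4. is the VIP (eth1:0) bound on the LVS node?
route -n                       # 5. do the segments and routes look right?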

Through these articles in this blog, you have learned the functions, principles, features, and working modes of Nginx and LVS. In the following articles, we will combine these load-layer technologies and introduce the installation and configuration of Nginx + Keepalived, LVS + Keepalived, and LVS + Keepalived + Nginx.

(My "Perseverance" award for this month is in the bag; haha.)

Copyright notice: this is an original article by the blogger and may not be reproduced without the blogger's permission.
