Chapter 18: Linux Cluster Architecture


18.1 Introduction to clusters
Cluster overview
Based on functionality, clusters fall into two categories: high availability and load balancing.
1) A high-availability cluster is typically two servers: one does the work while the other stands by as redundancy, and the standby takes over when the active service goes down.
Open source software for high availability: heartbeat, keepalived. The latter is the useful one; the former has not been updated for a long time.
2) A load-balancing cluster needs one server to act as a dispatcher, which distributes users' requests to the backend servers for processing. Apart from the dispatcher, the cluster consists of the servers that actually serve users, of which there are at least 2.
The open source software for load balancing includes LVS, keepalived, HAProxy, and Nginx; commercial options are F5 and NetScaler.

18.2 keepalived introduction
Here we use keepalived to implement the highly available cluster, because heartbeat has some problems on CentOS 6 that affect the experimental results.
keepalived achieves high availability through VRRP (Virtual Router Redundancy Protocol).
In this protocol, multiple routers with the same function form a group that has 1 master role and N (N>=1) backup roles.
The master sends VRRP protocol packets to each backup via multicast; when a backup stops receiving VRRP packets from the master, it considers the master down. At that point, the new master is decided according to the priority of each backup.
keepalived has three modules: core, check, and VRRP. The core module is keepalived's main process, responsible for startup, maintenance, and loading and parsing the global configuration file; the check module is responsible for health checks; the VRRP module implements the VRRP protocol.

18.3 Configuring a highly available cluster with keepalived (top)
Configuring high availability with keepalived
Prepare two machines, 128 and 129: 128 as master, 129 as backup.
Both machines run: yum install -y keepalived
Nginx is the object whose high availability keepalived will implement here.
Install Nginx on both machines: 128 already has a compiled-from-source nginx, while 129 needs to install Nginx with yum first.
yum -y install gcc gcc-c++ autoconf automake make // install the GCC compiler toolchain
Nginx needs pcre installed first:
https://sourceforge.net/projects/pcre/files/pcre/
rpm -qa | grep nginx // check whether it is installed
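For reference, a minimal sketch of a source build on 128; the pcre and nginx version numbers and paths below are illustrative assumptions, not from the course:
////////////////////////////////////////////////
# illustrative source build; substitute the versions you actually downloaded
cd /usr/local/src
wget https://sourceforge.net/projects/pcre/files/pcre/8.45/pcre-8.45.tar.gz
tar zxf pcre-8.45.tar.gz
wget http://nginx.org/download/nginx-1.14.2.tar.gz
tar zxf nginx-1.14.2.tar.gz
cd nginx-1.14.2
# point nginx at the pcre source tree so it can build its rewrite module
./configure --prefix=/usr/local/nginx --with-pcre=/usr/local/src/pcre-8.45
make && make install
////////////////////////////////////////////////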
Configuring the master
1) Turn off the firewall and SELinux:
iptables -F
iptables -nvL
setenforce 0
2) Edit the keepalived configuration file on master:128, setting the VIP to 100:
vim /etc/keepalived/keepalived.conf

Empty the configuration file /etc/keepalived/keepalived.conf first.
Content obtained from https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/master_keepalived.conf
As follows:
global_defs {
   notification_email {
     [email protected] # send mail when something goes wrong
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/usr/local/sbin/check_ng.sh" # the monitoring script
    interval 3
}
vrrp_instance VI_1 {
    state MASTER # master role
    interface ens33 # NIC name
    virtual_router_id 51 # router ID
    priority 100 # master weight (the backup uses a lower value)
    advert_int 1
    authentication {
        auth_type PASS # password authentication
        auth_pass aminglinux>com # the authentication password
    }
    virtual_ipaddress {
        192.168.188.100 # the VIP; when the master fails it floats to the backup, and it shows up in `ip add`
    }
    track_script {
        chk_nginx # load the script defined above
    }
}
3) Edit the monitoring script on master:128:
vim /usr/local/sbin/check_ng.sh
Content obtained from https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/master_checkng.sh
#!/bin/bash
# time variable, used for logging
d=`date --date today +%y%m%d_%H:%M:%S`
# count the nginx processes
n=`ps -C nginx --no-heading | wc -l`
# if the count is 0, start nginx and count the nginx processes again;
# if it is still 0, nginx cannot start, so keepalived must be stopped
if [ $n -eq "0" ]; then
    /etc/init.d/nginx start
    n2=`ps -C nginx --no-heading | wc -l`
    if [ $n2 -eq "0" ]; then
        echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
        systemctl stop keepalived
    fi
fi

Give the script 755 permissions:
chmod 755 /usr/local/sbin/check_ng.sh
systemctl start keepalived // start the service
ps aux | grep keepalived // check whether it started

Stopping nginx with /etc/init.d/nginx stop: the script should start nginx again automatically, but in my test it did not succeed???
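One way to narrow this down (generic shell debugging, not part of the course material) is to run the check script by hand with tracing and see which branch it takes:
bash -x /usr/local/sbin/check_ng.sh
ps -C nginx --no-heading | wc -l   # the process count the script is testing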

Q: the NIC for the VIP in the keepalived configuration file must be looked up with `ip add`; the interface line in the configuration file must be changed to this machine's NIC name. I cloned the machine and did not change the name, hence the error:
KEEPALIVED_VRRP[17516]: VRRP_Instance(VI_1) Unknown interface!
A: it works after the change.
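A quick way to avoid this mistake is to list the NIC names the machine actually has before editing the config (standard iproute2 usage):
ip -o link show | awk -F': ' '{print $2}'   # e.g. lo, ens33
ip addr | grep 192.168.188.                 # which NIC carries the LAN address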

18.4 Configuring a highly available cluster with keepalived (middle)
Configuring the backup
1) Turn off the firewall and SELinux:
iptables -F
iptables -nvL
setenforce 0
2) Edit the keepalived configuration file on backup:129:
vim /etc/keepalived/keepalived.conf

Empty the configuration file /etc/keepalived/keepalived.conf first.
Content obtained from https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/backup_keepalived.conf
global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_script chk_nginx {
    script "/usr/local/sbin/check_ng.sh"
    interval 3
}
vrrp_instance VI_1 {
    state BACKUP # backup role
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aminglinux>com
    }
    virtual_ipaddress {
        192.168.188.100
    }
    track_script {
        chk_nginx
    }
}

3) Edit the monitoring script on backup:129:
vim /usr/local/sbin/check_ng.sh
Content obtained from https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/backup_checkng.sh
#!/bin/bash
# time variable, used for logging
d=`date --date today +%y%m%d_%H:%M:%S`
# count the nginx processes
n=`ps -C nginx --no-heading | wc -l`
# if the count is 0, start nginx and count the nginx processes again;
# if it is still 0, nginx cannot start, so keepalived must be stopped
if [ $n -eq "0" ]; then
    systemctl start nginx
    n2=`ps -C nginx --no-heading | wc -l`
    if [ $n2 -eq "0" ]; then
        echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
        systemctl stop keepalived
    fi
fi
chmod 755 /usr/local/sbin/check_ng.sh
systemctl start keepalived // start the service
ps aux | grep keepalived // check whether it started

Test:
In a browser, access 128, then 129, then the VIP 100.
Accessing 100 lands on master 128.

18.5 Configuring a highly available cluster with keepalived (bottom)
Testing high availability:
First establish a difference between the nginx on the two machines; for example, curl -I shows the Nginx version.
Test 1: stop the Nginx service on the master; it should start up again automatically?? Not successful.
Test 2: add an iptables rule on the master:
iptables -I OUTPUT -p vrrp -j DROP
Test 3: stop the keepalived service on the master.
Use `ip add` to check: the VIP 100 is seen only on the master, not on the backup;
after stopping the keepalived service on the master, the 100 appears on the backup.

Web access to 192.168.188.100 is now served by the backup.
Test 4: start the keepalived service on the master again.
Web access to 192.168.188.100 is served by the master again.

MySQL and Nginx are the actual services; keepalived is only used to build the high-availability cluster, here making the Nginx service highly available.
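To watch a failover as it happens, a small helper script can poll the VIP and log which nginx answers; this is a hypothetical aid, assuming the two nginx builds report different versions as arranged above:
/////////////////////////
#!/bin/bash
# poll the VIP once a second and log the Server header,
# so a master -> backup switch shows up in the log
vip=192.168.188.100
while true; do
    d=$(date +%H:%M:%S)
    s=$(curl -s -I -m 2 http://$vip/ | grep -i '^Server:')
    echo "$d $s" >> /var/log/vip_watch.log
    sleep 1
done
/////////////////////////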

18.6 Load balancing cluster introduction
Main open source software: LVS, keepalived, HAProxy, Nginx, etc.
LVS works at layer 4 (of the OSI 7-layer network model), Nginx works at layer 7, and HAProxy can act as either layer 4 or layer 7.
The load balancing function of keepalived is in fact LVS.
LVS, being a layer-4 load balancer, can distribute ports other than 80, such as MySQL's, while Nginx supports only http, https, and mail; HAProxy also supports MySQL.
By comparison, the layer-4 LVS is more stable and can withstand more requests, while the layer-7 Nginx is more flexible and can implement more customized requirements.
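For contrast with the layer-4 scripts later in this chapter, layer-7 load balancing in Nginx is just an upstream block. A minimal sketch (the file path and backend IPs are illustrative assumptions):
//////////////////////////////////////
# write a minimal layer-7 load balancer config for nginx (illustrative)
cat > /etc/nginx/conf.d/lb.conf <<'EOF'
upstream web_pool {
    server 192.168.188.129:80 weight=1;
    server 192.168.188.137:80 weight=1;
}
server {
    listen 80;
    location / {
        proxy_pass http://web_pool;
    }
}
EOF
nginx -t && nginx -s reload   # check syntax, then reload
//////////////////////////////////////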

18.7 LVS introduction
LVS was developed by Zhang Wensong from China.
Its popularity is no less than that of Apache httpd; it does routing and forwarding based on TCP/IP, with high stability and efficiency.
The latest version of LVS is based on Linux kernel 2.6 and has not been updated for many years.
LVS has three common modes: NAT, DR, and IP Tunnel.
A core role in the LVS architecture is called the dispatcher (load balancer), which distributes users' requests; the architecture also contains the many servers that actually handle user requests (Real Server, RS).
1) LVS NAT mode


This mode is implemented with the iptables NAT table.
After a user's request reaches the dispatcher, the request packet is forwarded to a backend RS via preset iptables rules.
The RS must set its gateway to the dispatcher's intranet IP.
Both the user's request packets and the packets returned to the user pass through the dispatcher, so the dispatcher becomes the bottleneck.
In NAT mode, only the dispatcher needs a public IP, so this mode saves public IP resources.
2) LVS IP tunnel mode

This mode requires configuring a common IP, which we call the VIP, on the dispatcher and all RS.
The target IP requested by the client is the VIP; after the dispatcher receives the request packet, it processes the packet and changes the target IP to the RS's IP, so that the packet reaches the RS.
After the RS receives the packet, it restores the original packet so that the target IP is the VIP again; since the VIP is configured on every RS, the RS considers the packet its own.
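This chapter's experiments build NAT and DR, not IP Tunnel; for reference only, a hedged sketch of what the RS side of TUN mode would look like (the tunl0 device comes from the ipip kernel module):
//////////////////////////////////////
#!/bin/bash
# sketch only -- not used in this chapter's experiments
vip=192.168.188.200
modprobe ipip                 # load the IP-in-IP module, which creates tunl0
ifconfig tunl0 $vip netmask 255.255.255.255 up
# same arp kernel tweaks as in DR mode, but applied to tunl0
echo "1" > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/tunl0/arp_announce
//////////////////////////////////////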
3) LVS DR mode

This mode also requires configuring a common IP, the VIP, on the dispatcher and all RS.
Unlike IP Tunnel, it modifies the MAC address of the packet to the MAC address of the RS.
After the RS receives the packet, it restores the original packet so that the target IP is the VIP again; since the VIP is configured on every RS, the RS considers the packet its own.

18.8 LVS scheduling algorithms
Polling: Round-Robin (rr)
Weighted polling: Weighted Round-Robin (wrr)
Least connections: Least-Connection (lc)
Weighted least connections: Weighted Least-Connection (wlc)
---- the four above are the most important ----
Locality-based least connections: Locality-Based Least Connections (lblc)
Locality-based least connections with replication: Locality-Based Least Connections with Replication (lblcr)
Destination address hashing: Destination Hashing (dh)
Source address hashing: Source Hashing (sh)
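The algorithm is chosen with the -s flag of ipvsadm; for example, wrr with uneven weights sends rs1 twice as many requests as rs2 (the IPs are those used later in this chapter, shown for illustration):
ipvsadm -A -t 192.168.188.200:80 -s wrr
ipvsadm -a -t 192.168.188.200:80 -r 192.168.188.129:80 -g -w 2
ipvsadm -a -t 192.168.188.200:80 -r 192.168.188.137:80 -g -w 1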

18.9 NAT mode setup (top) -- preparation
1) Prepare two RS machines:
rs1 intranet: 192.168.188.129, with the gateway set to dir's intranet IP 192.168.188.128
rs2 intranet: 192.168.188.137, with the gateway likewise set to dir's 192.168.188.128
Clone the host, edit the config file, obtain a new IP, view the IP, modify the hostname, and create a new connection in Xshell:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
IPADDR=192.168.188.137
systemctl restart network.service
dhclient
ip addr
hostnamectl set-hostname aming-03
bash
2) Prepare a dispatcher, also called the scheduler (dir):
Intranet: 192.168.188.128, extranet: 192.168.142.147 (VMware host-only mode)
A new network card must be added: in the virtual machine, Settings -- Network adapter -- add a host-only NIC -- modify the network segment.

Edit the NIC: vi /etc/sysconfig/network-scripts/ifcfg-ens37
systemctl restart network.service
dhclient
ip addr

3) Shut down firewalld on all three machines:
systemctl stop firewalld
systemctl disable firewalld
Install iptables-services, start it, clear all rules, then save the rules:
yum install -y iptables-services
systemctl start iptables
iptables -F
service iptables save
Turn off SELinux:
setenforce 0

18.10 NAT mode setup (bottom)
LVS is driven by the ipvsadm tool, which only needs to be installed on the dispatcher dir:128.
(1) Install ipvsadm on dir:128 and write the script:
yum install -y ipvsadm
vim /usr/local/sbin/lvs_nat.sh // contents as follows
////////////////////////////////////////////////
#!/bin/bash
# enable routing forwarding on the director server
echo 1 > /proc/sys/net/ipv4/ip_forward
# turn off ICMP redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
# mind the NIC names; Amin's two NICs are ens33 and ens37
echo 0 > /proc/sys/net/ipv4/conf/ens33/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/ens37/send_redirects
# director sets up the NAT firewall
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.188.0/24 -j MASQUERADE
# director sets up ipvsadm
IPVSADM='/usr/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 192.168.142.147:80 -s rr  # the rr algorithm makes the alternation easy to see
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.188.129:80 -m -w 1
$IPVSADM -a -t 192.168.142.147:80 -r 192.168.188.137:80 -m -w 1
////////////////////////////////////////////////
sh /usr/local/sbin/lvs_nat.sh // execute the script; no error means success
ipvsadm -ln // view the forwarding rules and connections

Testing the NAT mode effect
Install Nginx on both RS.
Give the two RS different home pages to tell them apart, i.e. curl-ing the two RS IPs directly should give different results:
vim /usr/share/nginx/html/index.html
On each of the three machines run curl localhost to check; then use curl 192.168.142.147 to access the extranet IP, visiting several times to see the difference in results.
The responses alternate between 129 and 137 in turn.
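A quick way to see the rr alternation from a client (a simple loop, assuming the two home pages differ as set up above):
for i in $(seq 1 6); do curl -s 192.168.142.147; done
# the six responses should alternate between the rs1 and rs2 pages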

18.11 DR mode setup -- preparation
Change the gateway of rs1=192.168.188.129 and rs2=192.168.188.137 back to the default 192.168.188.2:
Edit the NIC: vi /etc/sysconfig/network-scripts/ifcfg-ens33
Restart the service: systemctl restart network.service
On the main 128, a virtual IP 192.168.188.200 is set up to do the forwarding.
(1) Write the script on dir:128: vim /usr/local/sbin/lvs_dr.sh // contents as follows
//////////////////////////////////////
#!/bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward # enable routing forwarding
ipv=/usr/sbin/ipvsadm
vip=192.168.188.200
rs1=192.168.188.129
rs2=192.168.188.137
# mind the NIC name here
ifdown ens33
ifup ens33
ifconfig ens33:2 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev ens33:2
$ipv -C
$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
//////////////////////////////////////
sh /usr/local/sbin/lvs_dr.sh // run the script
(2) Both RS also get a script: vim /usr/local/sbin/lvs_rs.sh // contents as follows
/////////////////////////
#!/bin/bash
vip=192.168.188.200
# bind the vip to lo, so that the RS can return results directly to the client
ifdown lo
ifup lo
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# the following changes arp kernel parameters, so that the RS can successfully announce its MAC address to the client
# reference: www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
//////////////////////////////////////
Execute the script: sh /usr/local/sbin/lvs_rs.sh
route -n
ip addr
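Note that the echo settings above do not survive a reboot; one common way (not from the course) to make the arp parameters permanent is via /etc/sysctl.conf:
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p   # apply now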

Test:
iptables -t nat -nvL
iptables -t nat -F


ipvsadm -ln

18.12 keepalived + LVS DR
A full architecture requires two servers (dir), each with keepalived installed, for high availability; but since keepalived itself includes the load balancing function, this experiment installs keepalived on just one machine. keepalived has ipvsadm functionality built in, so there is no need to install the ipvsadm package, nor to write and execute the lvs_dr.sh script.
The three machines are:
dir (install keepalived): 192.168.188.128
rs1: 192.168.188.129
rs2: 192.168.188.137
VIP: 192.168.188.200

Edit the keepalived configuration file: vim /etc/keepalived/keepalived.conf // for the content go to https://coding.net/u/aminglinux/p/aminglinux-book/git/blob/master/D21Z/lvs_keepalived.conf
/////////////////
vrrp_instance VI_1 {
    # on the backup server this is BACKUP
    state MASTER
    # the NIC bound to the vip is ens33; your NIC name may differ from Amin's, change it here
    interface ens33
    virtual_router_id 51
    # on the backup server this is 90
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aminglinux
    }
    virtual_ipaddress {
        192.168.188.200
    }
}
virtual_server 192.168.188.200 80 {
    # (query realserver status every 10 seconds)
    delay_loop 10
    # (LVS algorithm)
    lb_algo wlc
    # (DR mode)
    lb_kind DR
    # (connections from the same IP go to the same realserver within this many seconds; 0 disables persistence)
    persistence_timeout 0
    # (check realserver status using the TCP protocol)
    protocol TCP
    real_server 192.168.188.129 80 {
        # (weight)
        weight 100
        TCP_CHECK {
            # (timeout after 10 seconds of no response)
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.188.137 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
////////////////
The IP information inside needs to be changed to match your environment.
Execute ipvsadm -C to clear the previous ipvsadm rules; at this point there is no virtual IP 200.
systemctl start keepalived // start keepalived
ip addr // the virtual IP 200 is there again
systemctl restart network can clear a previously configured VIP.
The /usr/local/sbin/lvs_rs.sh script must still be executed on both RS.
Summary:
keepalived has a nice feature: when an RS goes down, it stops forwarding requests to that RS.
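To see this behaviour, stop nginx on one RS and watch keepalived drop it from the forwarding table within a few health-check cycles (an illustrative check, not a course step):
systemctl stop nginx          # on rs1
ipvsadm -ln                   # on dir: 192.168.188.129:80 disappears after the TCP_CHECK fails
systemctl start nginx         # on rs1: the RS is added back automatically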
