MySQL replication provides data redundancy and allows read/write splitting to spread system load; master-master replication can also avoid a single point of failure at the primary node. However, MySQL master-master replication alone does not meet all real-world needs: it offers no unified access entry point for load balancing, and if a master goes down, traffic must be switched to the other master manually rather than automatically.
This article describes how to achieve high availability for MySQL with LVS + Keepalived, solving both of the problems above.
Introduction to Keepalived and LVS
Keepalived is a software solution based on VRRP (Virtual Router Redundancy Protocol) that provides high availability for services and avoids single points of failure. Keepalived is generally used for lightweight high availability that does not require shared storage, typically between two nodes; common combinations are LVS + Keepalived and Nginx + Keepalived.
LVS (Linux Virtual Server) is a highly available virtual server cluster system. Founded in May 1998 by Dr. Zhang Wensong, it is one of the earliest free software projects in China.
LVS is mainly used for network-layer load balancing across multiple servers. In a server cluster built with LVS, the front-end load balancing layer is called the Director Server, and the back-end group of servers that actually handle requests is called the Real Servers. The figure below gives an overview of the LVS architecture.
(Figure: LVS architecture overview)
LVS has three modes of operation: DR (Direct Routing), TUN (IP Tunneling), and NAT (Network Address Translation). TUN mode can support the largest number of Real Servers, but requires every server to support the IP tunneling protocol. DR supports a comparable number of Real Servers, but requires that the Director Server's virtual NIC and physical NIC be on the same network segment. NAT has limited scalability and cannot support as many Real Servers, because every request and reply packet must be rewritten by the Director Server, which hurts efficiency. LVS also provides ten scheduling algorithms: rr, wrr, lc, wlc, lblc, lblcr, dh, sh, sed, and nq.
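For illustration only (Keepalived will generate the equivalent rules automatically later in this article), here is a minimal sketch of configuring this kind of virtual service by hand with ipvsadm, assuming DR mode, the wrr scheduler, and the VIP and Real Server addresses used below:
# Create the virtual TCP service on the VIP, port 3306, with weighted round-robin scheduling
ipvsadm -A -t 192.168.1.100:3306 -s wrr
# Add both MySQL nodes as Real Servers in DR mode (-g = gatewaying/direct routing), weight 3
ipvsadm -a -t 192.168.1.100:3306 -r 192.168.1.5:3306 -g -w 3
ipvsadm -a -t 192.168.1.100:3306 -r 192.168.1.6:3306 -g -w 3
# List the resulting virtual server table
ipvsadm -L -n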
For detailed LVS documentation, refer to the official LVS project site.
In this article, LVS provides read/write load balancing for MySQL, and Keepalived removes the single point of failure at the director nodes.
LVS + Keepalived Environment Preparation
LVS1: 192.168.1.2
LVS2: 192.168.1.11
MySQL Server1: 192.168.1.5
MySQL Server2: 192.168.1.6
VIP: 192.168.1.100
OS: CentOS 6.4
(Figure: deployment topology of the LVS + Keepalived + MySQL environment)
Keepalived Installation
Download Keepalived.
The following packages need to be installed first:
# yum install -y kernel-devel openssl openssl-devel
Unpack Keepalived under /usr/local/, then enter the source directory to configure and compile it.
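The exact archive name depends on the version you downloaded; the commands below are a minimal build sketch, assuming keepalived-1.2.13.tar.gz (the version that appears in the logs later in this article) and an install prefix that matches the paths copied in the next step:
# tar -zxvf keepalived-1.2.13.tar.gz -C /usr/local/
# cd /usr/local/keepalived-1.2.13
# ./configure --prefix=/usr/local/keepalived
# make && make install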
By default, Keepalived looks for its configuration file under /etc/keepalived at startup, so copy the required files to the expected locations:
# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# chkconfig mysqld on
# chkconfig keepalived on
LVS Installation
Download ipvsadm.
The following packages need to be installed first:
# yum install -y libnl* popt*
Check whether the IPVS kernel modules are available:
# modprobe -l | grep ip_vs
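On newer kernels modprobe -l has been removed; a roughly equivalent check (an alternative, not part of the original article) is:
# modinfo ip_vs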
Unpack and build ipvsadm:
# ln -s /usr/src/kernels/2.6.32-431.5.1.el6.x86_64/ /usr/src/linux
# tar -zxvf ipvsadm-1.26.tar.gz
# cd ipvsadm-1.26
# make && make install
The LVS installation is complete. View the current LVS cluster:
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
LVS + Keepalived Configuration
Set up MySQL master-master replication
The steps are not repeated here; refer to the MySQL replication documentation.
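As a quick reference only (not the full procedure), a minimal master-master sketch: each node gets a distinct server-id and staggered auto-increment settings, and each node is then pointed at the other as a replication source. The replication account, password, and binary log coordinates below are placeholders that you must replace with your own values:
# /etc/my.cnf on MySQL Server1 (192.168.1.5); Server2 uses server-id = 2 and auto_increment_offset = 2
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

-- On each node, point replication at the other node (run in the mysql client);
-- the 'repl' account must already exist with the REPLICATION SLAVE privilege:
CHANGE MASTER TO
  MASTER_HOST='192.168.1.6',           -- the peer node (use 192.168.1.5 on Server2)
  MASTER_USER='repl',                  -- placeholder replication account
  MASTER_PASSWORD='repl_password',     -- placeholder
  MASTER_LOG_FILE='mysql-bin.000001',  -- from SHOW MASTER STATUS on the peer
  MASTER_LOG_POS=4;                    -- from SHOW MASTER STATUS on the peer
START SLAVE;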
Configure keepalived
Below is the Keepalived configuration on the LVS1 node (the Keepalived master); the configuration on LVS2 is similar.
# vim /etc/keepalived/keepalived.conf

! Configuration File for Keepalived

global_defs {
    router_id LVS1
}

vrrp_instance VI_1 {
    state MASTER             # initial state of the instance; the actual role is decided by priority. Set to BACKUP on the other node
    interface eth0           # interface that carries the virtual IP
    virtual_router_id 51     # VRID: nodes with the same VRID form one group and share a virtual MAC address (use the same value on both nodes)
    priority 100             # priority; set to 90 on the backup node
    advert_int 1             # advertisement interval
    authentication {
        auth_type PASS       # authentication type: PASS or AH
        auth_pass 1111       # authentication password
    }
    virtual_ipaddress {
        192.168.1.100        # VIP
    }
}

virtual_server 192.168.1.100 3306 {
    delay_loop 6             # health-check polling interval
    lb_algo wrr              # weighted round-robin; LVS scheduling algorithm (rr|wrr|lc|wlc|lblc|dh|sh)
    lb_kind DR               # LVS forwarding mode: NAT|DR|TUN; DR requires the VIP and the physical NIC to be on the same network segment
    #nat_mask 255.255.255.0
    persistence_timeout 50   # session persistence time (seconds)
    protocol TCP             # health-check protocol

    # Real server settings; 3306 is the MySQL port
    real_server 192.168.1.5 3306 {
        weight 3             # weight
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 192.168.1.6 3306 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}
Configuring LVS
Write the Real Server startup script /etc/init.d/realserver for the MySQL nodes; it binds the VIP on the loopback interface and suppresses ARP replies for it:

#!/bin/sh
# /etc/init.d/realserver - bind the VIP on lo:0 and suppress ARP for it (LVS-DR real server)
VIP=192.168.1.100
. /etc/rc.d/init.d/functions

case "$1" in
start)
    # Disable local ARP responses for the VIP, then bind it to the loopback interface
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p > /dev/null 2>&1
    # Bind the VIP on lo:0 with a /32 mask so this host accepts traffic addressed to the VIP
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP > /dev/null 2>&1
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" == "" -a "$isRoOn" == "" ]; then
        echo "LVS-DR real server is not running."
    else
        echo "LVS-DR real server is running."
    fi
    exit 3
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac
exit 0
Add the script to system startup:
# chmod +x /etc/init.d/realserver
# echo "/etc/init.d/realserver start" >> /etc/rc.d/rc.local
Start the realserver script and Keepalived respectively:
# service realserver start
# service keepalived start
Note that the network interfaces change at this point: you can see that the VIP has been bound on the Real Server.
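To verify, one way (not shown in the original article) is to list the addresses and look for the VIP; on the Real Servers it should appear on lo:0, and on the active director Keepalived adds it to eth0:
# ip addr show | grep 192.168.1.100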
Now check the LVS cluster status: the cluster has two Real Servers, and you can see the scheduling algorithm, weights, and other details. ActiveConn is the number of active connections on each Real Server.
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:3306 wrr persistent 50
  -> 192.168.1.5:3306             Route   3      4          1
  -> 192.168.1.6:3306             Route   3      0          2
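Clients can now reach MySQL through the VIP instead of either node's real address. For example, assuming an account that is allowed to connect from the client host (the account name here is a placeholder):
# mysql -h 192.168.1.100 -P 3306 -u someuser -p -e "SELECT @@hostname;"
Because persistence_timeout is set to 50, repeated connections from the same client within that window will stick to the same back-end node.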
The LVS + Keepalived + MySQL master-master replication setup is now complete.
Testing and Validation
Functional verification
Stop MySQL Server2.
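For example, on 192.168.1.6 (assuming MySQL runs as the mysqld service, as registered with chkconfig above):
# service mysqld stop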
Check the Keepalived entries in /var/log/messages on LVS1: LVS1 detects that MySQL Server2 is down and automatically removes the failed node from the LVS cluster.
Sep  9 13:50:53 192.168.1.2 Keepalived_healthcheckers[18797]: TCP connection to [192.168.1.6]:3306 failed !!!
Sep  9 13:50:53 192.168.1.2 Keepalived_healthcheckers[18797]: Removing service [192.168.1.6]:3306 from VS [192.168.1.100]:3306
After MySQL Server2 is started again, the recovered node is automatically added back to the LVS cluster:
Sep  9 13:51:41 192.168.1.2 Keepalived_healthcheckers[18797]: TCP connection to [192.168.1.6]:3306 success.
Sep  9 13:51:41 192.168.1.2 Keepalived_healthcheckers[18797]: Adding service [192.168.1.6]:3306 to VS [192.168.1.100]:3306
Stop Keepalived on LVS1 (simulating a failure of the active director) and check the logs on LVS1; you can see Keepalived remove the VIP from LVS1:
Sep  9 14:01:27 192.168.1.2 Keepalived[18796]: Stopping Keepalived v1.2.13 (09/09,2014)
Sep  9 14:01:27 192.168.1.2 Keepalived_healthcheckers[18797]: Removing service [192.168.1.5]:3306 from VS [192.168.1.100]:3306
Sep  9 14:01:27 192.168.1.2 Keepalived_healthcheckers[18797]: Removing service [192.168.1.6]:3306 from VS [192.168.1.100]:3306
Sep  9 14:01:27 192.168.1.2 Keepalived_vrrp[18799]: VRRP_Instance(VI_1) sending 0 priority
Sep  9 14:01:27 192.168.1.2 Keepalived_vrrp[18799]: VRRP_Instance(VI_1) removing protocol VIPs.
Meanwhile, in the logs on LVS2, you can see that LVS2 becomes MASTER and takes over the VIP:
Sep  9 14:11:24 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep  9 14:11:25 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) Entering MASTER STATE
Sep  9 14:11:25 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) setting protocol VIPs.
Sep  9 14:11:25 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.100
Sep  9 14:11:25 192.168.1.11 Keepalived_healthcheckers[7456]: Netlink reflector reports IP 192.168.1.100 added
Sep  9 14:11:25 192.168.1.11 avahi-daemon[1407]: Registering new address record for 192.168.1.100 on eth0.IPv4.
Sep  9 14:11:30 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.100
Check the LVS cluster status on LVS2; everything is working as expected.
# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:3306 wrr persistent 50
  -> 192.168.1.5:3306             Route   3      2          0
  -> 192.168.1.6:3306             Route   3      1          0
Summary
MySQL master-master replication is the foundation of the cluster: the two MySQL nodes make up the server pool, with each node acting as a Real Server.
The LVS director provides load balancing, distributing client requests across the Real Servers; the failure of a single Real Server does not affect the cluster as a whole.
Keepalived builds an active/standby pair of LVS directors, avoiding a single point of failure at the director layer; when the active director fails, the VIP automatically moves to the healthy node.
This article is from the "Ops Boy" blog; please keep this source when sharing: http://kaliroot.blog.51cto.com/8763915/1874654