MySQL master-master replication + LVS + keepalived for MySQL high availability


MySQL replication provides data redundancy and read/write splitting to share system load, and master-master replication also guards against a single point of failure at the master node. However, MySQL master-master replication by itself does not meet all real-world needs: it provides no unified access entry point for load balancing, and if a master goes down, traffic must be switched to the other master manually rather than automatically.

This article describes how to achieve MySQL high availability with LVS + keepalived, solving the problems above.

Introduction to Keepalived and LVS

Keepalived is a software solution based on VRRP (Virtual Router Redundancy Protocol) that provides high availability for services and avoids single points of failure. Keepalived is generally used for lightweight high availability that does not require shared storage, typically between two nodes; common combinations are LVS + keepalived and nginx + keepalived.

LVS (Linux Virtual Server) is a highly available virtual server cluster system. Founded in May 1998 by Dr. Zhang Wensong, it is one of the earliest free software projects in China.

LVS is mainly used for network-layer load balancing across multiple servers. In a server cluster built with LVS, the front-end load-balancing layer is called the Director Server, and the back-end group of servers doing the real work is called the Real Servers. See the LVS architecture overview for a picture of this layout.

LVS has three working modes: DR (Direct Routing), TUN (IP Tunneling), and NAT (Network Address Translation). TUN mode can support more real servers, but requires every server to support the IP tunneling protocol. DR supports a comparable number of real servers, but requires that the Director Server's virtual NIC and physical NIC be on the same network segment. NAT's scalability is limited: it cannot support as many real servers, because every request and reply packet must be parsed and rewritten by the Director Server, which hurts efficiency. The LVS load balancer also offers ten scheduling algorithms: rr, wrr, lc, wlc, lblc, lblcr, dh, sh, sed, and nq.
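To make the wrr (weighted round-robin) algorithm used later in this article concrete, here is a minimal Python sketch of the classic LVS weighted round-robin loop; the server addresses and weights are illustrative, not taken from LVS source code:

```python
from math import gcd
from functools import reduce

def wrr(servers):
    """Generator implementing the classic weighted round-robin scheme:
    servers is a list of (name, weight) pairs; servers with higher
    weights are picked proportionally more often."""
    weights = [w for _, w in servers]
    g = reduce(gcd, weights)   # step size for the current-weight counter
    mx = max(weights)
    i, cw = -1, 0
    while True:
        i = (i + 1) % len(servers)
        if i == 0:
            cw -= g
            if cw <= 0:
                cw = mx
        if weights[i] >= cw:
            yield servers[i][0]

# With weights 3:1, the first server receives three of every four picks.
gen = wrr([("192.168.1.5", 3), ("192.168.1.6", 1)])
print([next(gen) for _ in range(8)])
```

With equal weights, as configured later in this article, wrr degenerates into plain round-robin across the two MySQL servers.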

For a detailed introduction to LVS, see the LVS project documentation.

In this article, we will use LVS to provide read/write load balancing for MySQL, and keepalived to avoid single points of failure at each node.

LVS + keepalived environment preparation

LVS1: 192.168.1.2

LVS2: 192.168.1.11

MySQL server1: 192.168.1.5

MySQL server2: 192.168.1.6

VIP: 192.168.1.100

OS: CentOS 6.4

Keepalived installation

The following packages need to be installed

# yum install -y kernel-devel openssl openssl-devel

Unpack keepalived under /usr/local/ and enter the directory to configure and compile:

# ./configure --prefix=/usr/local/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-431.5.1.el6.x86_64/

Keepalived configuration
------------------------
Keepalived version       : 1.2.13
Compiler                 : gcc
Compiler flags           : -g -O2
Extra Lib                : -lssl -lcrypto -lcrypt
Use IPVS Framework       : Yes
IPVS sync daemon support : Yes
IPVS use libnl           : No
fwmark socket support    : Yes
Use VRRP Framework       : Yes
Use VRRP VMAC            : Yes
SNMP support             : No
SHA1 support             : No
Use Debug flags          : No

# make && make install

By default, keepalived looks for its configuration file under /etc/keepalived at startup, so copy the required files to the expected locations:

# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# chkconfig mysqld on
# chkconfig keepalived on
LVS installation


The following packages need to be installed

# yum install -y libnl* popt*

Check whether the LVS module is loaded:

# modprobe -l | grep ip_vs

Unpack and install ipvsadm:

# ln -s /usr/src/kernels/2.6.32-431.5.1.el6.x86_64/ /usr/src/linux
# tar -zxvf ipvsadm-1.26.tar.gz
# cd ipvsadm-1.26
# make && make install

LVS installation is complete; view the current LVS cluster:

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
LVS + keepalived configuration

Set up MySQL master-master replication

The setup is not repeated here; refer to the MySQL replication documentation.
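For reference, a master-master setup typically amounts to symmetric my.cnf settings on both nodes plus pointing each server at the other. A minimal sketch, assuming illustrative server IDs and auto-increment settings (not taken from the referenced article):

```ini
# /etc/my.cnf on MySQL server1 (192.168.1.5); mirror this on server2
# with server-id = 2 and auto-increment-offset = 2
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto-increment-increment = 2   ; two masters: step auto-increment by 2
auto-increment-offset    = 1   ; server1 uses odd IDs, server2 even IDs
```

After restarting MySQL on both nodes, each server is pointed at the other with a CHANGE MASTER TO statement (host, replication user, password, and binlog coordinates filled in for your environment), followed by START SLAVE on both sides.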

Configure keepalived

The following is the keepalived configuration on the LVS1 node (the keepalived master node); LVS2 is similar.

# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id lvs1
}

vrrp_instance VI_1 {
    state MASTER            # initial state of the instance; the actual role is decided by priority. Use BACKUP on the backup node
    interface eth0          # interface the virtual IP lives on
    virtual_router_id 51    # VRID (value illustrative); routers with the same VRID form one group, which determines the multicast MAC address
    priority 100            # priority (value illustrative); set the backup node to 90
    advert_int 1            # advertisement interval
    authentication {
        auth_type PASS      # authentication type, PASS or AH
        auth_pass 1111      # authentication password
    }
    virtual_ipaddress {
        192.168.1.100       # the VIP
    }
}

virtual_server 192.168.1.100 3306 {
    delay_loop 6            # health-check polling interval
    lb_algo wrr             # weighted round robin; LVS scheduling algorithms: rr|wrr|lc|wlc|lblc|dh|sh
    lb_kind DR              # LVS cluster mode: NAT|DR|TUN; DR requires the balancer's virtual NIC and physical NIC to be on the same segment
    #nat_mask 255.255.255.0
    persistence_timeout 50  # session persistence time (value illustrative)
    protocol TCP            # health-check protocol

    # real server settings; 3306 is the MySQL port
    real_server 192.168.1.5 3306 {
        weight 3            # weight
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 192.168.1.6 3306 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}
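The TCP_CHECK health check above simply attempts a TCP connection to each real server's port 3306 and marks the server down on failure. A rough Python equivalent of that probe (the function name and defaults are illustrative assumptions, not keepalived internals):

```python
import socket

def tcp_check(host, port, timeout=3):
    """Rough equivalent of keepalived's TCP_CHECK: the service is
    considered healthy if a TCP connection succeeds within timeout
    seconds, unhealthy otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. tcp_check("192.168.1.5", 3306) tells you whether MySQL is reachable
```

Note that this only verifies the port is accepting connections; it does not confirm MySQL can actually serve queries, which is a known limitation of plain TCP checks.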
Configuring LVS

Write the LVS startup script /etc/init.d/realserver:

#!/bin/sh
VIP=192.168.1.100
. /etc/rc.d/init.d/functions

case "$1" in
# suppress local ARP responses for the VIP and bind it to the loopback interface
start)
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/sysctl -p > /dev/null 2>&1
    # bind the VIP on the loopback with a /32 mask so the real server (itself) can communicate with the Director Server
    /sbin/ifconfig lo:0 $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "LVS-DR real server started successfully."
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP > /dev/null 2>&1
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "LVS-DR real server stopped."
    ;;
status)
    isLoOn=`/sbin/ifconfig lo:0 | grep "$VIP"`
    isRoOn=`/bin/netstat -rn | grep "$VIP"`
    if [ "$isLoOn" == "" -a "$isRoOn" == "" ]; then
        echo "LVS-DR real server has not run yet."
    else
        echo "LVS-DR real server is running."
    fi
    exit 3
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0

Add the LVS script to run at boot:

# chmod +x /etc/init.d/realserver
# echo "/etc/init.d/realserver start" >> /etc/rc.d/rc.local

Start LVS and keepalived:

# service realserver start
# service keepalived start

Note the change in the network interfaces at this point: the VIP is now bound on the real server's loopback alias.

Now check the LVS cluster status; you can see that the cluster has two real servers, along with the scheduling algorithm, weights, and other information. ActiveConn shows the number of active connections on each real server.

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:3306 wrr persistent
  -> 192.168.1.5:3306             Route   3      4          1
  -> 192.168.1.6:3306             Route   3      0          2

The LVS + keepalived + MySQL master-master replication setup is now complete.

Testing and validation

Functional validation

Stop MySQL server2:

# service mysqld stop

On LVS1, check the keepalived entries in /var/log/messages: LVS1 detects that MySQL server2 is down, and the LVS cluster automatically removes the failed node.

Sep  9 13:50:53 192.168.1.2 Keepalived_healthcheckers[18797]: TCP connection to [192.168.1.6]:3306 failed !!!
Sep  9 13:50:53 192.168.1.2 Keepalived_healthcheckers[18797]: Removing service [192.168.1.6]:3306 from VS [192.168.1.100]:3306

After restarting MySQL server2, the failed node is automatically added back to the LVS cluster:

Sep  9 13:51:41 192.168.1.2 Keepalived_healthcheckers[18797]: TCP connection to [192.168.1.6]:3306 success.
Sep  9 13:51:41 192.168.1.2 Keepalived_healthcheckers[18797]: Adding service [192.168.1.6]:3306 to VS [192.168.1.100]:3306

Stop keepalived on LVS1 (simulating a failure) and check the logs on LVS1; keepalived removes the VIP from LVS1:

Sep  9 14:01:27 192.168.1.2 Keepalived[18796]: Stopping Keepalived v1.2.13 (09/09,2014)
Sep  9 14:01:27 192.168.1.2 Keepalived_healthcheckers[18797]: Removing service [192.168.1.5]:3306 from VS [192.168.1.100]:3306
Sep  9 14:01:27 192.168.1.2 Keepalived_healthcheckers[18797]: Removing service [192.168.1.6]:3306 from VS [192.168.1.100]:3306
Sep  9 14:01:27 192.168.1.2 Keepalived_vrrp[18799]: VRRP_Instance(VI_1) sending 0 priority
Sep  9 14:01:27 192.168.1.2 Keepalived_vrrp[18799]: VRRP_Instance(VI_1) removing protocol VIPs.

Meanwhile, the logs on LVS2 show that LVS2 became MASTER and took over the VIP:

Sep  9 14:11:24 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) Transition to MASTER STATE
Sep  9 14:11:25 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) Entering MASTER STATE
Sep  9 14:11:25 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) setting protocol VIPs.
Sep  9 14:11:25 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.100
Sep  9 14:11:25 192.168.1.11 Keepalived_healthcheckers[7456]: Netlink reflector reports IP 192.168.1.100 added
Sep  9 14:11:25 192.168.1.11 avahi-daemon[1407]: Registering new address record for 192.168.1.100 on eth0.IPv4.
Sep  9 14:11:30 192.168.1.11 Keepalived_vrrp[7457]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.100

Check the LVS cluster status on LVS2; everything is OK:

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:3306 wrr persistent
  -> 192.168.1.5:3306             Route   3      2          0
  -> 192.168.1.6:3306             Route   3      1          0
Summary
    • MySQL master-master replication is the foundation of the cluster, forming the server array in which each node acts as a real server.
    • The LVS server provides load balancing, distributing user requests across the real servers; the failure of one real server does not affect the whole cluster.
    • Keepalived builds an active/standby pair of LVS servers, avoiding a single point of failure at the LVS layer and automatically failing over to the healthy node when a failure occurs.
