First, memcached cluster
- Memcached consists of the memcached server and the Memcache client, and the distributed cache effect must be implemented by the client. Its distribution is a pseudo-cluster: memcached nodes do not communicate with each other, there is no data backup, and the load-balancing function is implemented entirely by the client (see the hash sketch after this list).
- Memcached itself is a memory-based cache, and the design has no redundancy mechanism. If a memcached node loses all of its data (power outage, restart, and so on), the backend application can in theory fetch the data from the database again, but under heavy traffic this greatly aggravates the load on the database. Adding more nodes reduces the impact of losing any one node, and a hot-standby node can take over the VIP while another node is down, but since the nodes cannot synchronize data with each other, each node remains a single point of failure for the data it holds.
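To make the client-side distribution concrete, here is a minimal sketch (an illustration, not code from the source) of the simple hash-remainder algorithm a Memcache client uses to pick a node; the server list and key are placeholders:
SERVERS=(192.168.1.100:11211 192.168.1.200:11211) //candidate memcached nodes
KEY="test" //the key being stored or fetched
HASH=$((16#$(echo -n "$KEY" | md5sum | cut -c1-8))) //hash the key (first 8 hex digits of its md5)
NODE=${SERVERS[$((HASH % ${#SERVERS[@]}))]} //remainder of hash by node count selects the node
echo "key '$KEY' maps to $NODE"
Because the mapping lives entirely in the client, adding or removing a node changes the remainder for most keys; the nodes themselves never need to communicate.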
Second, ways to implement a memcached cache cluster
1. Repcached
- repcached (replication cached, a high-availability technology, in short: replication buffer technology). Its main advantage is data redundancy: both memcached instances can handle read and write operations. However, it only supports a single-master, single-slave scheme, so its limitations are significant; repcached must also match the memcached version it patches (you can also download an integrated package of memcached with repcached built in).
Note: Although the repcached solution is limited on its own, in practice it is usually combined with other software (for example repcached+magent+monit: repcached handles synchronous single-master/single-slave backup, the magent agent implements N-master/N-backup, and monit monitors each instance port of the above components to ensure automatic restart on failure).
- (repcached is memcached high-availability technology invented in Japan, in short: replication buffer technology.) It is a single-master, single-slave scheme, but both master and slave are readable and writable and synchronize with each other. If the master goes down, the slave detects the broken connection, automatically starts listening, becomes the master, and waits for a new slave node to join. If the original, failed master recovers, it can only be started manually as a slave node; it cannot preempt and become the new master again unless the new master (the former slave) also fails. In other words, repcached's memcached master/slave implementation has no master preemption. If both the master and the slave go down, the data is lost! This is a weak point of repcached, but it can be compensated with other tools (such as keepalived). Likewise, if the slave fails, the master detects the broken connection and listens again, waiting for a new slave to join. (A start-up sketch follows.)
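For reference, a repcached pair is typically started as below. The -x/-X replication options are what the repcached patch adds to memcached; treat the exact flags as an assumption to verify against your repcached version:
memcached -d -u root -l 192.168.1.100 -p 11211 -X 11212 //node A: listens for a replication peer on 11212 and becomes master
memcached -d -u root -l 192.168.1.200 -p 11211 -x 192.168.1.100 -X 11212 //node B: replicates from node A and becomes slave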
2.Magent
- Magent writes data to both the master memcached and the slave memcached, using the same algorithm for writes to master and slave.
- When the master memcached goes down, magent reads the data from the slave memcached.
- When the master memcached recovers, magent reads from the master memcached again; since the master has only just recovered, it holds no data, so some data becomes unreadable. This is a major drawback of magent.
Note: Magent is used to implement a multi-master, multi-slave structure, but data is not synchronized between nodes; the slave nodes are used only for backup, and requests are not sent to the slave nodes (except when a master node fails).
- In a production environment the chance of the master memcached being down is very small; most of the time it is working, and the slave memcached is used only after the master goes down. The space allocated to the slave memcached therefore cannot reasonably be as large as the master's; that would undoubtedly waste precious memory space.
- Since the slave memcached is allocated little space, and more and more data is deposited over time, cached data is constantly evicted from memory; so after a master memcached outage, the slave memcached can only temporarily relieve the pressure on the database.
- After the master memcached goes down, it should not be started again right away; start it when database pressure is low, and warm the cache at that point (a large number of simultaneous requests would congest the node).
- You can also deploy two magent nodes to load-balance the memcached entry point: read and write requests are distributed to the two magent portals by some algorithm, one handling read requests and the other dedicated to write requests, achieving both high availability and load balancing.
- The magent cache proxy prevents a single point of failure at the cache tier: the client (Memcache) connects to the cache proxy server (magent), and the cache proxy (magent) connects to the cache servers (memcached). The cache proxy server (magent) can connect to multiple memcached instances, but if the cache proxy server (magent) itself fails, it can no longer serve, so it is made highly available with the keepalived software.
Note: As said earlier, Memcache is generally deployed together with the Web tier, and magent is generally deployed together with memcached; the calling cooperation between Memcache and magent is written by the developers through Java calls (or study the Java configuration yourself).
Third, magent implementation schemes
Magent
- M1 (memcached) and M2 (memcached) back each other up; MA (magent) dispatches to both memcached nodes, so that when M1 goes down, MA can still get the data from M2 with no impact on users.
- Disadvantages:
- When M1 is down, data can still be fetched from M2, but when M1 recovers, MA cannot get the old data back from it (M1 lost its cached data in the failure and starts empty). With N hosts, the share of memcached data lost is only 1/N, but fetching data from M2 while M1 is down is inefficient.
- M1 cannot synchronize data from M2 while it restarts.
Magent+repcached
- First, load sharing (reads, writes, or polling) is done from MA-user (Memcache) to MA-1 (magent) and MA-2 (magent); using multiple MA-n (magent) nodes makes efficient use of the system's hardware resources to respond quickly to data access requests.
- Second, single-point failure recovery is achieved at the MA-1 to M1-s (memcached) / M1-b (memcached) layer.
- Specifically, when M1-s goes down, M1-b automatically becomes the master, which brings two benefits when M1-s later restarts and recovers. First: for MA-user, the logical order of MA-1/MA-2 has not changed, which is good for systems where an MA-mgr exists, because MA-user uses a simple hash-remainder algorithm (as sketched earlier) when assigning keys to MA-n. Second: for MA-1, the primary/backup relationship was already specified at initialization and is not modified during use, so however the primary/backup relationship inside MA-1 changes, it is shielded at the MA-n level and does not affect subsequent extension and porting.
Case: repcached+magent+monit+memcached
Case: magent+memcached Cluster
Host | System | IP | Network card | Software
---|---|---|---|---
Magent_1 | CentOS 6.7 64-bit | 192.168.1.10 | Vmnet1 (bridged) | magent, keepalived
Magent_2 | CentOS 6.7 64-bit | 192.168.1.20 | Vmnet1 (bridged) | magent, keepalived
Memcached_1 | CentOS 6.7 64-bit | 192.168.1.100 | Vmnet1 | memcached, libevent
Memcached_2 | CentOS 6.7 64-bit | 192.168.1.200 | Vmnet1 | memcached, libevent
Test machine | CentOS 6.7 64-bit | 192.168.1.111 | Vmnet1 | telnet
Memcached_1 (Server)
1. Environment preparation (Memcached_1)
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0 //NIC name
TYPE=Ethernet //NIC type: Ethernet
ONBOOT=yes //bring the NIC up at boot
NM_CONTROLLED=no //disable NetworkManager control
BOOTPROTO=static //static addressing
IPADDR=192.168.1.100 //IP address
NETMASK=255.255.255.0 //subnet mask
/etc/init.d/network restart //restart the network service
2. Installing Memcached (memcached_1)
Installing Libevent
Libevent is the asynchronous event notification library that memcached depends on; install it first as a memcached dependency.
tar -zxvf libevent-1.4.9-stable.tar.gz -C /usr/src/
cd /usr/src/libevent-1.4.9-stable/
./configure --prefix=/usr/local/libevent
make && make install
Installing memcached
tar -zxvf memcached-1.2.6.tar.gz -C /usr/src/
cd /usr/src/memcached-1.2.6/
./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent
Options:
--with-libevent: Specify Libevent Event Library location
make && make install
echo "PATH=$PATH:/usr/local/memcached/bin">>/etc/profile
source /etc/profile
vim /etc/ld.so.conf //open the system's extra library definition file
/usr/local/libevent/lib //add the libevent library directory path
ldconfig //re-read the contents of /etc/ld.so.conf
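Optionally, confirm the library is now visible to the dynamic linker:
ldconfig -p | grep libevent //the libevent shared objects should be listed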
3. Start the Memcached service (memcached_1)
memcached -d -m 1 -u root -l 192.168.1.100 -p 11211 //run as a daemon (-d) with 1 MB of cache memory (-m), as user root (-u), listening on this node's IP (-l) and port 11211 (-p)
netstat -utpln |grep 11211 //confirm memcached is listening on 11211
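As an optional sanity check (a sketch using bash's built-in /dev/tcp, so no extra packages are needed), query the daemon's statistics:
exec 3<>/dev/tcp/192.168.1.100/11211 //open a TCP connection to memcached
printf 'stats\r\nquit\r\n' >&3 //request server statistics, then quit
head -5 <&3 //print the first few stat lines (pid, uptime, ...)
exec 3<&- 3>&- //close the descriptor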
Memcached_2 (Server)
1. Environment preparation (Memcached_2)
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0 //NIC name
TYPE=Ethernet //NIC type: Ethernet
ONBOOT=yes //bring the NIC up at boot
NM_CONTROLLED=no //disable NetworkManager control
BOOTPROTO=static //static addressing
IPADDR=192.168.1.200 //IP address
NETMASK=255.255.255.0 //subnet mask
/etc/init.d/network restart //restart the network service
2. Installing Memcached (memcached_2)
Installing Libevent
Libevent is the asynchronous event notification library that memcached depends on; install it first as a memcached dependency.
tar -zxvf libevent-1.4.9-stable.tar.gz -C /usr/src/
cd /usr/src/libevent-1.4.9-stable/
./configure --prefix=/usr/local/libevent
make && make install
Installing memcached
tar -zxvf memcached-1.2.6.tar.gz -C /usr/src/
cd /usr/src/memcached-1.2.6/
./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent
Options:
--with-libevent: Specify Libevent Event Library location
make && make install
echo "PATH=$PATH:/usr/local/memcached/bin">>/etc/profile
source /etc/profile
vim /etc/ld.so.conf //open the system's extra library definition file
/usr/local/libevent/lib //add the libevent library directory path
ldconfig //re-read the contents of /etc/ld.so.conf
3. Start the Memcached service (memcached_2)
memcached -d -m 1 -u root -l 192.168.1.200 -p 11211 //run as a daemon (-d) with 1 MB of cache memory (-m), as user root (-u), listening on this node's IP (-l) and port 11211 (-p)
netstat -utpln |grep 11211 //confirm memcached is listening on 11211
Magent_1 (Client)
1. Environment preparation (Magent_1)
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0 //NIC name
TYPE=Ethernet //NIC type: Ethernet
ONBOOT=yes //bring the NIC up at boot
NM_CONTROLLED=no //disable NetworkManager control
BOOTPROTO=static //static addressing
IPADDR=192.168.1.10 //IP address
NETMASK=255.255.255.0 //subnet mask
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1 //NIC name
TYPE=Ethernet //NIC type: Ethernet
ONBOOT=yes //bring the NIC up at boot
NM_CONTROLLED=no //disable NetworkManager control
BOOTPROTO=dhcp //dynamic addressing
/etc/init.d/network restart //restart the network service
2. Dependency installation (Magent_1)
yum clean all && yum repolist //clear the YUM cache and regenerate it
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
yum -y install libevent-devel
3. Installing Magent (Magent_1)
mkdir /usr/src/magent-0.5 //create the source extraction directory
tar -zxvf magent-0.5.tar.gz -C /usr/src/magent-0.5
cd /usr/src/magent-0.5
ls -l /usr/src/magent-0.5
Note: The magent software ships as source code; in this source, ./configure has already been executed by default and the Makefile generated, so the user only needs to modify the header and Makefile files as needed and then run make directly.
Error One: event.h not found
Because libevent was compiled with a custom installation prefix, while software generally searches for .h header files in /usr/include or /usr/local/include, the custom event.h cannot be found. Either point the build at the custom path, or copy the headers into the /usr/include or /usr/local/include header directory.
scp [email protected]:/usr/local/libevent/include/* /usr/local/include/
Error Two: event-config.h and evutil.h not found
The cause and solution are the same as above and are not elaborated again.
scp [email protected]:/usr/local/libevent/include/* /usr/local/include/
Error Three: SSIZE_MAX not declared
This is because the program references the SSIZE_MAX constant, which is not defined anywhere in the file; define the constant in the header file, setting its value to 32767.
vim ketama.h
#ifndef SSIZE_MAX //test whether SSIZE_MAX has already been macro-defined
#define SSIZE_MAX 32767 //if not, define it and compile
#endif //if it is already defined, the define is skipped and the rest of the file proceeds as usual
(add the three lines above at the beginning of the file)
Note: When editing, there are two ketama files with the same name but different suffixes; ketama.c generally holds the concrete function implementations, while ketama.h is the header file, generally holding type definitions, function declarations, and so on (of course there is no essential difference between the two files; the split just helps developers tell them apart). So be sure to edit ketama.h, not the wrong file.
Error Four: undefined reference to floor
floor is a function provided by the math library; by default GCC does not automatically link the math library, so you need to manually add -lm in the Makefile so that the program links against the math library functions.
vim Makefile
LIBS = -levent -lm //append the new -lm flag after the original options
make //after a successful build, the executable is generated in the source directory
cp magent /usr/bin //copy the executable into a PATH search directory
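Before wiring magent into keepalived, it can be started once by hand to confirm the build works. This smoke test reuses the flags that appear later in this article, with -l pointed at the loopback address purely for the test:
magent -u root -n 51200 -l 127.0.0.1 -p 12000 -s 192.168.1.100:11211 -b 192.168.1.200:11211 //test instance against both memcached nodes
netstat -utpln | grep 12000 //confirm magent is listening
pkill magent //stop the test instance again; keepalived will manage magent from here on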
Installing Keepalived (magent_1)
yum -y install keepalived //install the Keepalived package with YUM
vim /etc/keepalived/keepalived.conf
global_defs { //notification targets and machine identification on failure
    router_id MASTER //string identifying this node, usually the hostname
}
vrrp_script magent { //health check (prevents split-brain)
    script "/opt/magent.sh" //script or command to run; here it checks keepalived's state and, if normal, starts magent
    interval 1 //script run interval (seconds)
    weight -10 //if the check script fails, the priority drops by 10 points; if unset, the backup node can also preempt
}
vrrp_instance VI_1 { //define the externally served VIP area and its attributes
    state MASTER //MASTER or BACKUP; when the other nodes' keepalived starts, the node with the higher priority is elected MASTER, so this item has little real effect
    interface eth0 //NIC holding the node's own IP, used to send VRRP packets
    virtual_router_id 51 //value in 0-255, distinguishes the VRRP multicast of multiple instances (nodes)
    priority 100 //used for master election; to become master, this value should be about 50 points higher than the other machines'; valid range 1-255 (values outside it are treated as the default 100)
    advert_int 1 //interval between VRRP packets, i.e. how often a master election (think of it as a health-check interval) takes place
    authentication { //authentication area; types are PASS and AH (IPSEC); PASS is recommended (only the first 8 characters of the password are used)
        auth_type PASS //authentication type PASS
        auth_pass 1111 //authentication password; must be identical on all nodes
    }
    track_script { //scripts to track and execute
        magent //name of the vrrp_script block to run
    }
    virtual_ipaddress { //VIP definition
        192.168.1.254 //the VIP to use
    }
}
Writing magent monitoring scripts (magent_1)
vim /opt/magent.sh
#!/bin/bash //shell interpreter
KEEPALIVED=`ps -ef |grep keepalived |grep -v grep |wc -l` //count the keepalived processes on this host
if [ $KEEPALIVED -gt 0 ];then //if the count is greater than 0, keepalived is running
magent -u root -n 51200 -l 192.168.1.254 -p 12000 -s 192.168.1.100:11211 -b 192.168.1.200:11211 //with keepalived up, start magent on the cluster VIP and port, defining the backend master and backup memcached nodes; this host now serves requests
else //a count of 0 means keepalived has stopped or is not running
pkill -9 magent //kill magent (if previously started) so the backup node's magent can take over the service
fi
chmod +x /opt/magent.sh //make the script executable
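Optionally, syntax-check the script before keepalived starts invoking it (a small extra step, not part of the original procedure):
bash -n /opt/magent.sh //parse without executing; no output means no syntax errors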
/etc/init.d/keepalived start //start keepalived; the track script launches magent
ip a //confirm the VIP 192.168.1.254 is bound on eth0
ps aux | grep magent //confirm the magent process is running
Magent_2
1. Environment preparation (Magent_2)
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0 //NIC name
TYPE=Ethernet //NIC type: Ethernet
ONBOOT=yes //bring the NIC up at boot
NM_CONTROLLED=no //disable NetworkManager control
BOOTPROTO=static //static addressing
IPADDR=192.168.1.20 //IP address (per the host table; 192.168.1.200 belongs to Memcached_2)
NETMASK=255.255.255.0 //subnet mask
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1 //NIC name
TYPE=Ethernet //NIC type: Ethernet
ONBOOT=yes //bring the NIC up at boot
NM_CONTROLLED=no //disable NetworkManager control
BOOTPROTO=dhcp //dynamic addressing
/etc/init.d/network restart //restart the network service
2. Dependency installation (Magent_2)
yum clean all && yum repolist //clear the YUM cache and regenerate it
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
yum -y install libevent-devel
3. Installing Magent (magent_2)
Since the Magent_1 host has already compiled and generated the magent command, Magent_2 only needs to copy the command into its own PATH.
scp [email protected]:/usr/bin/magent /usr/bin/
Note: Make sure the openssh-clients package is installed on both hosts.
Installing Keepalived (magent_2)
yum -y install keepalived //install the Keepalived package with YUM
Since Magent_1's keepalived has already been configured, simply copy the configuration to the local machine and modify it before use.
scp [email protected]:/etc/keepalived/keepalived.conf /etc/keepalived/
vim /etc/keepalived/keepalived.conf //adjust the copied configuration for the backup role (see the sketch below)
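The source does not list the exact edits; under the usual keepalived master/backup convention, the changes would plausibly be the following (an assumption to adapt to your setup):
router_id BACKUP //give this node its own identifier
state BACKUP //this node starts as the backup
priority 90 //lower than the master's 100, so Magent_1 wins the election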
Writing magent monitoring scripts (magent_2)
vim /opt/magent.sh
#!/bin/bash //shell interpreter
VIP=$(ip a |grep 192.168.1.254 | wc -l) //check whether the VIP is present locally (present means the master failed and the VIP moved to this backup host)
if [ $VIP -gt 0 ];then //if the count is greater than 0, the VIP has moved here and this host continues the service
magent -u root -n 51200 -l 192.168.1.254 -p 12000 -s 192.168.1.100:11211 -b 192.168.1.200:11211 //with the VIP listening locally, start magent on the cluster VIP and port, defining the backend master and backup memcached nodes; this host now serves requests
else //a count of 0 means the VIP is not held here and the master node is still working
pkill -9 magent //kill magent (if previously started) and let the master node's magent serve
fi
chmod +x /opt/magent.sh
/etc/init.d/keepalived start
Test
1. Test High Availability
/etc/init.d/keepalived stop //stop keepalived on the master node; the VIP fails over
ip a //run on the backup node: with the master failed, the VIP has automatically moved here
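Keepalived preempts by default, so once the master's keepalived is started again, the VIP should move back (a quick check assuming the default preemption behavior):
/etc/init.d/keepalived start //restart keepalived on Magent_1
ip a //on Magent_1: the VIP 192.168.1.254 returns, since priority 100 beats the backup's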
2. Environment preparation (test machine)
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0 //NIC name
TYPE=Ethernet //NIC type: Ethernet
ONBOOT=yes //bring the NIC up at boot
NM_CONTROLLED=no //disable NetworkManager control
BOOTPROTO=static //static addressing
IPADDR=192.168.1.111 //IP address
NETMASK=255.255.255.0 //subnet mask
/etc/init.d/network restart //restart the network service
3. Cache Testing
yum clean all && yum repolist //clear the YUM cache and regenerate it
mount /dev/cdrom /mnt/ //mount the CD-ROM to /mnt/
yum -y install telnet //install the remote login tool
4.VIP Test
telnet 192.168.1.254 12000 //connect to the Magent VIP and port
Trying 192.168.1.254...
Connected to 192.168.1.254.
Escape character is '^]'.
set test 0 0 4 //create key "test" (set <key> <flags> <exptime> <bytes>): flags 0, no expiry, 4-byte value
hehe //the 4-byte value for key "test"
STORED
quit //exit
Connection closed by foreign host.
telnet 192.168.1.254 12000 //connect to the Magent VIP and port again
Trying 192.168.1.254...
Connected to 192.168.1.254.
Escape character is '^]'.
get test //read back the test key just inserted
VALUE test 0 4
hehe
END
quit //exit
Connection closed by foreign host.
5. Master Node test
telnet 192.168.1.100 11211 //connect to the master cache node and port
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
get test //check whether the master node can return the data
VALUE test 0 4
hehe
END
quit //exit
Connection closed by foreign host.
6. Slave node test
telnet 192.168.1.200 11211 //connect to the slave cache node and port
Trying 192.168.1.200...
Connected to 192.168.1.200.
Escape character is '^]'.
get test //check whether the slave node can return the data
VALUE test 0 4
hehe
END
quit //exit
Connection closed by foreign host.
Note: As shown above, the value inserted through the VIP is present on both the primary and the standby cache node.
Simulating primary node failure
pkill memcached //kill the memcached process on the master cache node (Memcached_1)
telnet 192.168.1.254 12000 //connect to the Magent VIP and port
Trying 192.168.1.254...
Connected to 192.168.1.254.
Escape character is '^]'.
get test //verify the data can still be read through the VIP
VALUE test 0 4
hehe
END
quit //exit
Connection closed by foreign host.
The above shows that even with a single memcached node down, the cache still exists.
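The same check can be scripted (a sketch reusing bash's built-in /dev/tcp, as in the earlier stats check):
exec 3<>/dev/tcp/192.168.1.254/12000 //connect to magent through the VIP
printf 'get test\r\nquit\r\n' >&3 //request the key, then quit
cat <&3 //prints VALUE test 0 4, hehe, END
exec 3<&- 3>&- //close the descriptor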
Note: Once a failed memcached node has been repaired, the cache is not passed back to it. If the primary node specified by magent fails, the primary node's cache data is lost, so its memcached service cannot be restarted immediately after repair: if it is restarted, clients will go back to querying the (now empty) master node for data, and on a high-traffic site this will drag down the database. It is therefore recommended to start the memcached master node's service during an off-peak business period, and then warm the primary cache node and standby cache node again through Magent.
Magent+Keepalived+Memcached high-availability cache cluster