Redis's Distributed Solution: Codis


Codis is an open-source distributed Redis solution from Wandoujia ("Pea Pod") and is currently in a stable phase.

Original address: https://github.com/wandoulabs/codis/blob/master/doc/tutorial_zh.md

Codis is a distributed Redis solution. For upper-layer applications, there is no obvious difference between connecting to Codis Proxy and connecting to a native Redis server (apart from a list of unsupported commands), and the application can use it like a single Redis instance. The Codis bottom layer handles request forwarding, non-stop data migration, and everything else behind the scenes; all of this is transparent to the client, which can simply treat the backend as a Redis service with effectively unlimited memory.
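Because codis-proxy speaks the plain Redis wire protocol (RESP), any Redis client can talk to it unchanged. As a minimal sketch (my illustration, not from the original article), the code below hand-encodes a RESP command and sends it to a proxy; the host and the 19000 port are the proxy address used later in this tutorial.

```python
import socket

def encode_resp(*args):
    """Encode a command as a RESP array of bulk strings, the Redis wire format."""
    parts = [b"*%d\r\n" % len(args)]
    for arg in args:
        data = arg if isinstance(arg, bytes) else str(arg).encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(parts)

def ping_proxy(host="10.80.80.124", port=19000):
    """Send PING to a codis-proxy exactly as if it were a plain Redis server."""
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(encode_resp("PING"))
        return sock.recv(64)  # a live proxy replies b"+PONG\r\n"
```

A `SET`/`GET` issued the same way is forwarded by the proxy to whichever backend group owns the key's slot, which is why the client never needs to know about the sharding.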

The basic architecture is as follows.

Codis consists of four components:

Codis Proxy (Codis-proxy)
Codis Manager (Codis-config)
Codis Redis (Codis-server)
ZooKeeper

Codis-proxy is the Redis proxy service that clients connect to. Codis-proxy itself implements the Redis protocol and behaves just like a native Redis instance (similar to Twemproxy). Multiple codis-proxy instances can be deployed for one business; codis-proxy itself is stateless.

Codis-config is the Codis management tool. It supports operations such as adding/removing Redis nodes, adding/removing proxy nodes, and initiating data migration. Codis-config also runs an HTTP server exposing a dashboard, so users can observe the running state of the Codis cluster directly in a browser.

Codis-server is a Redis branch maintained by the Codis project, developed on top of Redis 2.8.13, with added slot support and atomic data-migration commands. The codis-proxy and codis-config components above it can only operate correctly against this patched version of Redis.

ZooKeeper (ZK) is a distributed coordination service framework that provides strong data consistency between nodes. Simply put, when one node changes a value, the change is transactional, and the other nodes can obtain the up-to-date value; clients can be notified of data changes by registering watchers on ZK nodes.

Codis relies on ZooKeeper to store the data routing table and the meta-information of codis-proxy nodes; commands issued by codis-config are synchronized through ZooKeeper to every live codis-proxy.

Notes:

1. Newer versions of Codis support Redis up to 2.8.21.

2. codis-group enables horizontal scaling of Redis.

The deployment environment is as follows:

10.80.80.124: zookeeper_1, codis-config, codis-server (master, slave), codis_proxy_1

10.80.80.126: zookeeper_2, codis-server (master, slave), codis_proxy_2

10.80.80.123: zookeeper_3, codis-server (master, slave), codis_proxy_3

Notes:

1. To ensure the stability and reliability of ZooKeeper, we build a ZooKeeper cluster across 124, 126 and 123 to provide the service;

2. codis-config is the management tool for the distributed Redis deployment; only one instance is needed in the whole cluster to perform management tasks;

3. codis-server and codis-proxy provide the Redis and proxy services on all 3 servers.


I. Deploying the ZooKeeper cluster

1. Configure the hosts (on 3 servers)

10.80.80.124 codis1
10.80.80.126 codis2
10.80.80.123 codis3

2. Configuring the Java Environment (on 3 servers)

vim /etc/profile

###JAVA###
export JAVA_HOME=/usr/local/jdk1.7.0_71
export JRE_HOME=/usr/local/jdk1.7.0_71/jre
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

source /etc/profile
3. Install the Zookeeper (on 3 servers)

cd /usr/local/src
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local
4. Configure environment variables (on 3 servers)

vim /etc/profile

#zookeeper
ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

source /etc/profile
5. Modify the Zookeeper configuration file (on 3 servers)

# Create the zookeeper data and log directories
mkdir -p /data/zookeeper/zk1/{data,log}
cd /usr/local/zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/zk1/data
dataLogDir=/data/zookeeper/zk1/log
clientPort=2181
server.1=codis1:2888:3888
server.2=codis2:2888:3888
server.3=codis3:2888:3888
6. Create the myid file under dataDir, matching each node's id (on 3 servers)

# On 124
cd /data/zookeeper/zk1/data
echo 1 > myid
# On 126
cd /data/zookeeper/zk1/data
echo 2 > myid
# On 123
cd /data/zookeeper/zk1/data
echo 3 > myid
7. Start the Zookeeper service (on 3 servers)

/usr/local/zookeeper-3.4.6/bin/zkServer.sh start
Note: a zookeeper.out log file is generated in the current directory, recording the details of the startup process. Until all cluster members are up, errors such as "myid 2 or myid 3 is not started" may be reported; once the whole cluster has started these go away, so they can be ignored.

8. View the status of all nodes in the zookeeper (on 3 servers)

# 124
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader

# 126
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

# 123
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower


II. Deploying the Codis cluster

1. Install the Go language (on 3 servers)

tar -zxvf go1.4.2.linux-amd64.tar.gz -C /usr/local/
2. Add the Go environment variable (on 3 servers)

vim /etc/profile

#go
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/usr/local/codis

source /etc/profile
3. Install the CODIS (on 3 servers)

go get github.com/wandoulabs/codis
cd $GOPATH/src/github.com/wandoulabs/codis
# Run the bootstrap script, which compiles the Go code and Redis, then run the tests
./bootstrap.sh
make gotest
# After compiling, copy the bin directory and the sample scripts to /usr/local/codis
mkdir -p /usr/local/codis/{conf,redis_conf,scripts}
cp -rf bin /usr/local/codis/
cp sample/config.ini /usr/local/codis/conf/
cp -rf sample/redis_conf /usr/local/codis
cp -rf sample/* /usr/local/codis/scripts
4. Configure Codis-proxy (on 3 servers, with 124 for example)

# On 124
cd /usr/local/codis/conf
vim config.ini

zk=codis1:2181,codis2:2181,codis3:2181
product=codis
# Address of the dashboard web UI. Only one dashboard is needed per Codis
# cluster, so all three proxies point to 10.80.80.124:18087
dashboard_addr=10.80.80.124:18087
coordinator=zookeeper
backend_ping_period=5
session_max_timeout=1800
session_max_bufsize=131072
session_max_pipeline=128
proxy_id=codis_proxy_1

# On 126 and 123, config.ini is identical except for the proxy id:
# proxy_id=codis_proxy_2 (on 126) and proxy_id=codis_proxy_3 (on 123)
5. Modify the Codis-server configuration file (on 3 servers)

# Create the codis-server data and log directories
mkdir -p /data/codis_server/{data,logs}
cd /usr/local/codis/redis_conf

# Master
vim 6380.conf

daemonize yes
pidfile /var/run/redis_6380.pid
port 6380
logfile "/data/codis_server/logs/codis_6380.log"
save 900 1
save 300 10
save 60 10000
dbfilename 6380.rdb
dir /data/codis_server/data

# Slave
cp 6380.conf 6381.conf
sed -i 's/6380/6381/g' 6381.conf
6. Adding kernel parameters

echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p
7. Startup process

cat /usr/local/codis/scripts/usage.md

0. start zookeeper
1. change config items in config.ini
2. ./start_dashboard.sh
3. ./start_redis.sh
4. ./add_group.sh
5. ./initslot.sh
6. ./start_proxy.sh
7. ./set_proxy_online.sh
8. open browser to http://localhost:18087/admin
Although the scripts directory contains the corresponding startup scripts and you could start everything with startall.sh, it is recommended to start the components manually to become familiar with the Codis startup process.

Since ZooKeeper has already been started, we now start the remaining components.

Notes:

1. During startup you need to specify the relevant log or configuration directories; for ease of unified management, we place them under /data/codis.

2. The dashboard only needs to be started on one server in the Codis cluster. Here it is started on 124, and all codis-config commands are likewise run on 124; the other components must be started on all 3 servers.

The relevant commands are as follows:

/usr/local/codis/bin/codis-config -h

usage: codis-config [-c <config_file>] [-L <log_file>] [--log-level=<loglevel>]
       <command> [<args>...]

options:
   -c                        set config file
   -L                        set output log file, default is stdout
   --log-level=<loglevel>    set log level: info, warn, error, debug [default: info]

commands:
   server
   slot
   dashboard
   action
   proxy


(1) Start dashboard (start on 124)

# Dashboard log directory
mkdir -p /data/codis/codis_dashboard/logs

CODIS_HOME=/usr/local/codis
LOG_PATH=/data/codis/codis_dashboard/logs
nohup $CODIS_HOME/bin/codis-config -c $CODIS_HOME/conf/config.ini -L $LOG_PATH/dashboard.log dashboard --addr=:18087 --http-log=$LOG_PATH/requests.log &>/dev/null &

Access the graphical management interface at http://10.80.80.124:18087


(2) Start Codis-server (on 3 servers)

/usr/local/codis/bin/codis-server /data/codis_server/conf/6380.conf
/usr/local/codis/bin/codis-server /data/codis_server/conf/6381.conf
(3) Add Redis server groups (on 124)

Note: each server group is a set of Redis servers with exactly one master and any number of slaves; group ids are integers greater than or equal to 1.

We currently have 3 groups across the 3 servers, so we add 3 groups, each consisting of one master and one slave.

# Related commands
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server

usage:
    codis-config server list
    codis-config server add <group_id> <redis_addr> <role>
    codis-config server remove <group_id> <redis_addr>
    codis-config server promote <group_id> <redis_addr>
    codis-config server add-group <group_id>
    codis-config server remove-group <group_id>

# group 1
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 1 10.80.80.124:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 1 10.80.80.124:6381 slave
# group 2
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 2 10.80.80.126:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 2 10.80.80.126:6381 slave
# group 3
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 3 10.80.80.123:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 3 10.80.80.123:6381 slave

Notes:

1. Clicking "Promote to Master" in the dashboard promotes a slave to master, and the original master automatically goes offline.

2. "codis-config ... server add" can add machines to the corresponding group and can also update a Redis instance's master/slave role.

3. If it is a new machine, the key space here should be empty.

(4) Set the slot ranges served by each server group (on 124)

# Related commands
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini slot

usage:
    codis-config slot init [-f]
    codis-config slot info <slot_id>
    codis-config slot set <slot_id> <group_id> <status>
    codis-config slot range-set <slot_from> <slot_to> <group_id> <status>
    codis-config slot migrate <slot_from> <slot_to> <group_id> [--delay=<delay_time_in_ms>]
    codis-config slot rebalance [--delay=<delay_time_in_ms>]

Codis uses pre-sharding to partition data, dividing it into 1024 slots (0-1023) by default. For each key, the slot is determined by the formula SlotId = CRC32(key) % 1024. Each slot is assigned a server group id indicating which group serves that slot's data. Here we split the 1024 slots into three ranges, assigned as follows:

/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 0 340 1 online
/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 341 681 2 online
/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 682 1023 3 online
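The slot mapping above can be reproduced in a few lines. This sketch (my illustration, not from the article) uses CRC32 from Python's zlib to compute a key's slot and then looks up the owning group according to the three ranges assigned above.

```python
import zlib

NUM_SLOTS = 1024

def slot_id(key: bytes) -> int:
    # SlotId = CRC32(key) % 1024, as described in the tutorial
    return zlib.crc32(key) % NUM_SLOTS

# Slot ranges assigned above: group 1 -> 0-340, group 2 -> 341-681, group 3 -> 682-1023
RANGES = [(0, 340, 1), (341, 681, 2), (682, 1023, 3)]

def group_for(key: bytes) -> int:
    """Return the server group id that serves this key's slot."""
    s = slot_id(key)
    for lo, hi, group in RANGES:
        if lo <= s <= hi:
            return group
    raise AssertionError("unreachable: slot out of range")
```

This is exactly the lookup codis-proxy performs (against the routing table stored in ZooKeeper) before forwarding each command to a backend group.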
(5) Start Codis-proxy (on 3 servers)

# codis_proxy log directory
mkdir -p /data/codis/codis_proxy/logs

CODIS_HOME=/usr/local/codis
LOG_PATH=/data/codis/codis_proxy/logs
nohup $CODIS_HOME/bin/codis-proxy --log-level warn -c $CODIS_HOME/conf/config.ini -L $LOG_PATH/codis_proxy_1.log --cpu=8 --addr=0.0.0.0:19000 --http-addr=0.0.0.0:11000 > $LOG_PATH/nohup.out 2>&1 &

Note: Codis registers each proxy under the server's hostname, so if clients access the proxy by hostname, the corresponding hosts entries must be added on the client side.

(6) Bring Codis-proxy online

# Mark each proxy online; codis-config commands are run on 124
# (alternatively, use the set_proxy_online.sh script from the scripts directory)
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini proxy online codis_proxy_1
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini proxy online codis_proxy_2
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini proxy online codis_proxy_3

Note: after codis-proxy starts, it is in the offline state and cannot serve requests; it must be marked online before it can provide service.


At this point, Codis is able to provide service externally.


III. HA

The HA of Codis divides into HA for the front-end proxies and HA for the back-end codis-server; of the two, proxy HA is relatively simple.

For the upper-layer proxies, especially for Java clients, Codis provides Jodis (a modified Jedis) to implement proxy HA. It watches the registration information on ZK in real time to obtain the currently available proxy list, which both ensures high availability and allows requests to be spread across all proxies in turn for load balancing; it supports automatic proxy on-lining and off-lining.
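Jodis itself is a Java library, but its round-robin selection over the live proxy list is easy to picture. The sketch below (an illustration, not Jodis's actual code) cycles through a static list standing in for the proxies registered in ZooKeeper; real Jodis refreshes this list via a ZK watcher whenever a proxy goes on or off line.

```python
import itertools

# Stand-in for the proxy list Jodis would read from ZooKeeper
# (the three codis-proxy addresses started earlier in this tutorial).
PROXIES = ["10.80.80.124:19000", "10.80.80.126:19000", "10.80.80.123:19000"]

_rr = itertools.cycle(PROXIES)

def next_proxy() -> str:
    """Return the next proxy in round-robin order, spreading load across all of them."""
    return next(_rr)
```

Each client request connects to `next_proxy()`; since every proxy is stateless and serves the whole keyspace, any of them can handle any command.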



Copyright notice: this article is the blogger's original work and may not be reproduced without the blogger's permission.
