Codis: a distributed solution for Redis


Codis is an open-source distributed Redis solution from Wandoujia (wandoulabs), and is now in a stable phase.

Original address: https://github.com/wandoulabs/codis/blob/master/doc/tutorial_zh.md

Codis is a distributed Redis solution. To an upper-layer application, connecting to a Codis proxy is essentially no different from connecting to a native Redis server (apart from a list of unsupported commands), so Codis can be used like a single Redis instance. The lower layers handle request forwarding, non-stop data migration, and so on. All of this happens behind the scenes and is transparent to the client, which can simply treat the connection as a Redis service with effectively unlimited memory.

The basic architecture is as follows:

Codis consists of four components:

Codis Proxy (Codis-proxy)
Codis Manager (Codis-config)
Codis Redis (Codis-server)
ZooKeeper

Codis-proxy is the Redis proxy service that clients connect to. Codis-proxy itself implements the Redis protocol, so it behaves like a native Redis (similar to Twemproxy). Multiple codis-proxy instances can be deployed for one business; codis-proxy itself is stateless.

Codis-config is the Codis management tool. It supports operations such as adding/removing Redis nodes, adding/removing proxy nodes, and initiating data migration. Codis-config also comes with an HTTP server that serves a dashboard, letting the user observe the cluster's running state directly in a browser.

Codis-server is a Redis fork maintained by the Codis project, based on Redis 2.8.13, which adds support for slots and atomic data-migration commands. The codis-proxy and codis-config components above it can only work correctly with this version of Redis.

ZooKeeper (hereinafter referred to as ZK) is a distributed coordination service framework that can achieve strong data consistency between nodes. Put simply, after one node changes the value of a variable, other nodes can read the latest change; such a change is transactional.

By registering watchers on ZK nodes, clients can be notified of data changes.

Codis relies on ZooKeeper to store the data routing table and the metadata of codis-proxy nodes; commands issued by codis-config are synchronized through ZooKeeper to every live codis-proxy.

Note:

1. Newer Codis versions support Redis up to 2.8.21.

2. codis-group enables horizontal scaling of Redis.

The deployment environment:

10.80.80.124    zookeeper_1    codis-config    codis-server (master, slave)    codis_proxy_1

10.80.80.126    zookeeper_2    codis-server (master, slave)    codis_proxy_2

10.80.80.123    zookeeper_3    codis-server (master, slave)    codis_proxy_3

Description

1. To ensure stability and reliability, we build a ZooKeeper cluster across 124, 126, and 123 to provide the coordination service;

2. codis-config is the management tool for the distributed Redis setup; a single instance is enough to carry out all management tasks for the whole cluster.

3. codis-server and codis-proxy provide the Redis and proxy services on all 3 servers.

I. Deploying the ZooKeeper cluster

1. Configure the hosts (on 3 servers)

10.80.80.124 Codis1
10.80.80.126 Codis2
10.80.80.123 Codis3
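The same three entries have to land on every server; a minimal sketch of scripting that step with a heredoc (HOSTS_FILE is an illustrative variable so a dry run can target a scratch file instead of /etc/hosts):

```shell
# Append the Codis host mappings. On a real server, point HOSTS_FILE
# at /etc/hosts (requires root); the scratch default is for a dry run.
HOSTS_FILE=${HOSTS_FILE:-/tmp/hosts.codis}
cat >> "$HOSTS_FILE" <<'EOF'
10.80.80.124 Codis1
10.80.80.126 Codis2
10.80.80.123 Codis3
EOF
grep 'Codis' "$HOSTS_FILE"
```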

2. Configuring the Java Environment (on 3 servers)

vim /etc/profile

###JAVA###
export JAVA_HOME=/usr/local/jdk1.7.0_71
export JRE_HOME=/usr/local/jdk1.7.0_71/jre
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

source /etc/profile

3. Install the Zookeeper (on 3 servers)

cd /usr/local/src
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local

4. Configure environment variables (on 3 servers)

vim /etc/profile

#zookeeper
ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

source /etc/profile

5. Change the Zookeeper configuration file (on 3 servers)

# Create the ZooKeeper data and log directories
mkdir -p /data/zookeeper/zk1/{data,log}
cd /usr/local/zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/zk1/data
dataLogDir=/data/zookeeper/zk1/log
clientPort=2181
server.1=Codis1:2888:3888
server.2=Codis2:2888:3888
server.3=Codis3:2888:3888

6. Create the myid file under dataDir with the corresponding node id (on 3 servers)

# On 124
cd /data/zookeeper/zk1/data
echo 1 > myid
# On 126
cd /data/zookeeper/zk1/data
echo 2 > myid
# On 123
cd /data/zookeeper/zk1/data
echo 3 > myid
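The per-server echo commands above can also be derived from the hostname, since the hosts step named the nodes Codis1–Codis3; a minimal sketch (ZK_DATA is an illustrative variable so a dry run can use a scratch directory):

```shell
# Write this node's myid based on its hostname (Codis1 -> 1, etc.).
# On a real node, ZK_DATA would be /data/zookeeper/zk1/data.
ZK_DATA=${ZK_DATA:-/tmp/zk1/data}
mkdir -p "$ZK_DATA"
case "$(hostname)" in
  Codis1) id=1 ;;
  Codis2) id=2 ;;
  Codis3) id=3 ;;
  *)      id=1 ;;   # fallback so a dry run on any machine still works
esac
echo "$id" > "$ZK_DATA/myid"
cat "$ZK_DATA/myid"
```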

7. Start the Zookeeper service (on 3 servers)

/usr/local/zookeeper-3.4.6/bin/zkServer.sh start

Note: a zookeeper.out log file is generated in the current directory, recording the details of the startup process. Until all cluster members are up, messages such as "myid 2 not started" or "myid 3 not started" will be reported; once the whole cluster has started, these messages stop and can be ignored.

8. View the status of all zookeeper nodes (on 3 servers)

# 124
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader

# 126
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

# 123
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower

II. Deploying the Codis cluster

1. Install the Go language (on 3 servers)

tar -zxvf go1.4.2.linux-amd64.tar.gz -C /usr/local/

2. Add the Go environment variable (on 3 servers)

vim /etc/profile

#go
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/usr/local/codis

source /etc/profile

3. Install Codis (on 3 servers)

go get github.com/wandoulabs/codis
cd $GOPATH/src/github.com/wandoulabs/codis

# Run the build/test script, which compiles the Go sources and Redis
./bootstrap.sh
make gotest

# After building, copy the bin directory and some scripts to /usr/local/codis
mkdir -p /usr/local/codis/{conf,redis_conf,scripts}
cp -rf bin /usr/local/codis/
cp sample/config.ini /usr/local/codis/conf/
cp -rf sample/redis_conf /usr/local/codis
cp -rf sample/* /usr/local/codis/scripts

4. Configure codis-proxy (on 3 servers; 124 is shown as the example)

# 124
cd /usr/local/codis/conf
vim config.ini

zk=codis1:2181,codis2:2181,codis3:2181
product=codis
# The dashboard address is configured here. Note that a Codis cluster has only
# one dashboard, so all three servers point to 10.80.80.124:18087
dashboard_addr=10.80.80.124:18087
coordinator=zookeeper
backend_ping_period=5
session_max_timeout=1800
session_max_bufsize=131072
session_max_pipeline=128
proxy_id=codis_proxy_1

# 126: identical, except proxy_id=codis_proxy_2
# 123: identical, except proxy_id=codis_proxy_3

5. Edit the codis-server configuration files (on 3 servers)

# Create the codis-server data and log directories
mkdir -p /data/codis_server/{data,logs}
cd /usr/local/codis/redis_conf

# Master
vim 6380.conf

daemonize yes
pidfile /var/run/redis_6380.pid
port 6380
logfile "/data/codis_server/logs/codis_6380.log"
save 900 1
save 300 10
save 60 10000
dbfilename 6380.rdb
dir /data/codis_server/data

# Slave
cp 6380.conf 6381.conf
sed -i 's/6380/6381/g' 6381.conf
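The copy-then-sed derivation of the slave config can be dry-run anywhere; a sketch using a scratch directory and a stub config (the file contents are a shortened, illustrative subset of the master settings above):

```shell
# Derive the slave config (6381) from the master config (6380),
# mirroring the cp + sed step above, in a scratch directory.
workdir=$(mktemp -d)
cat > "$workdir/6380.conf" <<'EOF'
pidfile /var/run/redis_6380.pid
port 6380
logfile "/data/codis_server/logs/codis_6380.log"
dbfilename 6380.rdb
EOF
cp "$workdir/6380.conf" "$workdir/6381.conf"
sed -i 's/6380/6381/g' "$workdir/6381.conf"
grep '^port' "$workdir/6381.conf"
```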

6. Adding kernel parameters

echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p

7. Follow the start-up process

cat /usr/local/codis/scripts/usage.md

0. start zookeeper
1. change config items in config.ini
2. ./start_dashboard.sh
3. ./start_redis.sh
4. ./add_group.sh
5. ./initslot.sh
6. ./start_proxy.sh
7. ./set_proxy_online.sh
8. open browser to http://localhost:18087/admin

Although the scripts directory contains the corresponding startup scripts, and everything can be started at once with startall.sh, starting each component manually is recommended so you become familiar with the Codis startup process.

Since ZooKeeper was started earlier, let's start the remaining components.

Note: 1. During startup, specify the relevant log and configuration directories to keep management uniform; here we put everything under /data/codis;

2. The dashboard only needs to be started on one server in the Codis cluster (here, 124), and all codis-config commands are likewise run on 124; the other components must be started on all 3 servers.

The related commands are as follows:

/usr/local/codis/bin/codis-config -h

usage: codis-config [-c <config_file>] [-L <log_file>] [--log-level=<loglevel>]
       <command> [<args>...]

options:
    -c    set config file
    -L    set output log file, default is stdout
    --log-level=<loglevel>    set log level: info, warn, error, debug [default: info]

commands:
    server
    slot
    dashboard
    action
    proxy



(1) Start the dashboard (on 124)

# Dashboard log directory and access-log directory
mkdir -p /data/codis/codis_dashboard/logs

codis_home=/usr/local/codis
log_path=/data/codis/codis_dashboard/logs
nohup $codis_home/bin/codis-config -c $codis_home/conf/config.ini -L $log_path/dashboard.log dashboard --addr=:18087 --http-log=$log_path/requests.log &>/dev/null &

Access the graphical management interface at http://10.80.80.124:18087

(2) Start Codis-server (on 3 servers)

/usr/local/codis/bin/codis-server /data/codis_server/conf/6380.conf
/usr/local/codis/bin/codis-server /data/codis_server/conf/6381.conf

(3) Add the Redis server groups (on 124)

Note: each server group acts as one Redis replication set, allowing exactly one master and any number of slaves; the group id must be an integer greater than or equal to 1.

We have 3 servers, so we create 3 groups, each consisting of one master and one slave.

# Related commands:
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server

usage:
    codis-config server list
    codis-config server add <group_id> <redis_addr> <role>
    codis-config server remove <group_id> <redis_addr>
    codis-config server promote <group_id> <redis_addr>
    codis-config server add-group <group_id>
    codis-config server remove-group <group_id>

# group 1
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 1 10.80.80.124:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 1 10.80.80.124:6381 slave
# group 2
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 2 10.80.80.126:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 2 10.80.80.126:6381 slave
# group 3
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 3 10.80.80.123:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 3 10.80.80.123:6381 slave

Note: 1. Clicking "Promote to Master" promotes the slave to master, and the original master automatically goes offline.

2. /usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add can add a machine to the corresponding group, and can also update the master/slave role of an existing Redis.

3. If it is a new machine, its key space should be empty.

(4) Set the slot ranges served by each server group (on 124)

# Related commands:
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini slot

usage:
    codis-config slot init [-f]
    codis-config slot info <slot_id>
    codis-config slot set <slot_id> <group_id> <status>
    codis-config slot range-set <slot_from> <slot_to> <group_id> <status>
    codis-config slot migrate <slot_from> <slot_to> <group_id> [--delay=<delay_time_in_ms>]
    codis-config slot rebalance [--delay=<delay_time_in_ms>]

Codis uses pre-sharding to partition data. By default the key space is divided into 1024 slots (0-1023), and for each key the following formula determines its slot id: SlotId = CRC32(key) % 1024. Each slot is assigned a server group id that indicates which server group serves that slot's data.

Here we divide the 1024 slots into three ranges, as follows:

/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 0 340 1 online
/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 341 681 2 online
/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 682 1023 3 online
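To predict which group will serve a given key, the slot formula above can be evaluated locally; a sketch that shells out to python3 for the CRC32 (assuming the standard IEEE CRC32, and using user:1000 as an example key; the group boundaries mirror the range-set commands above):

```shell
# Compute a key's Codis slot: SlotId = CRC32(key) % 1024,
# then map the slot to its serving group per the ranges above.
key="user:1000"   # example key
slot=$(python3 -c 'import sys, zlib; print(zlib.crc32(sys.argv[1].encode()) % 1024)' "$key")
if   [ "$slot" -le 340 ]; then group=1
elif [ "$slot" -le 681 ]; then group=2
else                           group=3
fi
echo "key=$key slot=$slot group=$group"
```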

(5) Start Codis-proxy (on 3 servers)

# codis-proxy log directory
mkdir -p /data/codis/codis_proxy/logs

codis_home=/usr/local/codis
log_path=/data/codis/codis_proxy/logs
nohup $codis_home/bin/codis-proxy --log-level warn -c $codis_home/conf/config.ini -L $log_path/codis_proxy_1.log --cpu=8 --addr=0.0.0.0:19000 --http-addr=0.0.0.0:11000 > $log_path/nohup.out 2>&1 &

# On 126 and 123, use codis_proxy_2.log and codis_proxy_3.log respectively.

Note: Codis reads the server's hostname (the underlined line in the original screenshot).

Note: if clients connect to the proxy by hostname, the corresponding hosts entries must be added on the client as well.

(6) Bring codis-proxy online

# Mark each proxy online (the start command above leaves the proxy offline).
# Run on 124; the proxy names match the proxy_id values set in config.ini.
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini proxy online codis_proxy_1
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini proxy online codis_proxy_2
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini proxy online codis_proxy_3


Note: after codis-proxy starts, it is in the offline state and cannot serve requests; it must be brought online before it can serve external traffic.

At this point, Codis is able to serve external clients.

III. HA

Codis HA divides into HA for the front-end proxies and HA for the back-end codis-server. Of these, proxy HA is the simpler part.

For the upper-layer proxies, especially for Java clients, Codis provides Jodis (a modified Jedis) to implement proxy HA.

By watching proxy information on ZK, Jodis obtains the list of currently available proxies in real time, ensuring high availability. It can also load-balance by sending requests to all proxies in turn, and it supports proxies taking themselves online and offline.

Source: http://www.cnblogs.com/yxwkf/p/5199019.html
