Redis Distributed Solution: Codis



Codis is an open-source distributed Redis solution from Wandoujia (wandoulabs) and is currently in a stable stage.

Address: https://github.com/wandoulabs/codis/blob/master/doc/tutorial_zh.md

Codis is a distributed Redis solution. For upper-layer applications, there is no significant difference between connecting to a Codis Proxy and connecting to a native Redis server (apart from a list of unsupported commands), so upper-layer applications can use it like single-host Redis. Codis handles request forwarding and migrates data without stopping service, transparently to the client: you can simply think of the backend as a single Redis service with infinite memory.
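Concretely, a client talks to codis-proxy exactly as it would to a stand-alone Redis. The sketch below assumes the proxy address 10.80.80.124:19000 used later in this deployment; any ordinary Redis client works:

```shell
# set and get a key through the proxy with the stock redis-cli;
# the proxy routes each key to the right backend group by its slot
redis-cli -h 10.80.80.124 -p 19000 set user:1001 "alice"
redis-cli -h 10.80.80.124 -p 19000 get user:1001
```

Note that cross-slot multi-key commands are among the unsupported commands linked above.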

The basic framework is as follows:

Codis consists of four parts:

Codis Proxy (codis-proxy)
Codis Manager (codis-config)
Codis Redis (codis-server)
ZooKeeper

Codis-proxy is the Redis proxy service that clients connect to. codis-proxy implements the Redis protocol and behaves like a native Redis (similar to Twemproxy). Multiple codis-proxy instances can be deployed for one business; codis-proxy itself is stateless.

Codis-config is a Codis management tool that supports operations such as adding/deleting Redis nodes, adding/deleting Proxy nodes, and initiating data migration. codis-config also comes with an http server that starts a dashboard. You can view the running status of the Codis cluster in a browser.

Codis-server is a Redis Branch maintained by the Codis project. It is developed based on 2.8.13 and has added slot support and atomic data migration commands. codis-proxy and codis-config on the top of codis can only interact with Redis of this version to run normally.

ZooKeeper (ZK for short) is a distributed coordination service framework that provides strong data consistency between nodes. A simple way to understand it: after one node modifies the value of a variable, the change becomes visible on the other nodes, and the change is transactional. By registering a watcher on a ZK node, you are notified of data changes.

Codis relies on ZooKeeper to store the data route table and the metadata of codis-proxy nodes. Commands initiated by codis-config are synchronized to each live codis-proxy through ZooKeeper.
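You can inspect this metadata with the zkCli.sh client that ships with ZooKeeper. The paths below are an assumption based on the default layout of this Codis generation (a /zk/codis/db_&lt;product&gt; subtree; product is codis in our config), so treat them as a sketch:

```shell
# connect to any node of the ZK ensemble built in the next section
/usr/local/zookeeper-3.4.6/bin/zkCli.sh -server codis1:2181
# inside the zkCli shell (paths assume product = codis):
#   ls /zk/codis/db_codis          # slots, servers, proxy, dashboard, ...
#   ls /zk/codis/db_codis/proxy    # one child per live codis-proxy
```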

Note: 1. Newer versions of Codis support Redis up to 2.8.21.

2. codis-group is used for horizontal scaling of Redis.

Next we will deploy the environment:

10.80.80.124  zookeeper_1  codis-config  codis-server (master, slave)  codis_proxy_1

10.80.80.126  zookeeper_2  codis-server (master, slave)  codis_proxy_2

10.80.80.123  zookeeper_3  codis-server (master, slave)  codis_proxy_3

Note:

1. To ensure the stability and reliability of zookeeper, we set up a zookeeper cluster on 124, 126, and 123 to provide external services;

2. codis-config, as the management tool of the distributed Redis cluster, only needs to run on one server to complete management tasks;

3. codis-server and codis-proxy provide redis and proxy services on three servers.


1. Deploy the zookeeper Cluster

1. Configure hosts (on three servers)

10.80.80.124 codis1
10.80.80.126 codis2
10.80.80.123 codis3

2. Configure the java environment (on three servers)

vim /etc/profile
##JAVA##
export JAVA_HOME=/usr/local/jdk1.7.0_71
export JRE_HOME=/usr/local/jdk1.7.0_71/jre
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

source /etc/profile
3. Install zookeeper (on 3 servers)

cd /usr/local/src
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
tar -zxvf zookeeper-3.4.6.tar.gz -C /usr/local
4. Configure environment variables (on three servers)

vim /etc/profile
#zookeeper
ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin

source /etc/profile
5. Modify the zookeeper configuration file (on three servers)

# Create the zookeeper data directory and log directory
mkdir -p /data/zookeeper/zk1/{data,log}
cd /usr/local/zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/zk1/data
dataLogDir=/data/zookeeper/zk1/log
clientPort=2181
server.1=codis1:2888:3888
server.2=codis2:2888:3888
server.3=codis3:2888:3888
6. Create a myid file under dataDir, corresponding node id (on 3 servers)

# on 124
cd /data/zookeeper/zk1/data
echo 1 > myid
# on 126
cd /data/zookeeper/zk1/data
echo 2 > myid
# on 123
cd /data/zookeeper/zk1/data
echo 3 > myid
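The three echo commands differ only in the node id, so the per-host choice can be scripted. Below is a small sketch; myid_for_host is a hypothetical helper (not part of ZooKeeper), and the demo writes to /tmp so it can be tried anywhere — in production you would write to /data/zookeeper/zk1/data on each host:

```shell
# map each codis host to its zookeeper node id (hypothetical helper)
myid_for_host() {
  case "$1" in
    codis1) echo 1 ;;
    codis2) echo 2 ;;
    codis3) echo 3 ;;
    *) return 1 ;;
  esac
}

# demo directory; in production: /data/zookeeper/zk1/data
data_dir=${ZK_DATA_DIR:-/tmp/zk1/data}
mkdir -p "$data_dir"
myid_for_host codis2 > "$data_dir/myid"
cat "$data_dir/myid"   # prints 2
```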
7. Start the zookeeper service (on three servers)

/usr/local/zookeeper-3.4.6/bin/zkServer.sh start
Note: a zookeeper.out log file is generated under the directory where you ran the command; it records the details of the startup process. Until all cluster nodes have been started, errors such as "myid 2 not started" or "myid 3 not started" are reported; once the whole cluster is up, these stop and can be ignored.

8. View the status of all zookeeper nodes (on 3 servers)

# 124
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
# 126
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# 123
/usr/local/zookeeper-3.4.6/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower


2. Deploy the codis Cluster

1. Install go language (on 3 servers)

tar -zxvf go1.4.2.linux-amd64.tar.gz -C /usr/local/
2. Add the go environment variable (on three servers)

vim /etc/profile
#go
export PATH=$PATH:/usr/local/go/bin
export GOPATH=/usr/local/codis

source /etc/profile
3. Install codis (on 3 servers)

go get github.com/wandoulabs/codis
cd $GOPATH/src/github.com/wandoulabs/codis
# execute the compilation and test script, which builds the Go sources and Redis
./bootstrap.sh
make gotest
# after compilation, copy the bin directory and some scripts to /usr/local/codis
mkdir -p /usr/local/codis/{conf,redis_conf,scripts}
cp -rf bin /usr/local/codis/
cp sample/config.ini /usr/local/codis/conf/
cp -rf sample/redis_conf /usr/local/codis
cp -rf sample/* /usr/local/codis/scripts
4. Configure codis-proxy (on three servers, take 124 as an example)

# on 124
cd /usr/local/codis/conf
vim config.ini
zk=codis1:2181,codis2:2181,codis3:2181
product=codis
# the dashboard address; only one dashboard is needed per codis cluster,
# so all three servers point to 10.80.80.124:18087
dashboard_addr=10.80.80.124:18087
coordinator=zookeeper
backend_ping_period=5
session_max_timeout=1800
session_max_bufsize=131072
session_max_pipeline=128
proxy_id=codis_proxy_1

The configuration on 126 and 123 is identical except for the last line: use proxy_id=codis_proxy_2 on 126 and proxy_id=codis_proxy_3 on 123.
5. modify the configuration file of codis-server (on three servers)

# Create the data directory and log directory
mkdir -p /data/codis_server/{data,logs}
cd /usr/local/codis/redis_conf
# vim 6380.conf, the master
daemonize yes
pidfile /var/run/redis_6380.pid
port 6380
logfile "/data/codis_server/logs/codis_6380.log"
save 900 1
save 300 10
save 60 10000
dbfilename 6380.rdb
dir /data/codis_server/data
# the slave
cp 6380.conf 6381.conf
sed -i 's/6380/6381/g' 6381.conf
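The cp + sed pattern above is worth noting: it clones the master config and rewrites every occurrence of the port, so port, pidfile, logfile, and dbfilename all stay consistent in the slave config. A self-contained demo against throwaway files in /tmp (demo paths only, not the production ones):

```shell
demo=/tmp/codis_redis_conf_demo
mkdir -p "$demo" && cd "$demo"

# a minimal stand-in for the 6380.conf written above
printf 'port 6380\npidfile /var/run/redis_6380.pid\ndbfilename 6380.rdb\n' > 6380.conf

# clone it for the slave and rewrite every 6380 to 6381
cp 6380.conf 6381.conf
sed -i 's/6380/6381/g' 6381.conf
grep '^port' 6381.conf   # prints: port 6381
```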
6. Add Kernel Parameters

echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p
7. Start according to the Startup Process

cat /usr/local/codis/scripts/usage.md
0. start zookeeper
1. change config items in config.ini
2. ./start_dashboard.sh
3. ./start_redis.sh
4. ./add_group.sh
5. ./initslot.sh
6. ./start_proxy.sh
7. ./set_proxy_online.sh
8. open browser to http://localhost:18087/admin
Each step has a corresponding startup script under the scripts directory, and you can also start everything with startall.sh, but we recommend starting manually at first to become familiar with the codis startup process.

Since zookeeper has been started before, let's start other projects.

Note: 1. You need to specify the log directory or configuration file directory during startup. For unified management, we put them under /data/codis;

2. The dashboard can be started on only one server in the codis cluster; we start it on 124. All codis-config commands are run on 124; the other startup steps must be performed on all three servers.

The related commands are as follows:

/usr/local/codis/bin/codis-config -h
usage: codis-config  [-c <config_file>] [-L <log_file>] [--log-level=<loglevel>]
        <command> [<args>...]
options:
   -c                       set config file
   -L                       set output log file, default is stdout
   --log-level=<loglevel>   set log level: info, warn, error, debug [default: info]
commands:
    server
    slot
    dashboard
    action
    proxy


(1) Start dashboard (started on 124)

# Dashboard log directory
mkdir -p /data/codis/codis_dashboard/logs
codis_home=/usr/local/codis
log_path=/data/codis/codis_dashboard/logs
nohup $codis_home/bin/codis-config -c $codis_home/conf/config.ini -L $log_path/dashboard.log dashboard --addr=:18087 --http-log=$log_path/requests.log &>/dev/null &

Access the graphical management interface at http://10.80.80.124:18087


(2) Start codis-server (on 3 servers)

/usr/local/codis/bin/codis-server /data/codis_server/conf/6380.conf
/usr/local/codis/bin/codis-server /data/codis_server/conf/6381.conf
(3) Add a Redis Server Group (on 124)

Note: each Server Group is a Redis server group with exactly one master and any number of slaves. The group id must be an integer greater than or equal to 1.

Currently, we have divided three groups on three servers. Therefore, we need to add three groups, each consisting of one master and one slave.

# Related commands
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server
usage: codis-config server list
       codis-config server add <group_id> <redis_addr> <role>
       codis-config server remove <group_id> <redis_addr>
       codis-config server promote <group_id> <redis_addr>
       codis-config server add-group <group_id>
       codis-config server remove-group <group_id>
# group 1
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 1 10.80.80.124:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 1 10.80.80.124:6381 slave
# group 2
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 2 10.80.80.126:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 2 10.80.80.126:6381 slave
# group 3
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 3 10.80.80.123:6380 master
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add 3 10.80.80.123:6381 slave

Note: 1. Click "Promote to Master" in the dashboard to promote a slave to master; the original master automatically goes offline.

2. /usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini server add can add machines to the corresponding group or update the master/slave role of a Redis instance.

3. If it is a new machine, the keys here should be empty

(4) Set the slot range served by each server group (on 124)

# Related commands
/usr/local/codis/bin/codis-config -c /usr/local/codis/conf/config.ini slot
usage: codis-config slot init [-f]
       codis-config slot info <slot_id>
       codis-config slot set <slot_id> <group_id> <status>
       codis-config slot range-set <slot_from> <slot_to> <group_id> <status>
       codis-config slot migrate <slot_from> <slot_to> <group_id> [--delay=<delay_time_in_ms>]
       codis-config slot rebalance [--delay=<delay_time_in_ms>]

Codis uses pre-sharding to implement data sharding. By default, the key space is divided into 1024 slots (0-1023). For each key, the slot is determined by the formula SlotId = crc32(key) % 1024. Each slot is assigned a specific server group id to indicate which server group serves that slot's data. Here we divide the 1024 slots into three ranges and allocate them as follows:

/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 0 340 1 online
/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 341 681 2 online
/usr/local/codis/bin/codis-config -c conf/config.ini slot range-set 682 1023 3 online
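With these ranges online, every slot id maps to exactly one group. The helper below is a hypothetical sketch (not a codis tool) that mirrors the three range-set commands above, handy for checking which group should serve a given slot:

```shell
# return the group id serving a slot under the 0-340 / 341-681 / 682-1023 split
slot_to_group() {
  if [ "$1" -le 340 ]; then echo 1
  elif [ "$1" -le 681 ]; then echo 2
  else echo 3
  fi
}

slot_to_group 0      # prints 1
slot_to_group 341    # prints 2
slot_to_group 1023   # prints 3
```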
(5) Start codis-proxy (on 3 servers)

# Proxy log directory
mkdir -p /data/codis/codis_proxy/logs
codis_home=/usr/local/codis
log_path=/data/codis/codis_proxy/logs
nohup $codis_home/bin/codis-proxy --log-level warn -c $codis_home/conf/config.ini -L $log_path/codis_proxy_1.log --cpu=8 --addr=0.0.0.0:19000 --http-addr=0.0.0.0:11000 > $log_path/nohup.out 2>&1 &

Note: codis reads the host name of the server.

Note: If the client accesses the proxy, you must add the hosts on the client.

(6) Publish codis-proxy

# mark the proxy online so it can serve requests
# (this is what scripts/set_proxy_online.sh does; repeat for each proxy_id)
codis_home=/usr/local/codis
$codis_home/bin/codis-config -c $codis_home/conf/config.ini proxy online codis_proxy_1

Note: after codis-proxy is started, it is in the offline status and cannot serve requests. You must publish (mark online) the proxy before it provides external services.


OK. Now codis can provide external services.


3. HA

The HA of Codis is divided into HA of the front-end proxies and HA of the backend codis-server.

For the upper-layer proxies, especially for Java clients, Codis provides Jodis (a modified Jedis) to implement proxy HA. It watches the registration information on ZK to obtain the list of currently available proxies in real time, ensuring high availability, and achieves load balancing by sending requests to all proxies in turn; proxies are brought online and removed automatically.



Copyright Disclaimer: This article is an original article by the blogger and cannot be reproduced without the permission of the blogger.
