Codis is a distributed Redis solution



Codis source address: https://github.com/wandoulabs/codis

For Codis components, refer to: https://github.com/wandoulabs/codis/blob/master/doc/tutorial_zh.md


This article is purely my personal understanding and hands-on experience; if anything is wrong, please point it out.

More importantly, I hope it is of some help to friends who are using, or are about to use, Codis.


The official Codis documentation already covers the overall architecture and features in detail, so I won't repeat that here.


The hosts used here are AWS EC2 instances, so the default user is ec2-user rather than root.


1. Install the basic Go environment (on all nodes).

# sudo yum -y install gcc gcc-c++ make git wget go
# sudo vim /etc/profile.d/go.sh
export GOPATH=/opt/mygo
export PATH=$GOPATH/bin:$JAVA_HOME/bin:$PATH
# source /etc/profile


2. Install Codis (on all nodes except the ZooKeeper node).

# sudo mkdir /opt/mygo
# sudo chown -R ec2-user.ec2-user /opt/mygo/
# go get -u -d github.com/wandoulabs/codis
# cd /opt/mygo/src/github.com/wandoulabs/codis/
# make
# make gotest


3. Install ZooKeeper (on the ZooKeeper node only).

# sudo yum -y install java-1.8.0
# wget https://www.apache.org/dist/zookeeper/zookeeper-3.4.7/zookeeper-3.4.7.tar.gz
# tar -zxf zookeeper-3.4.7.tar.gz -C /opt
# cd /opt/zookeeper-3.4.7
# cp conf/zoo_sample.cfg conf/zoo.cfg
# mkdir -p /data/{zookeeper,logs}
# sudo vim conf/zoo.cfg
dataLogDir=/data/logs
dataDir=/data/zookeeper
server.1=localhost:2888:3888
# vim /data/zookeeper/myid
1
# vim /etc/profile.d/zookeeper.sh
PATH=$PATH:/opt/zookeeper-3.4.7/bin
# source /etc/profile
# sudo /opt/zookeeper-3.4.7/bin/zkServer.sh start conf/zoo.cfg
# netstat -alnut | grep 2181
# nc -v localhost 2181
# zkServer.sh status        # shows the ZooKeeper role (leader|follower|standalone)
# zkCli.sh -server 127.0.0.1:2181
ls /
create /test hello
get /test
set /test hellozookeeper
get /test
delete /test
get /test
quit


4. Start the codis-server service (on the Redis nodes only).

# sudo mkdir /etc/redis
# cd /opt/mygo/src/github.com/wandoulabs/codis
# sudo ./bin/codis-server /etc/redis/redis.conf
# sudo netstat -tnlp | grep codis-se


5. Operate on the dashboard node.

1> Configure the dashboard service
# cd /opt/mygo/src/github.com/wandoulabs/codis/
# mkdir /etc/codis
# cp config.ini /etc/codis/codis-config.ini
# vim /etc/codis/codis-config.ini
zk=172.31.16.33:2181
product=cn_release_codis
dashboard_addr=localhost:18087
proxy_id=proxy_1
proto=tcp4

2> Start the dashboard service
# cd /opt/mygo/src/github.com/wandoulabs/codis/
# ./bin/codis-config -c /etc/codis/codis-config.ini dashboard

3> Initialize the slots (this creates the slot-related information on ZooKeeper)
# cd /opt/mygo/src/github.com/wandoulabs/codis/
# ./bin/codis-config -c /etc/codis/codis-config.ini slot init

4> Force-format the slots
# ./bin/codis-config -c /etc/codis/codis-config.ini slot init


6. Add Redis instances to Codis groups

> Add the first Codis group

# ./bin/codis-config -c /etc/codis/codis-config.ini server add 1 172.31.51.119:6379 master
# ./bin/codis-config -c /etc/codis/codis-config.ini server add 1 172.31.51.125:6379 slave

> Add the second Codis group

# ./bin/codis-config -c /etc/codis/codis-config.ini server add 2 172.31.51.126:6379 master
# ./bin/codis-config -c /etc/codis/codis-config.ini server add 2 172.31.51.124:6379 slave

> Bring the slots online

# ./bin/codis-config -c /etc/codis/codis-config.ini slot range-set 0 511 1 online
# ./bin/codis-config -c /etc/codis/codis-config.ini slot range-set 512 1023 2 online
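Codis shards keys across 1024 fixed slots; per the Codis tutorial, a key maps to slot crc32(key) % 1024, and the range-set commands above decide which group serves each slot. A minimal Python sketch of that mapping (the two-group layout mirrors the commands above; this is an illustration, not Codis's actual code):

```python
import zlib

NUM_SLOTS = 1024

# Slot ownership as configured with `slot range-set` above:
# group 1 serves slots 0-511, group 2 serves slots 512-1023.
SLOT_RANGES = [(0, 511, 1), (512, 1023, 2)]

def slot_of(key: bytes) -> int:
    """Codis maps a key to one of 1024 slots via crc32(key) % 1024."""
    return zlib.crc32(key) % NUM_SLOTS

def group_of(key: bytes) -> int:
    """Return the group that serves this key's slot."""
    s = slot_of(key)
    for lo, hi, group in SLOT_RANGES:
        if lo <= s <= hi:
            return group
    raise ValueError("slot %d is not assigned to any group" % s)

print(slot_of(b"user:1001"), group_of(b"user:1001"))
```

This is why migrating a slot range (as in the expansion step below) moves a deterministic subset of keys to the new group.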

> Scale out: add a new shard online

# ./bin/codis-config -c codis-config.ini server add 3 192.168.10.131:6381 master
# ./bin/codis-config -c codis-config.ini server add 3 192.168.10.132:6381 slave
# ./bin/codis-config -c codis-config.ini slot migrate 256 511 3


7. Start the codis-proxy service.

In this example, two codis-proxy instances are brought online.

# cd /opt/mygo/src/github.com/wandoulabs/codis/
# mkdir /etc/codis
# cp config.ini /etc/codis/codis-proxy.ini
# vim /etc/codis/codis-proxy.ini
zk=172.31.51.123:2181
product=cn_release_codis
dashboard_addr=172.31.51.120:18087
proxy_id=proxy_1
proto=tcp4
# ./bin/codis-proxy -c /etc/codis/codis-proxy.ini -L /var/log/codis_proxy.log --cpu=1 --addr=172.31.51.122:19000 --http-addr=172.31.51.122:11000

# cd /opt/mygo/src/github.com/wandoulabs/codis/
# mkdir /etc/codis
# cp config.ini /etc/codis/codis-proxy.ini
# vim /etc/codis/codis-proxy.ini
zk=172.31.51.123:2181
product=cn_release_codis
dashboard_addr=172.31.51.120:18087
proxy_id=proxy_2
proto=tcp4
# ./bin/codis-proxy -c /etc/codis/codis-proxy.ini -L /var/log/codis_proxy.log --cpu=1 --addr=172.31.51.121:19000 --http-addr=172.31.51.121:11000


8. Dashboard Monitoring Page

http://<dashboard_ip>:18087/admin/

[Figure: dashboard-01.png]


9. Procedure for removing a shard (group)

--- Assume shard 3 is to be removed ---
1. Set the codis-proxy to the offline state
# ./bin/codis-config -c codis-config.ini proxy offline proxy_1
2. Migrate the data on shard 3 to shard 1
# ./bin/codis-config -c codis-config.ini slot migrate 256 511 1
3. Completely remove shard 3
# ./bin/codis-config -c codis-config.ini server remove-group 3


10. codis-server HA (codis-ha)

# export GOPATH=/opt/mygo
# go get github.com/ngaut/codis-ha
# cp /opt/mygo/bin/codis-ha /opt/mygo/src/github.com/wandoulabs/codis/bin/
# cd /opt/mygo/src/github.com/wandoulabs/codis/
# ./bin/codis-ha --codis-config="localhost:18087" --log-level="info" --productName="cn_release_codis"


Below are the problems I encountered and their solutions; I hope this part is useful to you as well.

(1)

2015/12/11 16:49:10 dashboard.go:160: [INFO] dashboard listening on addr: :18087
2015/12/11 16:49:10 dashboard.go:234: [PANIC] create zk node failed
[error]: dashboard already exists: {"addr": "172.31.16.30:18087", "pid": 7762}

Workaround:

This happens when the dashboard service is terminated with kill -9: it exits abnormally without clearing its registration information from ZooKeeper.

So never use kill -9 to stop any service in a Codis cluster; use plain kill instead.

With plain kill, the service clears its information from ZooKeeper as it terminates and registers itself again on the next start.


The temporary workaround is to delete the stale node in zkCli.sh:

# rmr /zk/codis/db_codis_proxy_test/dashboard


(2)

API endpoints provided by the dashboard:

http://debugAddr/setloglevel?level=debug    # sets the log level
http://debugAddr/debug/vars                 # mainly used to fetch OPS information

Opening a proxy's debug_addr at the /debug/vars path in a browser shows that proxy's QPS information.
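/debug/vars returns Go expvar-style JSON; the exact field names vary by version, so the "ops" key below is an assumption for illustration. A small Python sketch that reads the counter, shown against a sample payload:

```python
import json
from urllib.request import urlopen

def parse_ops(payload: str) -> int:
    """Pull the 'ops' counter out of a /debug/vars JSON payload.

    The field name "ops" is assumed for illustration; check your
    proxy's actual output.
    """
    return int(json.loads(payload).get("ops", 0))

def fetch_ops(http_addr: str) -> int:
    """Query a live proxy, e.g. fetch_ops("172.31.51.122:11000")."""
    with urlopen("http://%s/debug/vars" % http_addr) as resp:
        return parse_ops(resp.read().decode())

# Sample payload standing in for a live proxy response:
sample = '{"ops": 12345, "cmdstats": {}}'
print(parse_ops(sample))
```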


(3)

Explanation of messages that show up in the codis-proxy service log.

quit: the client sent an explicit quit command.

EOF: the connection was closed outright, i.e. the proxy hit EOF while reading from the client's TCP connection.


Codis logs every client connection that it actively closes. In general the causes are: an illegal operation; the request being stuck on the underlying Redis; or the session sitting idle so long that it triggers the proxy's cleanup logic.

The third cause is the most likely; if the timestamps fall after 6 o'clock, could that simply be your low-traffic period?


session_max_timeout=1800

If a session performs no operations for 30 minutes, Codis closes the connection on its own initiative.

The background is that some users reported clients closing connections without the proxy noticing, so dead connections piled up on the proxy side and eventually exhausted its resources.


(4)

NaN GB

This is shown because the maxmemory parameter is not set in the Redis configuration file.
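A minimal redis.conf fragment that fixes the display (the 4gb value is only an example; size it for your instances):

```conf
# /etc/redis/redis.conf
maxmemory 4gb
# Optional: eviction policy once the limit is reached (Redis defaults to noeviction)
maxmemory-policy allkeys-lru
```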



(5)

All reads and writes in Codis go to the Redis master; the slave is responsible for data redundancy, and a slave can be switched to master when the master goes down.


(6)

In a Codis cluster, product identifies the cluster: the dashboard and all codis-proxy instances of the same cluster must use the same product value. Otherwise you will hit the following error:

zk: node does not exist

The proxy_id in each codis-proxy configuration file distinguishes members within the same cluster, so it must be unique.


(7)

codis-ha is only responsible for automatically promoting one slave to master when the master dies; it neither re-attaches the remaining slaves to the new master nor guarantees that the promoted slave is the best candidate.


(8)

Too Many open files

This problem appeared while stress-testing Redis with multithreaded Python once concurrency went above 4000; raising the process's open-file limit (ulimit -n) mitigates it.

Two codis-proxy instances handle 20,000-30,000 concurrent connections without much trouble.


(9)

Even with the dashboard service stopped, applications can still access the Redis service normally through codis-proxy. The codis-ha service, however, is affected: automatic master switchover stops working.

In other words, while the dashboard is down the app can still reach Redis, but codis-ha stops running.


(10)

Redis master-slave replication works within a group, but cannot span different groups.

If all masters and slaves in a group die, that data is lost. Querying a key that lives in the dead group returns an error, and the key's slot remains occupied.

codis-proxy will not send any writes to the dead group.


(11)

Within the codis-server instances of one group, do multiple slaves share the master's read load?

No. Codis's design philosophy favors consistency, and Redis master-slave synchronization is not strongly consistent, so Codis does not support read/write separation.


(12)

Only one dashboard service per cluster can be in the running state; several instances may exist, but only one of them may be running at any time.



If you use Codis, you will sooner or later run into the question of login authentication for the dashboard. I put user authentication in front of it with Nginx, configured as follows.

[Figure: nginx.png]

Setting up this login authentication took me 2-3 hours, not because it is complicated, but because the dashboard fetches most of its data through the API: if you configure too few rewrite/redirect rules, only the page renders and no data is loaded. Keep that in mind.
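The original screenshot is not recoverable here, so this is only a sketch of the kind of Nginx configuration meant (hostname, port, and paths are assumptions): basic auth in front of the dashboard, with every path, including the API endpoints the pages call, proxied through.

```conf
server {
    listen 80;
    server_name codis-dashboard.example.com;   # assumed hostname

    auth_basic           "Codis Dashboard";
    auth_basic_user_file /etc/nginx/htpasswd;  # created with htpasswd

    # Proxy every path: the admin pages AND the API endpoints they fetch,
    # otherwise the page renders but shows no data.
    location / {
        proxy_pass http://127.0.0.1:18087;
        proxy_set_header Host $host;
    }
}
```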


Topics the next Codis article will cover:

    1. A service-style startup script for each role in the Codis cluster is highly recommended; I have already written these, but they still need some tuning.

    2. Monitoring of the dashboard service; I will also explain this in the next article.

    3. Without a good authentication mechanism, it is advisable to keep the dashboard service shut down and instead build a visual interface that can view the cluster but has no permission to operate on it.




This article is from the "Zheng" blog, make sure to keep this source http://467754239.blog.51cto.com/4878013/1728423

