Codis 3.2 Deployment Configuration



I. Codis Introduction

Codis is a distributed Redis solution. To upper-layer applications, connecting to a Codis Proxy is essentially no different from connecting to a native Redis server (apart from a list of unsupported commands): the application can use it just like a single Redis instance. Underneath, Codis handles request forwarding, non-stop data migration, and everything else behind the scenes. All of this is transparent to the client, which can simply treat the backend as a Redis service with unlimited memory.

Unsupported command list:

https://github.com/CodisLabs/codis/blob/release3.2/doc/unsupported_cmds.md

Modifications to Redis:

https://github.com/CodisLabs/codis/blob/release3.2/doc/redis_change_zh.md

Go installation:

https://golang.org/doc/install

II. Codis 3.x

The latest release is codis-3.2; codis-server is based on redis-3.2.8.

Supports synchronous, asynchronous, and concurrent slot migration, with no restriction on key size and significantly improved migration performance.

Compared with 2.0: the communication between cluster components has been redesigned, codis-proxy is decoupled from zookeeper, codis-config has been dropped, and so on.

Metadata storage supports etcd/zookeeper/filesystem and can be extended with new storage backends. While the cluster is running normally, even a failure of the metadata storage no longer affects the Codis cluster, which greatly improves codis-proxy stability.

codis-proxy has been performance-optimized by controlling GC frequency, reducing object creation, pre-allocating memory, and introducing cgo and jemalloc, bringing its throughput and latency to their best levels in the Codis project so far.

The proxy implements the SELECT command and supports multiple DBs.

The proxy supports read/write separation and preferential reads from replicas on the same IP or in the same DC.

Automatic master/slave failover based on redis-sentinel.

Implements dynamic pipeline buffers, reducing memory allocation and the GC pressure it causes.

The proxy exposes runtime metrics in real time over HTTP, making monitoring and operations easier.

Proxy metrics can be collected via InfluxDB and StatsD.

The slot auto-rebalance algorithm changed from the 2.0 policy based on maxmemory to one based on the number of slots per group.

Provides a friendlier dashboard and FE interface, with new buttons, jump links, error states, etc., to help detect and handle cluster failures quickly.

New SLOTSSCAN command, making it easy to retrieve all keys in each slot of the cluster.

codis-proxy and codis-dashboard support Docker deployment.

III. Codis 3.x consists of the following components:

Codis Server: developed from the redis-3.2.8 branch. Adds extra data structures to support slot-related operations as well as data migration commands. See the Redis modifications document above for details.

Codis Proxy: the Redis proxy service that clients connect to; it implements the Redis protocol. Apart from some unsupported commands (see the unsupported command list), it behaves no differently from native Redis (similar to Twemproxy).

For the same business cluster, multiple codis-proxy instances can be deployed at the same time;

state is kept consistent across the different codis-proxy instances by codis-dashboard.

Codis Dashboard: the cluster management tool. It supports adding and removing codis-proxy and codis-server instances, slot migration, and other operations. When the cluster state changes, codis-dashboard keeps the state of all codis-proxy instances in the cluster consistent.

For the same business cluster, there can only be zero or one codis-dashboard at any time;

all modifications to the cluster must go through codis-dashboard.

Codis Admin: the command-line tool for cluster management.

It can be used to control the state of codis-proxy and codis-dashboard and to access the external storage.

Codis FE: the cluster management web interface.

Multiple cluster instances can share the same front-end page;

its configuration file manages the list of back-end codis-dashboard instances and is updated automatically.

Storage: external storage for the cluster state.

Provides a namespace concept: different clusters are organized under different product names;

currently three implementations are available (zookeeper, etcd, fs), but the abstract interface can be extended with your own.

IV. Installation and Deployment

Binary deployment (the official build supports CentOS 7 and requires glibc 2.14 or later)

tar zxvf /root/codis3.2.0-go1.7.5-linux.tar.gz -C /usr/local/
ln -s /usr/local/codis3.2.0-go1.7.5-linux/ /usr/local/codis
/usr/local/codis/redis-cli -v
/usr/local/codis/redis-cli: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /usr/local/codis/redis-cli)

The error above means the system glibc is too old; check which versions are available:

strings /lib64/libc.so.6 | grep GLIBC_
GLIBC_2.2.5
GLIBC_2.2.6
GLIBC_2.3
GLIBC_2.3.2
GLIBC_2.3.3
GLIBC_2.3.4
GLIBC_2.4
GLIBC_2.5
GLIBC_2.6
GLIBC_2.7
GLIBC_2.8
GLIBC_2.9
GLIBC_2.10
GLIBC_2.11
GLIBC_2.12
GLIBC_PRIVATE

glibc 2.14 or later is required; using CentOS 7 directly resolves this issue.
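Before choosing the binary deployment, it can help to check the host's glibc version up front. A minimal sketch (the `glibc_version` helper is ours, and it assumes the usual `ldd --version` output format where the version is the last field of the first line):

```shell
# Print this host's glibc version, e.g. "ldd (GNU libc) 2.17" -> "2.17".
glibc_version() { ldd --version | head -n1 | awk '{print $NF}'; }

glibc_version
```

If the printed version is below 2.14, use the source deployment below instead.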

Source deployment (works on both CentOS 6 and 7)

1. Java environment

yum -y install java-1.8.0
java -version

2. Go environment

tar zxvf /root/go1.8.3.linux-amd64.tar.gz -C /usr/local/
/usr/local/go/bin/go version
mkdir -p /data/go
echo 'export PATH=$PATH:/usr/local/go/bin:/usr/local/codis/bin' >> /etc/profile
echo 'export GOPATH=/data/go' >> /etc/profile
source /etc/profile
go env GOPATH

3. Codis installation

mkdir -p /data/go/src/github.com/CodisLabs/
tar -zxvf /root/codis-3.2.0.tar.gz -C /data/go/src/github.com/CodisLabs/
cd /data/go/src/github.com/CodisLabs/
mv codis-3.2.0/ codis
cd codis/
make
./bin/redis-cli -v
ln -s /data/go/src/github.com/CodisLabs/codis/ /usr/local/codis

cat bin/version
version = unknown version
compile = 2017-09-11 16:58:26 +0800 by go version go1.8.3 linux/amd64

4. Zookeeper installation

tar -zxvf /root/zookeeper-3.4.10.tar.gz -C /usr/local
ln -s /usr/local/zookeeper-3.4.10 /usr/local/zookeeper
echo '/usr/local/zookeeper/bin/zkServer.sh start' >> /etc/rc.local

cat << EOF >> /usr/local/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data/zookeeper/data
clientPort=2181
server.1=192.168.188.120:2888:3888
server.2=192.168.188.121:2888:3888
server.3=192.168.188.122:2888:3888
EOF

### myid
# Note: 2888 is the leader/follower communication port and 3888 is the election port; the number after "server." must match the value in the myid file in the data directory.

mkdir -p /data/zookeeper/data
echo '1' > /data/zookeeper/data/myid
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/zookeeper/bin/zkServer.sh status
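The `server.N` lines and the per-host `myid` values must stay in step. As a small helper (our own sketch, using this article's three zookeeper hosts), the following generates the zoo.cfg server lines for any ensemble; `N` is also the myid each host must write:

```shell
# gen_zk_servers: emit zoo.cfg "server.N=host:2888:3888" lines for a
# space-separated host list; N is also the myid each host must write
# to /data/zookeeper/data/myid.
gen_zk_servers() {
    id=1
    for h in $1; do
        echo "server.${id}=${h}:2888:3888"   # 2888: quorum port, 3888: election port
        id=$((id + 1))
    done
}

gen_zk_servers "192.168.188.120 192.168.188.121 192.168.188.122"
```

Append its output to zoo.cfg on every node, then write each node's own id into its myid file.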

5. For the other nodes, simply pack /data/go/src/github.com/CodisLabs/codis into codis.tar.gz and copy it over.

Before configuring the cluster you need to understand the architecture. Cluster sharding falls into three main types:

Client-side sharding: must be implemented yourself; places strict requirements on the client; the cluster is hard to scale out.

Proxy-side sharding: e.g. Codis; places almost no requirements on the client; the cluster is easy to scale out.

Server-side sharding: e.g. Redis Cluster; requires smart clients that support the cluster protocol; the cluster is easy to scale out.

Codis 3.2 cluster architecture

Server side: codis-fe ------ codis-dashboard ------ codis-proxy ------ codis-group ------ codis-server

Client side: client ------ nginx-tcp ------ codis-proxy

codis-fe can manage multiple codis-dashboard instances.

Each codis-dashboard represents a product line and can manage multiple codis-proxy instances.

Each codis-proxy can manage multiple codis-server groups.

Each codis-server group consists of at least two codis-server instances: at minimum one master and one slave.

From the above, a large Codis cluster can be divided into multiple product lines. Clients connect to the codis-proxy instances of each product line, and business lines can be physically isolated: for example, group1, group2, and group3 belong to the codis-product1 business line, while group4, group5, and group6 belong to the codis-product2 business line. The codis-dashboard configuration is stored in zookeeper.

Special attention:

If the same codis-server joins codis-groups under multiple codis-dashboards, its master role must stay consistent across those dashboards; this represents logical isolation.

If the same codis-server joins a codis-group under only one codis-dashboard, this represents physical isolation.

V. Cluster Configuration

1. Role assignment

192.168.188.120 codis120 codis-server zookeeper
192.168.188.121 codis121 codis-server zookeeper
192.168.188.122 codis122 codis-server zookeeper
192.168.188.123 codis123 codis-server codis-proxy nginx-tcp LVS
192.168.188.124 codis124 codis-server codis-proxy nginx-tcp LVS
192.168.188.125 codis125 codis-server codis-dashboard codis-fe

The base directory for all operations below:

[root@codis125 codis]# pwd -L
/usr/local/codis
[root@codis125 codis]# pwd -P
/data/go/src/github.com/CodisLabs/codis

2. Start codis-dashboard (on codis125)

1) Modify the dashboard.toml configuration file

[root@codis125 codis]# cat config/dashboard.toml

The main lines to modify:

# Set Coordinator, only accept "zookeeper" & "etcd" & "filesystem".
coordinator_name = "zookeeper"
coordinator_addr = "192.168.188.120:2181,192.168.188.121:2181,192.168.188.122:2181"

# Set Codis Product Name/Auth.
product_name = "codis-product1"
product_auth = ""

# Set bind address for admin(rpc), tcp only.
admin_addr = "0.0.0.0:18080"

2) Startup script

Before starting, modify the zookeeper address list and product name in the script:

[root@codis125 codis]# cat ./admin/codis-dashboard-admin.sh
$CODIS_ADMIN_TOOL_BIN -v --remove-lock --product=codis-product1 --zookeeper=192.168.188.120:2181,192.168.188.121:2181,192.168.188.122:2181

[root@codis125 codis]# ./admin/codis-dashboard-admin.sh start

3) Check the log and port

[root@codis125 codis]# cat log/codis-dashboard.log.2017-09-11
2017/09/11 17:42:08 main.go:78: [WARN] set ncpu = 8
2017/09/11 17:42:08 zkclient.go:23: [INFO] zookeeper - zkclient setup new connection to 192.168.188.120:2181,192.168.188.121:2181,192.168.188.122:2181
2017/09/11 17:42:08 zkclient.go:23: [INFO] zookeeper - connected to 192.168.188.121:2181
2017/09/11 17:42:08 topom.go:119: [WARN] create new topom:
{
    "token": "a10e7a35209d1db8f21c8e89a78a6c9a",
    "start_time": "2017-09-11 17:42:08.1058555 +0800 CST",
    "admin_addr": "codis125:18080",
    "product_name": "codis-product1",
    "pid": 18029,
    "pwd": "/usr/local/codis",
    "sys": "Linux codis125 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC x86_64 x86_64 x86_64 GNU/Linux"
}
2017/09/11 17:42:08 main.go:103: [WARN] create topom with config
coordinator_name = "zookeeper"
coordinator_addr = "192.168.188.120:2181,192.168.188.121:2181,192.168.188.122:2181"
admin_addr = "0.0.0.0:18080"
product_name = "codis-product1"
product_auth = ""
migration_method = "semi-async"
migration_parallel_slots = 100
migration_async_maxbulks = 200
migration_async_maxbytes = "32mb"
migration_async_numkeys = 500
migration_timeout = "30s"
sentinel_quorum = 2
sentinel_parallel_syncs = 1
sentinel_down_after = "30s"
sentinel_failover_timeout = "5m"
sentinel_notification_script = ""
sentinel_client_reconfig_script = ""
2017/09/11 17:42:08 topom.go:424: [WARN] admin start service on [::]:18080
2017/09/11 17:42:08 main.go:116: [WARN] option --pidfile = /usr/local/codis/bin/codis-dashboard.pid
2017/09/11 17:42:08 zkclient.go:23: [INFO] zookeeper - authenticated: id=170697207944249344, timeout=40000
2017/09/11 17:42:08 zkclient.go:23: [INFO] zookeeper - re-submitting `0' credentials after reconnect
2017/09/11 17:42:08 main.go:140: [WARN] [0xc42033e120] dashboard is working ...

[root@codis125 codis]# netstat -tulpn | grep codis-dashboa
tcp6 0 0 :::18080 :::* LISTEN 32006/codis-dashboa

4) Check the service

http://192.168.188.125:18080/topom
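The /topom endpoint returns the dashboard's state as JSON, so this check can be scripted. A sketch (the `topom_url` helper is our own, not part of Codis; the commented curl line assumes the dashboard above is reachable):

```shell
# Build the dashboard overview URL for a given admin address.
topom_url() { echo "http://$1/topom"; }

# On a live cluster you could then run, e.g.:
#   curl -s "$(topom_url 192.168.188.125:18080)" | grep product_name
topom_url 192.168.188.125:18080
```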

3. Start codis-proxy (on codis123 and codis124)

1) Modify the codis-proxy startup script

[root@codis124 codis]# cat admin/codis-proxy-admin.sh | grep DASH
CODIS_DASHBOARD_ADDR="192.168.188.125:18080"

2) Modify the proxy.toml configuration

[root@codis124 codis]# cat config/proxy.toml | grep -Ev "^#|^$"
product_name = "codis-product1"
product_auth = ""
session_auth = ""
admin_addr = "0.0.0.0:11080"
proto_type = "tcp4"
proxy_addr = "0.0.0.0:19000"

3) Start codis-proxy

[root@codis124 codis]# ./admin/codis-proxy-admin.sh start

4) Check the log and port

[root@codis124 codis]# cat log/codis-proxy.log.2017-09-11

[root@codis124 codis]# netstat -tulpn | grep codis-proxy
tcp 0 0 0.0.0.0:19000 0.0.0.0:* LISTEN 31971/codis-proxy
tcp6 0 0 :::11080 :::* LISTEN 31971/codis-proxy

4. Start codis-server (on all nodes)

1) Modify the startup scripts

vim /usr/local/codis/admin/codis-server-admin-6379.sh
vim /usr/local/codis/admin/codis-server-admin-6380.sh

The main points to note (the paths must match the redis config below):

[root@codis125 codis]# cat /usr/local/codis/admin/codis-server-admin-6379.sh | grep -Ev "^#|^$" | grep 6379
CODIS_SERVER_PID_FILE=/data/redis/6379/redis_6379.pid
CODIS_SERVER_LOG_FILE=/data/redis/6379/redis_6379.log
CODIS_SERVER_CONF_FILE=$CODIS_CONF_DIR/redis-6379.conf

2) Modify the server configuration

[root@codis120 codis]# mkdir -p /data/redis/6379
[root@codis120 codis]# mkdir -p /data/redis/6380
[root@codis120 codis]# vim /usr/local/codis/config/redis-6379.conf
[root@codis120 codis]# vim /usr/local/codis/config/redis-6380.conf

The main points to note:

[root@codis120 codis]# cat /usr/local/codis/config/redis-6379.conf | grep -Ev "^#|^$" | grep 6379
port 6379
pidfile /data/redis/6379/redis_6379.pid
logfile "/data/redis/6379/redis_6379.log"
dir /data/redis/6379

3) Start the codis-server service

[root@codis120 codis]# ./admin/codis-server-admin-6379.sh start
/usr/local/codis/admin/../config/redis-6379.conf
Starting codis-server...

[root@codis120 codis]# ./admin/codis-server-admin-6380.sh start
/usr/local/codis/admin/../config/redis-6380.conf
Starting codis-server...

4) Check the log and port

[root@codis120 codis]# netstat -tulpn | grep codis-server
tcp 0 0 192.168.188.120:6379 0.0.0.0:* LISTEN 22231/codis-server
tcp 0 0 192.168.188.120:6380 0.0.0.0:* LISTEN 22308/codis-server

5. Start codis-fe (on codis125)

1) Modify the codis-fe startup script

[root@codis125 codis]# cat admin/codis-fe-admin.sh

The main lines to modify:

#!/usr/bin/env bash
#COORDINATOR_NAME="filesystem"
#COORDINATOR_ADDR="/tmp/codis"
COORDINATOR_NAME="zookeeper"
COORDINATOR_ADDR="192.168.188.120:2181,192.168.188.121:2181,192.168.188.122:2181"

2) Start codis-fe

[root@codis125 codis]# ./admin/codis-fe-admin.sh start

3) Check the log and port

[root@codis125 codis]# cat log/codis-fe.log.2017-09-11
2017/09/11 19:24:32 main.go:101: [WARN] set ncpu = 8
2017/09/11 19:24:32 main.go:104: [WARN] set listen = 0.0.0.0:9090
2017/09/11 19:24:32 main.go:120: [WARN] set assets = /usr/local/codis/bin/assets
2017/09/11 19:24:32 main.go:155: [WARN] set --zookeeper = 192.168.188.120:2181,192.168.188.121:2181,192.168.188.122:2181
2017/09/11 19:24:32 zkclient.go:23: [INFO] zookeeper - zkclient setup new connection to 192.168.188.120:2181,192.168.188.121:2181,192.168.188.122:2181
2017/09/11 19:24:32 main.go:209: [WARN] option --pidfile = /usr/local/codis/bin/codis-fe.pid
2017/09/11 19:24:32 zkclient.go:23: [INFO] zookeeper - connected to 192.168.188.120:2181
2017/09/11 19:24:32 zkclient.go:23: [INFO] zookeeper - authenticated: id=98639613905403907, timeout=40000
2017/09/11 19:24:32 zkclient.go:23: [INFO] zookeeper - re-submitting `0' credentials after reconnect

[root@codis125 codis]# netstat -tupnl | grep codis-fe
tcp6 0 0 :::9090 :::* LISTEN 32141/codis-fe

4) Access the panel

http://192.168.188.125:9090/#codis-product1

VI. codis-fe Panel Operations

1. Add groups via codis-fe

Open the cluster administration page in a web browser (FE address: http://192.168.188.125:9090/#codis-product1) and select the cluster codis-product1 that we just built. In the Proxy column you can see the proxies we started, but the Group column is empty, because the codis-server instances we started have not yet been added to the cluster. Enter 1 in the Add NEW GROUP row and click NEW GROUP; then enter the address of a codis-server we just started in the Add Server row and click Add Server to add it to the group just created.

Repeat this to add 6 groups and 12 codis-server instances. By default the first instance added to each group becomes the master and the second becomes the slave; two instances on the same node must not be placed in the same group.
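The same group setup can also be scripted with codis-admin instead of clicking through the FE. A dry-run sketch that only prints the commands (the --create-group/--group-add flag names are assumed from the release3.2 codis-admin help; verify them against your build before removing the echo):

```shell
# Print the codis-admin commands that would create 6 groups and add a
# master/slave pair to group 1 (addresses follow this article's layout).
gen_group_cmds() {
    dash=$1
    for gid in 1 2 3 4 5 6; do
        echo "codis-admin --dashboard=${dash} --create-group --gid=${gid}"
    done
    echo "codis-admin --dashboard=${dash} --group-add --gid=1 --addr=192.168.188.120:6379"
    echo "codis-admin --dashboard=${dash} --group-add --gid=1 --addr=192.168.188.121:6380"
}

gen_group_cmds 192.168.188.125:18080
```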

2. Initialize slots via codis-fe

The slots of a new cluster are in the offline state, so they must be initialized (the 1024 slots assigned across the groups). The quickest way is the Rebalance All Slots button provided by the FE: one click automatically distributes the 1024 slots across the 6 groups and completes the cluster setup.
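To see what an even assignment looks like before clicking the button, the arithmetic can be sketched in shell (this only mirrors the idea of an even split; the actual rebalance algorithm is implemented inside the dashboard):

```shell
# slot_ranges: print a contiguous slot range per group when the 1024
# slots are split as evenly as possible (earlier groups absorb the
# remainder when 1024 % N != 0).
slot_ranges() {
    n=$1; total=1024; base=$((total / n)); extra=$((total % n)); lo=0; g=1
    while [ "$g" -le "$n" ]; do
        size=$base
        if [ "$g" -le "$extra" ]; then size=$((base + 1)); fi
        hi=$((lo + size - 1))
        echo "group-${g}: slots ${lo}-${hi}"
        lo=$((hi + 1)); g=$((g + 1))
    done
}

slot_ranges 6
```

For 6 groups, the first four groups receive 171 slots each and the last two 170, covering slots 0-1023 exactly.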

VII. Proxy HA

1. Install LVS and nginx-tcp on codis123 and codis124.

2. Configure VIP + port 19000 for the first codis-dashboard's business line (and so on for the others):

192.168.188.131:19000
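As a sketch of the nginx side, written in the same heredoc style as the zoo.cfg step above: this uses the modern nginx stream module (assumes nginx built with --with-stream) rather than the older nginx-tcp patch named in this article, and the upstream addresses are the two proxies above; adjust for your VIP and ports.

```shell
# Write a minimal nginx TCP load-balancing fragment for the two proxies.
cat << 'EOF' > /tmp/codis_stream.conf
stream {
    upstream codis_proxies {
        server 192.168.188.123:19000;
        server 192.168.188.124:19000;
    }
    server {
        listen 19000;               # clients connect to the VIP on this port
        proxy_pass codis_proxies;
    }
}
EOF
```

Include the fragment from the main nginx.conf and reload nginx on both LVS/nginx hosts.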
