Twemproxy as a proxy for the key-value database SSDB: implementing distributed data storage


SSDB is a high-performance NoSQL database that supports rich data structures. It can be used as a replacement for Redis, or alongside Redis to store billion-entry lists, and it is already in use at many well-known companies. We are using SSDB to move keys out of our existing Redis deployment in order to break through its 120 GB storage limit (Redis could of course be scaled further, but we judged the cost of doing so too high and gave that up).

Twemproxy is a proxy server for Redis and Memcached open-sourced by Twitter. Here we use Twemproxy to front an SSDB cluster so that data is stored distributed across shards; this post shares that setup.

1. Node Planning:

Twemproxy      192.168.0.100
SSDB1 master   192.168.0.101
SSDB1 slave    192.168.0.102
SSDB2 master   192.168.0.103
SSDB2 slave    192.168.0.104

2. Install SSDB

## Unzip the downloaded SSDB package
$ unzip ssdb-master.zip
## Install gcc, gcc-c++, make and other build tools first
## Compile and install SSDB; it will be installed under /usr/local/ssdb/
$ cd ssdb-master
$ make && make install
## Enter the installation directory
$ cd /usr/local/ssdb/
## Start ssdb-server
$ ./ssdb-server -d ssdb.conf
ssdb 1.8.2
copyright (c) 2012-2014 ssdb.io
## Verify that it started; if port 8888 is listening, it is up
$ netstat -alnut | grep 8888
tcp        0      0 127.0.0.1:8888        0.0.0.0:*        LISTEN
## Connect with the client
$ ./ssdb-cli -p 8888
ssdb (cli) - ssdb command line tool.
copyright (c) 2012-2014 ssdb.io
'h' or 'help' for help, 'q' to quit.
server version: 1.8.2
ssdb 127.0.0.1:8888>
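As a quick sanity check (assuming the default configuration, which listens on 127.0.0.1:8888; the key name below is only an example), you can write, read back and delete a throw-away key from the ssdb-cli prompt:

ssdb 127.0.0.1:8888> set test_key hello
ssdb 127.0.0.1:8888> get test_key
ssdb 127.0.0.1:8888> del test_key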

3. Configure master-slave replication (SSDB1 as the example)

## Modify the SSDB1 master configuration file (ssdb.conf) as follows:

# ssdb-server config
# MUST indent by TAB!

# relative to path of this file, directory must exist
work_dir = ./var
pidfile = ./var/ssdb.pid

server:
	ip: 192.168.0.101
	port: 8888
	# bind to public ip
	#ip: 0.0.0.0
	# format: allow|deny: all|ip_prefix
	# multiple allows or denys is supported
	#deny: all
	#allow: 127.0.0.1
	#allow: 192.168
	# auth password must be at least 32 characters
	#auth: very-strong-password

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		# to identify a master even if it moved (ip, port changed)
		# if set to empty or not defined, ip:port will be used.
		#id: svc_2
		# sync|mirror, default is sync
		#type: sync
		#ip: 127.0.0.1
		#port: 8889

logger:
	level: debug
	output: log.txt
	rotate:
		size: 1000000000

leveldb:
	# in MB
	cache_size: 500
	# in KB
	block_size: 32
	# in MB
	write_buffer_size: 64
	# in MB
	compaction_speed: 1000
	# yes|no
	compression: yes

## The SSDB1 slave configuration file is identical except for the server ip and the
## slaveof section, which must point at the master:

server:
	ip: 192.168.0.102
	port: 8888

replication:
	binlog: yes
	# Limit sync speed to *MB/s, -1: no limit
	sync_speed: -1
	slaveof:
		# to identify a master even if it moved (ip, port changed)
		# if set to empty or not defined, ip:port will be used.
		id: svc_1
		# sync|mirror, default is sync
		type: sync
		ip: 192.168.0.101
		port: 8888


This completes the SSDB1 master and SSDB1 slave; SSDB2 master and SSDB2 slave are configured the same way. Note that if you want to add a slave to an SSDB node that already holds data, then unlike MySQL there is no need to copy the underlying data to the slave first: just point the slave's configuration file at the master, and SSDB will copy and synchronize the underlying data automatically.
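To verify that replication works, you can write a key on the master and read it back on the slave. The example below is only a sketch: it uses redis-cli against the SSDB nodes directly, which relies on SSDB also understanding the Redis protocol (the same property Twemproxy depends on later), and the key name is arbitrary:

## on any machine that can reach the nodes: write a throw-away key on the master
$ redis-cli -h 192.168.0.101 -p 8888 set repl_test 1
## the key should show up on the slave shortly afterwards
$ redis-cli -h 192.168.0.102 -p 8888 get repl_test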


All data in an SSDB database is stored in sorted order, so you can think of the whole database as one big list. Replication starts by copying from the head of this list, one node at a time, with the copy cursor only ever moving forward. If a new binlog entry arrives during this phase, SSDB first determines where the corresponding node sits in the list relative to the cursor. If it is before the cursor (in the part that has already been copied), the binlog entry is forwarded to the slave for execution; if it is after the cursor, it is ignored, because the cursor will eventually reach that position and copy the updated value anyway. It follows that while the copy phase is running, the slave may not see updates on the master immediately. Once the cursor reaches the end of the list, the copy phase ends and master-slave replication enters the sync phase, in which updates are propagated almost immediately (within milliseconds). For more on SSDB, see the project site: http://ssdb.io/zh_cn/.
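To get an idea of how far replication has progressed, you can run SSDB's info command on the master and on the slave with ssdb-cli; the exact fields it prints vary between SSDB versions, so treat this only as a pointer, but the replication section of the output reports the master/slave links and their status:

$ ./ssdb-cli -p 8888
ssdb 127.0.0.1:8888> info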


4. Install Twemproxy

## Install automake, libtool, xz and related tools
$ yum install automake libtool xz -y
## Install autoconf
$ wget http://down1.chinaunix.net/distfiles/autoconf-2.69.tar.xz
$ xz -d autoconf-2.69.tar.xz
$ tar xf autoconf-2.69.tar -C /opt
$ cd /opt/autoconf-2.69
$ ./configure
$ make && make install
## Install twemproxy
$ git clone https://github.com/twitter/twemproxy.git
$ cd twemproxy/
$ autoreconf -fvi
$ ./configure
$ make && make install
## Place the twemproxy configuration file under /etc/
$ mkdir /etc/nutcracker
$ cp conf/nutcracker.yml /etc/nutcracker/nutcracker.yml

5. Configure Twemproxy

## Modify the configuration file
$ vim /etc/nutcracker/nutcracker.yml
beta:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: false
  timeout: 400
  redis: true
  servers:
   - 192.168.0.101:8888:1 server1
   - 192.168.0.103:8888:1 server2
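Before starting the proxy, you can have nutcracker check the configuration file for syntax errors (the -t/--test-conf option in the twemproxy builds I have used):

$ nutcracker -t -c /etc/nutcracker/nutcracker.yml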

Configuration items:

listen: 127.0.0.1:22122    ## the address and port the proxy listens on

hash: fnv1a_64    ## the hash algorithm used to map keys to shards

redis: true    ## whether the back ends being proxied speak the Redis protocol

servers:    ## the list of shard servers

distribution: ketama    ## the sharding algorithm; ketama (consistent hashing), modula (hash modulo) and random are supported

auto_eject_hosts: false    ## whether to automatically remove a node from the server list when it stops responding, and add it back once it responds again

hash_tag: "{}"    ## a two-character delimiter pair; only the part of the key between the two characters is hashed. Suppose an object in SSDB has several keys, e.g. the person kora has a name key name:kora:, an age key age:kora: and an address key loc:kora:. To guarantee that these three keys are routed to the same back-end shard, set hash_tag to "::" so that only "kora" is hashed; this avoids cross-shard lookups when reading the object back (see the example below).
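For example, assuming the pool above is changed to use hash_tag: "::", only the substring between the first two colons is hashed, so the three keys below (the values are made up for illustration) all hash on "kora" and land on the same back-end shard:

$ redis-cli -p 22122 set name:kora: "Kora"
$ redis-cli -p 22122 set age:kora: 24
$ redis-cli -p 22122 set loc:kora: "Beijing"

Without a hash tag, each full key would be hashed on its own and the three keys could end up on different shards.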


6. Start Twemproxy

## Start the proxy
$ nutcracker -d -c /etc/nutcracker/nutcracker.yml -p /var/run/redisproxy.pid -o /var/log/redisproxy.log
## Verify that port 22122 is listening
$ netstat -alnut | grep 22122
tcp        0      0 127.0.0.1:22122      0.0.0.0:*      LISTEN

7. Test the sharding function

## Note: you cannot connect to the Twemproxy proxy with ssdb-cli, but because the SSDB protocol is compatible with the Redis protocol, we can use the Redis client to connect to the proxy. I asked the SSDB author about this, and the answer is to use redis-cli:
$ redis-cli -p 22122
127.0.0.1:22122>

## Insert some data through the proxy
[Screenshot: keys written through the proxy with redis-cli]

## View the data on back-end SSDB server1: 3 of the keys landed there
[Screenshot: key listing on SSDB server1, 3 keys]

## View the data on back-end SSDB server2: the other 5 keys landed there
[Screenshot: key listing on SSDB server2, 5 keys]
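Since the original screenshots are not reproduced here, the following hedged transcript shows the same kind of test: write a handful of keys through the proxy with redis-cli, then query the two back-end masters directly (again relying on SSDB speaking the Redis protocol). The key names are arbitrary, and exactly how the keys split across the shards depends on the hash, so the distribution will not necessarily be 3/5 as in the screenshots:

## write eight test keys through the proxy
$ for i in $(seq 1 8); do redis-cli -p 22122 set key:$i value$i; done
## see which keys landed on shard 1 (192.168.0.101) and which on shard 2 (192.168.0.103)
$ for i in $(seq 1 8); do echo -n "key:$i on shard1: "; redis-cli -h 192.168.0.101 -p 8888 get key:$i; done
$ for i in $(seq 1 8); do echo -n "key:$i on shard2: "; redis-cli -h 192.168.0.103 -p 8888 get key:$i; done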

The Twemproxy proxy gives us distributed storage on top of SSDB, but it cannot do online expansion of an SSDB cluster. If you need to expand an existing distributed cluster, the simplest approach is to build a new cluster of the desired size, have the application double-write to both the old and the new cluster, and then migrate the old cluster's data into the new cluster through its proxy. It is important to only set a key in the new cluster when it does not already exist there, to prevent old data from overwriting the newer, double-written data. Once the migration is complete, choose a suitable time to switch the application over to the new cluster; because the application talks to the Twemproxy proxy, the switch is transparent to the front end.
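A minimal sketch of the "only set when the key does not exist" rule: when back-filling a key from the old cluster into the new one, use SETNX rather than SET, so that a value the application has already double-written into the new cluster (and which is therefore newer) is never overwritten. The key name and the new proxy's port 22123 are assumptions for the example:

## read the value from the old cluster's proxy
$ OLD_VALUE=$(redis-cli -p 22122 get user:1001)
## back-fill it into the new cluster's proxy; SETNX returns 1 if the key was written,
## 0 if it already existed and was left untouched
$ redis-cli -p 22123 setnx user:1001 "$OLD_VALUE"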


This article is from the "Brave forward, resolutely left" blog; please contact the author before reprinting.
