Common Redis Cluster Schemes (Repost)

Source: Internet
Author: User
Tags: failover, memcached, redis, redis cluster, redis server

Some time ago I interviewed at Alibaba, and the interviewer asked me: besides the official Redis Cluster mode introduced in Redis 3.0 (http://www.redis.cn/topics/cluster-tutorial.html), what other Redis cluster schemes do you know?

After the interview, I looked up the relevant material and wrote down several common Redis cluster schemes.

1. The official cluster scheme: Redis Cluster

Redis Cluster is a server-side sharding technology that became officially available in version 3.0.
In Redis Cluster, sharding uses the concept of slots: the key space is divided into 16,384 slots. Every key-value pair that enters Redis is hashed to one of these 16,384 slots based on its key. The hash algorithm is simple: take the CRC16 of the key modulo 16384. Each node in the cluster is responsible for a portion of the 16,384 slots; in other words, every slot is handled by exactly one node. The command to have multiple Redis instances share the 16,384 slots is as follows:

./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005

Here, the host 127.0.0.1 runs six Redis instances on ports 7000 through 7005. The option --replicas 1 means we want one slave for every master in the cluster. The end result is that three of the Redis instances become masters holding the 16,384 slots between them, while the other three act as their slaves.
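To make the slot mapping above concrete, here is a minimal sketch in Java of slot = CRC16(key) mod 16384 (Redis Cluster uses the CRC16-XModem variant; this is an illustration, not the code inside Redis, and it ignores hash tags). The key is made up.

public class SlotDemo {
    // CRC16-XModem: polynomial 0x1021, initial value 0x0000, no reflection.
    static int crc16(byte[] data) {
        int crc = 0x0000;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    public static void main(String[] args) {
        String key = "user:1000";                      // example key
        int slot = crc16(key.getBytes()) % 16384;      // slot this key is routed to
        System.out.println("key " + key + " -> slot " + slot);
    }
}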
When nodes are added or removed dynamically, the 16,384 slots have to be redistributed and the keys inside the affected slots migrated. In the current implementation this process is still semi-automatic and needs human intervention. The commands for adding another master node to the cluster and assigning slots to it are as follows:
Start a Redis instance on port 7006, then run:

./redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000

If you are adding a slave node to the cluster, you need to pass the --slave option:

./redis-trib.rb add-node --slave 127.0.0.1:7006 127.0.0.1:7000

This form does not specify a master for the node being added; in that case a master is chosen at random from the existing masters and the new node becomes its replica.

You only need to supply the address of any one node already in the cluster; redis-trib will discover the other nodes automatically.

The newly added Redis instance does not hold any slots yet, so a reshard has to be run to assign some to it:

./redis-trib.rb reshard 127.0.0.1:7000

For Redis Cluster to function, every one of the 16,384 slots must have a working node behind it; if a node fails, the slots it is responsible for become unavailable and the whole cluster stops working properly. To improve availability, the officially recommended approach is to configure nodes in master-slave groups: one master with n slaves. If a master fails, Redis Cluster elects one of its slaves and promotes it to master, and the cluster as a whole continues to serve requests.

Redis Cluster's node discovery, failure detection, and failover all rely on every node in the cluster talking to the others over what is called the cluster bus. The cluster bus uses a dedicated port number: the client-facing port plus 10000. For example, a node that serves clients on port 6379 talks to other nodes on port 16379. Node-to-node communication uses a special binary protocol.

From the client's point of view the cluster behaves as a single whole: the client can connect to any node and issue commands, just as with a single Redis instance. When the key being operated on is not assigned to that node, Redis returns a redirection pointing at the correct node, somewhat like a 302 redirect on a browser page. So far there are not many proven cases of this model running in large-scale production environments.
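As an illustration of the "connect to any node" behaviour, here is a minimal sketch using the Jedis client's JedisCluster class, which handles the redirections for you. The seed host and port follow the redis-trib example above; the key is made up.

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterDemo {
    public static void main(String[] args) {
        // Seed the client with any one node; it discovers the rest of the cluster by itself.
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("127.0.0.1", 7000))) {
            // The client computes the slot for the key and sends the command to the
            // responsible node, following redirections when slots have moved.
            cluster.set("user:1000:name", "alice");
            System.out.println(cluster.get("user:1000:name"));
        }
    }
}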

2. Redis Sharding (client-side sharding)

Redis 3.0 introduced the official clustering technology, which solves the problem of having multiple Redis instances cooperate to provide a single service. Redis Cluster is server-side sharding: keys are assigned to instances by a fixed algorithm, and the instances coordinate and communicate with one another to present one service to the outside world.
Running a service across multiple Redis instances is much more complicated than running a single instance; it raises problems of data placement, coordination, fault tolerance, capacity expansion, and so on. Here we look at a lightweight client-side technique, Redis Sharding, which was the most common way of clustering multiple Redis instances before Redis Cluster appeared. The main idea is to hash the key of each Redis entry: the hash function maps a specific key to a specific Redis node, so the client knows which node to send the operation to.

Fortunately, the Java Redis client Jedis already supports Redis Sharding, through ShardedJedis and the pooled ShardedJedisPool. Jedis's Redis Sharding implementation has the following characteristics:

It uses consistent hashing, hashing both the key and the node name and then matching them on the ring. The main reason for consistent hashing is that adding or removing a node does not trigger a full rehash; only the keys adjacent to the affected node are remapped, so the impact is small.

To avoid consistent hashing concentrating the reassignment pressure on the neighbouring nodes, ShardedJedis virtualizes 160 virtual nodes for each Redis node based on its name (if no name is given, Jedis assigns a default one). Depending on a node's weight, it can also create a multiple of 160 virtual nodes. Because keys are matched against virtual nodes, adding or removing a Redis node moves keys fairly evenly across the remaining nodes instead of affecting only the neighbours.

ShardedJedis also supports a key tag pattern, that is, extracting part of the key (the key tag) for sharding. By naming keys sensibly, a group of related keys can be placed on the same Redis node, which is important for avoiding cross-node access to related data.
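A minimal usage sketch of the ShardedJedis API described above, assuming an older Jedis release in which ShardedJedis is still available (it has been removed from recent major versions); the hosts, ports, and key are only examples.

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;

import java.util.Arrays;
import java.util.List;

public class ShardedJedisDemo {
    public static void main(String[] args) {
        // Two independent Redis instances; they know nothing about each other.
        List<JedisShardInfo> shards = Arrays.asList(
                new JedisShardInfo("127.0.0.1", 6379),
                new JedisShardInfo("127.0.0.1", 6380));

        try (ShardedJedisPool pool = new ShardedJedisPool(new JedisPoolConfig(), shards);
             ShardedJedis jedis = pool.getResource()) {
            // The client hashes the key on its consistent-hash ring and picks a shard.
            jedis.set("user:1000:name", "alice");
            System.out.println(jedis.get("user:1000:name"));
        }
    }
}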

Redis Sharding does the sharding on the client side. On the server side, the Redis instances remain completely independent and need no modification at all, and no extra middleware is required in between. This makes it a very lightweight and flexible way to run multiple Redis instances as a cluster.

Of course, this lightweight and flexible approach inevitably compromises on other cluster capabilities, for example scaling. When a Redis node is added, even with consistent hashing some keys will no longer map to their old node, so their values would need to be migrated. Handling key-value migration in a lightweight client-side sharding layer is unrealistic; it requires the application either to tolerate losing the data held in Redis or to reload it from the backing database. But sometimes, when the cache layer is bypassed and requests hit the database layer directly, the system comes under great pressure.

There are ways to improve this. The Redis author recommends an approach called presharding: based on the expected scale of the system, deploy as many Redis instances as possible up front. Each instance uses very few system resources and a single physical machine can host several, so all of them take part in the sharding. When capacity needs to grow, pick one instance as a master and have a newly added, larger Redis node replicate it as a slave. Once the data is synchronized, change the sharding configuration so that the shard which pointed at the original instance now points at the new node, promote the new node to master, and stop using the original instance.

In other words, presharding allocates enough shards in advance, and scaling up means replacing the Redis instance behind a shard with a new, larger one. The set of shards never changes, so no keys move from one shard region to another; only the keys belonging to that one shard are synchronized from the original Redis instance to the new one.
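A hedged sketch, using Jedis, of the replica-then-promote steps described above (SLAVEOF followed by SLAVEOF NO ONE). The hosts and ports are made up, and a real migration would also wait for replication to catch up and update the client shard list before cutting over.

import redis.clients.jedis.Jedis;

public class PreshardingScaleUp {
    public static void main(String[] args) {
        String oldHost = "10.0.0.1"; int oldPort = 6379; // instance currently backing the shard (example)
        String newHost = "10.0.0.2"; int newPort = 6379; // larger instance that will take over (example)

        // 1. Make the new instance a replica of the old one.
        try (Jedis newNode = new Jedis(newHost, newPort)) {
            newNode.slaveof(oldHost, oldPort);
        }

        // 2. In practice: poll INFO replication on the new node until the link is up
        //    and the replication offset has caught up with the master.

        // 3. Promote the new instance so it accepts writes on its own.
        try (Jedis newNode = new Jedis(newHost, newPort)) {
            newNode.slaveofNoOne();
        }

        // 4. Repoint the shard configuration (e.g. the ShardedJedis shard list) at
        //    newHost:newPort and decommission the old instance.
    }
}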

It is not only adding and removing Redis nodes that can lose key values; a bigger risk is a Redis node going down unexpectedly. As mentioned in the article "Redis persistence", to avoid hurting Redis performance the AOF and RDB persistence features should be left off where possible, and instead Redis should be run in a master-standby pair: if the master goes down, no data is lost because the standby still holds a copy. With this in place, each shard in our architecture becomes a Redis node pair consisting of a master Redis and a standby Redis. When the master goes down, the standby takes over, is promoted to master, and continues to provide the service. A master-standby pair forms one logical Redis node, and automatic failover keeps that node highly available.

With high traffic, a single node can still be under heavy access pressure even with sharding, and the load needs to be broken down further. Typically, an application's reads against Redis far outnumber its writes, often by several times, so reads and writes can be separated and more instances provided for reading.

Master-slave mode can be used to achieve this read-write separation: the master handles writes, the slaves handle reads only, and several slaves can hang off one master. Under Sentinel monitoring (Redis Sentinel provides monitoring and failover for Redis, which gives the master-standby setup its high availability), node failures are also detected and handled automatically.
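A minimal sketch of letting the client follow Sentinel-managed failover, assuming the Jedis JedisSentinelPool class; the master name "mymaster" and the Sentinel addresses are only examples.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisSentinelPool;

import java.util.HashSet;
import java.util.Set;

public class SentinelDemo {
    public static void main(String[] args) {
        // Sentinel endpoints and the monitored master name are deployment-specific.
        Set<String> sentinels = new HashSet<>();
        sentinels.add("127.0.0.1:26379");
        sentinels.add("127.0.0.1:26380");

        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels, new JedisPoolConfig());
             Jedis master = pool.getResource()) {
            // The pool asks Sentinel for the current master address, so after a
            // failover new connections automatically go to the promoted node.
            master.set("counter", "1");
        }
    }
}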

3. Using proxy middleware to build a large-scale Redis cluster

We have now seen two ways of clustering Redis servers: client-side sharding with Redis Sharding, and server-side sharding with Redis Cluster.
The advantage of client-side sharding is that the server-side Redis instances are independent of each other; each Redis instance runs like an ordinary standalone server, the system scales linearly with ease, and it is very flexible. Its drawbacks: because the sharding logic lives in the client, operations and maintenance become harder as the deployment grows; whenever the server-side Redis topology changes, every client has to be updated; and connections cannot be shared between clients, so as the application scales the wasted connection resources limit optimization.

The advantage of server-side sharding with Redis Cluster is that clients do not need to notice when the cluster topology changes; a client uses the Redis cluster just like a single Redis server, and operations and maintenance are more convenient.

However, the official Redis Cluster release has not been out for long, and its stability and performance still need time to prove themselves, especially in large-scale deployments.

Can the advantages of both be combined? That is, keep the server-side instances independent and linearly scalable, while handling the sharding centrally so it can be managed in one place. The Redis proxy middleware Twemproxy, described below, is exactly such a technique: it uses a middleware layer to do the sharding.

Twemproxy sits between the client and the servers: it receives requests from clients, processes them (for example, performs the sharding), and then forwards them to the real Redis servers at the back end. In other words, the client does not access the Redis servers directly but goes through Twitter's open-source twemproxy proxy middleware (http://blog.nosqlfan.com/html/4147.html).
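Because twemproxy speaks the Redis protocol, the client simply talks to the proxy as if it were a single Redis server. A minimal sketch with Jedis, assuming a twemproxy instance listening on 127.0.0.1:22121 (the port used in twemproxy's example configuration); the key is made up.

import redis.clients.jedis.Jedis;

public class TwemproxyDemo {
    public static void main(String[] args) {
        // Connect to the proxy rather than to any individual Redis node; twemproxy
        // hashes the key to one of its backend servers and forwards the command.
        try (Jedis proxy = new Jedis("127.0.0.1", 22121)) {
            proxy.set("user:1000:name", "alice");
            System.out.println(proxy.get("user:1000:name"));
        }
    }
}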

The processing inside twemproxy is stateless, so twemproxy itself can easily be clustered, which avoids it becoming a single point of pressure or failure.

Twemproxy, also known as nutcracker, grew out of Twitter's practice of running redis/memcached clusters; after it had proven itself in operation, the code was contributed to the open-source community. It is lightweight and efficient and written in C. The project page is: GitHub - twitter/twemproxy: A fast, light-weight proxy for memcached and redis.

The twemproxy backend supports not only Redis but also memcached, which reflects the specific environment of the Twitter system.

Because a middleware layer is used, twemproxy can share connections to the backend, reducing the number of connections that clients would otherwise open directly against the backend servers. It also provides the sharding that lets the backend server cluster scale horizontally, and centralized operations and maintenance become easier.

Of course, also because a proxy layer is involved, performance is lower than when the client talks to the server directly; measurements put the loss at roughly 20%.
