How should a Redis clustering solution be built?

Scenario 1: The official Redis clustering solution, Redis Cluster

Redis Cluster is a server-side sharding technology.

It was officially introduced in Redis 3.0 to solve the problem of coordinating multiple Redis instances as a single service. Because it arrived relatively late, there are not yet many proven success stories in large-scale production environments, and it will take time to mature.

In Redis Cluster, sharding uses the concept of a slot: the key space is divided into 16,384 slots. Every key-value pair that enters Redis is hashed by its key and assigned to one of these 16,384 slots. The hash algorithm is simple: take the CRC16 of the key and reduce it modulo 16384.
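As an illustration of this slot calculation, here is a minimal, self-contained Java sketch of the CRC16-modulo-16384 mapping described above. The CRC16 variant shown (CCITT/XModem, polynomial 0x1021, initial value 0x0000) is the one Redis Cluster is documented to use; the class and method names are hypothetical.

import java.nio.charset.StandardCharsets;

public final class SlotDemo {

    // Bitwise CRC16 (CCITT/XModem): polynomial 0x1021, initial value 0x0000, no reflection.
    static int crc16(byte[] bytes) {
        int crc = 0x0000;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = (crc & 0x8000) != 0 ? (crc << 1) ^ 0x1021 : crc << 1;
            }
            crc &= 0xFFFF; // keep the value within 16 bits
        }
        return crc;
    }

    // slot = CRC16(key) mod 16384.
    // Note: real cluster clients first look for a {hash tag} in the key and hash only that part.
    static int slotFor(String key) {
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        // Expected to print 12182, the slot shown for "foo" in the redis-cli session later in this article.
        System.out.println(slotFor("foo"));
    }
}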

Each node in a Redis cluster is responsible for a portion of the 16,384 slots; in other words, every slot is handled by exactly one node. For example, in a three-node cluster the slots might be allocated as 0-5460, 5461-10922 and 10923-16383:

M: 434e5ee5cf198626e32d71a4aee27bc4058b4e45 127.0.0.1:7000
   slots:0-5460 (5461 slots) master
M: 048a0c9631c87e5ecc97a4ce5834d935f2f938b6 127.0.0.1:7001
   slots:5461-10922 (5462 slots) master
M: 04ae4184b2853afb8122d15b5b2efa471d4ca251 127.0.0.1:7002
   slots:10923-16383 (5461 slots) master

What happens when nodes are added or removed?

When nodes are added or removed dynamically, the 16,384 slots need to be reassigned, and the key-value pairs stored in the affected slots must be migrated along with them. At present this process is only semi-automatic and requires manual intervention.

In the event of a node failure

If a node fails, the slots it is responsible for become unavailable and the whole Redis cluster stops working properly. The official recommendation is therefore to configure each node as a master-slave structure, that is, one master node with n slave nodes hanging off it.

This is very similar to the Redis Sharding scenario, where server nodes are arranged in a master-slave architecture monitored by Sentinel; the difference is that Redis Cluster itself provides the failover capability.

Communication port

The new-node discovery, failure detection and failover capabilities of Redis Cluster rely on every node communicating with the other nodes in the cluster over what is known as the cluster bus.

The communication port number is the node's port number plus 10000. If a node's port is 6379, its cluster bus port is 16379. Communication between nodes uses a dedicated binary protocol.

To the client, the whole cluster appears as a single entity: the client can connect to any node and operate on it just like a single Redis instance. When the key being operated on is not assigned to that node, Redis returns a redirection instruction pointing at the correct node.

For example, three Redis instances are deployed on 127.0.0.1, on ports 7000, 7001 and 7002, to form a Redis cluster. Setting <foo, hello> on the node listening on port 7000 is redirected, because the key foo maps to a slot owned by the node on port 7002. The subsequent GET foo command is likewise redirected to the node on 7002, which fetches the data and returns it to the client.

[... create-cluster]# redis-cli -c -p 7000
127.0.0.1:7000> set foo hello
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
127.0.0.1:7000> get foo
-> Redirected to slot [12182] located at 127.0.0.1:7002
"hello"
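Client libraries that speak the cluster protocol follow these redirections automatically. As a rough illustration (not from the original article), here is a minimal Jedis sketch, assuming a Jedis version that provides JedisCluster and the same three nodes as above:

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterClientDemo {
    public static void main(String[] args) {
        // Seed the client with any subset of the cluster nodes; it discovers the rest itself.
        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("127.0.0.1", 7000));
        nodes.add(new HostAndPort("127.0.0.1", 7001));
        nodes.add(new HostAndPort("127.0.0.1", 7002));

        JedisCluster cluster = new JedisCluster(nodes);

        // The redirection seen in the redis-cli session is handled internally:
        // the client computes the slot, picks the owning node, and retries on redirect.
        cluster.set("foo", "hello");
        System.out.println(cluster.get("foo")); // "hello"
    }
}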

Scenario 2: Redis Sharding cluster

Redis Sharding is a client-side sharding technology.

Before Redis Cluster appeared, Redis Sharding was arguably the most widely used multi-instance Redis clustering method in the industry. The main idea is to hash the key of each piece of Redis data; the hash function maps a specific key to a specific Redis node.

In this way the client knows which Redis node to send each operation to. Note that all of this is done on the client side. Sharding architecture:

The Java Redis client Jedis supports Redis Sharding through ShardedJedis and, combined with a connection pool, ShardedJedisPool. The Jedis implementation of Redis Sharding has the following characteristics (a usage sketch follows the list):

1. It uses a consistent hashing algorithm

Both the keys and the node names are hashed and then mapped onto the same ring; the hash algorithm used is MURMUR_HASH. The main reason for using consistent hashing is that adding or removing nodes does not trigger a full rehash of every key: consistent hashing only affects the key assignments of adjacent nodes, so the impact is small. For more on consistent hashing, see: http://blog.csdn.net/cywosp/article/details/23397179/

2. Virtual nodes

ShardedJedis virtualizes 160 virtual nodes for each Redis node, hashed by name. With this virtual-node mapping, keys can be redistributed more evenly across the Redis nodes when a node is added or removed, rather than only the neighbouring node being affected. For example, Redis node 1 is virtualized into node1-1 and node1-2 and scattered around the consistent-hash ring. When object1 and object2 are hashed, they select the closest virtual nodes, node1-1 and node1-2, which are virtual nodes of node 1, so the objects are actually stored on node 1.

By adding virtual nodes you can guarantee balance: each Redis machine stores roughly the same amount of data, rather than some machines storing much more and others much less.

3. ShardedJedis supports a keyTagPattern mode

A part of the key, the key tag, is extracted and used for sharding, so that by naming keys appropriately, a group of related keys can be placed on the same Redis node and cross-node access avoided. In other words, keys that match the same tag pattern are stored by the client on the same Redis node.
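To make the consistent-hashing and virtual-node ideas from points 1 and 2 concrete, here is a small, self-contained Java sketch of a hash ring with virtual nodes. It is illustrative only and is not the ShardedJedis implementation; the node names, the replica count parameter and the use of MD5 in place of MurmurHash are assumptions made to keep the sketch dependency-free.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.*;

// Illustrative consistent-hash ring with virtual nodes (not the actual ShardedJedis code).
public class HashRing {
    private final SortedMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodesPerNode;

    public HashRing(List<String> nodes, int virtualNodesPerNode) {
        this.virtualNodesPerNode = virtualNodesPerNode;
        for (String node : nodes) {
            addNode(node);
        }
    }

    public void addNode(String node) {
        // Each physical node is placed on the ring many times ("virtual nodes"),
        // which spreads its key ranges out and keeps the distribution balanced.
        for (int i = 0; i < virtualNodesPerNode; i++) {
            ring.put(hash(node + "#" + i), node);
        }
    }

    public void removeNode(String node) {
        for (int i = 0; i < virtualNodesPerNode; i++) {
            ring.remove(hash(node + "#" + i));
        }
    }

    public String nodeFor(String key) {
        // Walk clockwise from the key's hash position to the first virtual node.
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        Long point = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(point);
    }

    private static long hash(String s) {
        try {
            // MD5 stands in for MurmurHash here purely to keep the sketch self-contained.
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            return ((long) (d[3] & 0xFF) << 24) | ((long) (d[2] & 0xFF) << 16)
                 | ((long) (d[1] & 0xFF) << 8) | (d[0] & 0xFF);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        HashRing ring = new HashRing(Arrays.asList("node1", "node2", "node3"), 160);
        System.out.println(ring.nodeFor("user:1001")); // the node this key maps to
        ring.removeNode("node2");                      // only keys that lived on node2 move
        System.out.println(ring.nodeFor("user:1001"));
    }
}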
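And here is a rough ShardedJedisPool usage sketch for points 1 and 3, assuming an older Jedis 2.x release that still ships the sharding classes (ShardedJedis has been removed from recent Jedis versions); the hosts, ports and pool settings are placeholders.

import java.util.Arrays;
import java.util.List;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;
import redis.clients.util.Hashing;
import redis.clients.util.Sharded;

public class ShardingDemo {
    public static void main(String[] args) {
        // Shard list: the client, not the server, decides which node owns which key.
        List<JedisShardInfo> shards = Arrays.asList(
                new JedisShardInfo("127.0.0.1", 6379),
                new JedisShardInfo("127.0.0.1", 6380));

        // MURMUR_HASH plus the default key tag pattern ({...}) match the behaviour described above.
        ShardedJedisPool pool = new ShardedJedisPool(new JedisPoolConfig(), shards,
                Hashing.MURMUR_HASH, Sharded.DEFAULT_KEY_TAG_PATTERN);

        try (ShardedJedis jedis = pool.getResource()) {
            // Keys sharing the tag "user:42" hash to the same shard, avoiding cross-node access.
            jedis.set("{user:42}.name", "alice");
            jedis.set("{user:42}.cart", "book,pen");
            System.out.println(jedis.get("{user:42}.name"));
        }
        pool.close();
    }
}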

What happens when nodes are added or removed?

Redis Sharding uses a client-side sharding approach: on the server side, each Redis instance is a relatively independent node, and no additional intermediate components are needed. This makes it a very lightweight and flexible multi-instance Redis clustering scheme.

Of course, this lightweight, flexible approach inevitably sacrifices other clustering capabilities. For example, when you add a Redis node, some keys will be mapped to different Redis nodes than before, even though consistent hashing is used.

When we need to scale out, we add machines to the shard list. At that point, a key that the client previously computed as living on one machine may now be computed as living on another, so when we try to read a value it may appear to be missing.

The common practice in this situation is to reload the data from the backend database, but in some scenarios this punches through the cache layer and hits the database directly, which puts great pressure on the system.

The Redis author offers an approach for this: presharding.

Presharding is a method for online scaling. The idea is to run several Redis instances, on different ports, on each physical machine. With three physical machines each running three Redis instances, the shard list actually contains 9 Redis instances. When we need to scale out and add a physical machine, the steps are as follows (a Jedis-based sketch of the procedure appears after the list):

1. Run redis-server on the new physical machine.

2. Make this redis-server a slave of (slaveof) one of the redis-servers in the shard list (call it RedisA).

3. After master-slave replication completes, change RedisA's IP and port in the client's shard list to the IP and port of the redis-server on the new physical machine.

4. Stop RedisA.

This amounts to transferring one redis-server to a new machine. However, it relies on Redis's own replication: if the master's snapshot file is very large, replication will take a long time and will also put pressure on the master Redis, so it is best to perform this split during off-peak hours.
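The following is a rough Java sketch of steps 2-4 using plain Jedis calls, purely to make the procedure concrete. The hosts, ports and replication-status polling are assumptions; updating the client shard list (step 3) is application-specific and is only indicated by a comment; and the explicit slaveofNoOne promotion is an extra detail not spelled out in the list above.

import redis.clients.jedis.Jedis;

public class PreshardingMigration {
    public static void main(String[] args) throws InterruptedException {
        // RedisA: the existing shard we want to move; newNode: the redis-server on the new machine.
        String oldHost = "10.0.0.1"; int oldPort = 6379;   // placeholder addresses
        String newHost = "10.0.0.9"; int newPort = 6379;

        try (Jedis newNode = new Jedis(newHost, newPort)) {
            // Step 2: make the new instance replicate RedisA.
            newNode.slaveof(oldHost, oldPort);

            // Wait until the initial sync has finished (poll INFO replication).
            while (!newNode.info("replication").contains("master_link_status:up")) {
                Thread.sleep(1000);
            }

            // Step 3: update the client shard list here, replacing RedisA's
            // address with the new node's address (application-specific).

            // Promote the new node so it accepts writes on its own;
            // RedisA can then be shut down (step 4).
            newNode.slaveofNoOne();
        }
    }
}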

In the event of a node failure

It is not only adding and removing Redis nodes that can cause key-value loss; a bigger problem is a Redis node suddenly going down.

To avoid hurting Redis performance, we try not to enable the AOF and RDB persistence options, so instead we need a master-standby architecture: when the master Redis goes down, the standby Redis still holds the data, and nothing is lost.

The sharding architecture thus evolves as follows:

In this architecture, each Redis shard consists of a master Redis and a standby Redis; together the master and standby make up one logical Redis node, and automatic failover guarantees the high availability of each node.

Redis Sentinel

Redis Sentinel provides monitoring and failover for the master-standby pair, achieving high availability for the system.
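As a rough illustration (not from the original article), a Sentinel-aware Jedis client can be set up along these lines, assuming a monitored master named "mymaster" and placeholder Sentinel addresses:

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelDemo {
    public static void main(String[] args) {
        // Sentinel addresses (host:port); at least three Sentinels are usually deployed.
        Set<String> sentinels = new HashSet<>();
        sentinels.add("10.0.0.11:26379");
        sentinels.add("10.0.0.12:26379");
        sentinels.add("10.0.0.13:26379");

        // The pool asks the Sentinels who the current master of "mymaster" is,
        // and reconnects to the new master after a failover.
        JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);

        try (Jedis jedis = pool.getResource()) {
            jedis.set("counter", "1");
        }
        pool.close();
    }
}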

Read/write separation

At peak access times, even with sharding, a single node may still face heavy access pressure, and we need to decompose the load further.

Often reads greatly outnumber writes, so we can separate reads and writes and provide more instances for reading. In a master-slave setup the master handles writes and the slaves handle reads only, with several slaves hanging off one master. Under Redis Sentinel monitoring, node failures can also be handled automatically.
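A naive illustration of this split in Jedis follows (not from the original article): in practice the read endpoint would be chosen by a load balancer or a smarter client, and the addresses here are placeholders.

import redis.clients.jedis.Jedis;

public class ReadWriteSplitDemo {
    public static void main(String[] args) {
        // Writes go to the master, reads to one of the replicas.
        try (Jedis master = new Jedis("10.0.0.1", 6379);
             Jedis replica = new Jedis("10.0.0.2", 6379)) {

            master.set("page:home:views", "100");

            // Replication is asynchronous, so a replica read may briefly lag the master.
            System.out.println(replica.get("page:home:views"));
        }
    }
}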

Scenario 3: Using proxy middleware to implement a large-scale Redis cluster

The sections above described Redis Sharding, which shards on the client, and Redis Cluster, which shards on the server.

Client-side sharding technology

Advantages: the Redis instances on the server side are independent of one another and unaware of each other, which makes linear scaling very easy and keeps the system flexible.

Drawbacks:

1. Because the sharding logic lives in the client, scaling out brings operational and maintenance challenges.

2. When the topology of the server-side Redis instance group changes, every client needs to be updated and adjusted.

3. Connections cannot be shared across clients, so as the application scale grows, the waste of connection resources constrains optimization.

Server-side sharding technology

Advantages: when the server-side cluster topology changes, the client does not need to be aware of it; the client uses Redis Cluster just like a single Redis server, and operations and management are more convenient.

Drawbacks: Redis Cluster has not been released for long, and its stability and performance still need time to prove themselves, especially in large-scale deployments.

Can the advantages of both be combined? Can each server-side instance remain independent and linearly scalable, while sharding is handled centrally for convenient unified management?

Middleware sharding technology

Twemproxy is a middleware sharding technology. It sits between the client and the server: the client sends a request, Twemproxy does some processing (such as sharding) and then forwards the request to the real backend Redis server.

In other words, the client does not access the Redis servers directly but goes through the Twemproxy proxy middleware.

Twemproxy's internal processing is stateless. It originated at Twitter and supports not only Redis but also memcached.

As middleware, Twemproxy can pool connections to the backend systems, reducing the number of connections each backend server has to handle directly. At the same time it provides sharding, supporting horizontal scaling of the backend server cluster, and it makes unified operations and maintenance more convenient.
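From the client's point of view, the proxy looks like a single Redis server. A minimal sketch follows, assuming Twemproxy is listening on localhost on its commonly used example port 22121 (both the host and the port are assumptions about the deployment):

import redis.clients.jedis.Jedis;

public class TwemproxyDemo {
    public static void main(String[] args) {
        // The client speaks the plain Redis protocol to the proxy; Twemproxy decides
        // which backend Redis instance actually stores the key.
        try (Jedis proxy = new Jedis("127.0.0.1", 22121)) {
            proxy.set("session:abc", "42");
            System.out.println(proxy.get("session:abc"));
        }
    }
}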

Of course, because a middleware layer is introduced, performance inevitably suffers compared with the client talking to the server directly; the loss is roughly 20%.

Another well-known implementation is Codis, developed by the Wandoujia (Pea Pod) team. Interested readers can look up the relevant material.

Summary: how to choose among the options

The three clustering schemes above are distinguished mainly by where the sharding is performed:

1. Sharding on the server side

The official Redis Cluster works this way: when a client request arrives at the wrong node, that node does not execute the request on the correct node's behalf; instead it redirects the client to the correct node.

2. Sharding on the client side

The partitioning logic is implemented on the client, and the client chooses which node to send each request to. The scheme can be based on consistent hashing; this is generally how memcached-based cache clusters work, and it is typically suitable when you have full control over the client's behaviour.

3. Sharding in middleware

The famous example is Twitter's Twemproxy, which the Redis author has rated highly; see his older blog post "Twemproxy, a Redis proxy from Twitter".

Another well-known implementation is Codis, developed by the Wandoujia (Pea Pod) team, whose author @goroutine Liu recommended it in an earlier answer.

So, how do you choose?

Clearly, sharding on the client gives you the most room for customization.

The advantage is that the round-trip time per request is relatively small, and with a good client-side sharding policy the whole cluster scales well, without requiring any cooperation from the server side and without an intermediate layer.

The disadvantages are also obvious: users must handle Redis node failures themselves, adopt a fairly complex strategy for replicas, and make sure every client sees a consistent "view" of the cluster.

The middleware scheme places the lowest demands on the client: the client only needs to speak the basic Redis protocol and does not have to worry about the mechanics of scaling, multiple replicas or master-slave switching, which is why this scheme is also well suited to offering a "cache service".

The official solution, for its part, also fully supports sharding and multiple replicas. Compared with the various proxies, it assumes that the client implementation can "cooperate" with the server; in practice, mainstream language SDKs already provide this support.

So for most usage scenarios, both the official solution and the proxy solutions are sufficient. There is really no need to agonize over the choice: each option has plenty of companies relying on it in production.

This article mainly refers to the following articles:

https://www.zhihu.com/question/21419897

http://blog.csdn.net/freebird_lb/article/details/7778999

Produced by Scholar Kun Kun.

When reprinting, please indicate the source: http://www.cnblogs.com/xckk/p/6134655.html
