Challenging Kafka: an early look at Stream, the heavyweight new feature of Redis 5.0


Introduction: The headline addition in Redis 5.0 is support for Stream, which gives architects a new option for message queuing and is a particular boon for Redis fans. So what is special about Redis Stream? How does it compare with Kafka? And how do you use it well? The author, Lao Qian, has studied these questions in depth, and the article rewards a careful read.

Author profile: Qian Wenping (Lao Qian) is a ten-year veteran of Internet and distributed high-concurrency technology, currently a senior back-end engineer at Zhangyue Technology. He is proficient in Java, Python, Golang and other languages, and has built games, websites, a message push system, MySQL middleware, and open-source ORM, web and RPC frameworks.

The author of Redis recently released Redis 5.0, which adds many new features. The biggest of them is a new data structure, Stream: a powerful, persistent message queue with multicast support. The author admits that Redis Stream borrows heavily from Kafka's design.

Structurally, a Redis Stream is a list that strings together every message that has been appended; each message has a unique ID and its own content. Messages are persistent, so the content is still there after Redis restarts.

Each Stream has a unique name, which is simply the Redis key; it is created automatically the first time the xadd command appends a message to it.

Each Stream can have multiple consumer groups attached. Every consumer group maintains a cursor, last_delivered_id, over the stream, marking how far that group has consumed. Each consumer group has a name that is unique within the Stream. Consumer groups are not created automatically: they require a separate command, xgroup create, and you must specify the message ID from which the group should start consuming; that ID initializes the last_delivered_id variable.

The state of each consumer group (Consumer Group) is independent; groups do not affect one another. This means that the same message in a Stream is consumed by every consumer group.

A consumer group (Consumer Group) can in turn have multiple consumers (Consumer) attached. Consumers within a group compete with each other: any consumer that reads a message moves the cursor last_delivered_id forward. Each consumer has a name that is unique within its group.

Each consumer has a state variable, pending_ids, that records the messages that have been delivered to the client but not yet acknowledged. If the client never ACKs, the message IDs in this variable keep accumulating; as soon as a message is ACKed, its ID is removed. Redis officially calls this pending_ids structure the PEL (Pending Entries List). It is a very central data structure: it guarantees that the client consumes each message at least once, so that a message is never lost mid-transmission without being processed.
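To make the last_delivered_id and PEL mechanics concrete, here is a minimal sketch of that read-process-ack cycle. It assumes the redis-py client, and the names mystream, mygroup and c1 are purely illustrative; the article itself only uses redis-cli.

import redis

r = redis.Redis(decode_responses=True)

# A consumer group is never created implicitly; it needs xgroup create.
r.xadd("mystream", {"name": "laoqian", "age": "30"})
try:
    r.xgroup_create("mystream", "mygroup", id="0-0")   # start consuming from the very beginning
except redis.exceptions.ResponseError:
    pass   # the group already exists

# Reading with ">" advances the group's last_delivered_id and places the
# delivered IDs into consumer c1's PEL until they are acked.
for stream, entries in r.xreadgroup("mygroup", "c1", {"mystream": ">"}, count=10):
    for msg_id, fields in entries:
        print("processing", msg_id, fields)      # stand-in for real processing
        r.xack("mystream", "mygroup", msg_id)    # ack removes the ID from the PEL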

Message ID

The message ID has the form timestampInMillis-sequence, for example 1527846880572-5, which means the message was generated at millisecond timestamp 1527846880572 and is the 5th message generated within that millisecond. A message ID can be generated automatically by the server or specified by the client, but it must have the integer-integer form, and the ID of a newly appended message must be greater than the ID of the previous one.
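A quick sketch of the ID rules, again assuming redis-py (the key name ids-demo is made up): an explicitly supplied ID is accepted only if it is greater than the Stream's current last ID.

import redis

r = redis.Redis(decode_responses=True)
r.delete("ids-demo")

print(r.xadd("ids-demo", {"k": "v"}, id="1527846880572-5"))   # explicit integer-integer ID
print(r.xadd("ids-demo", {"k": "v"}))                         # default id="*": the server generates the next ID
try:
    r.xadd("ids-demo", {"k": "v"}, id="1527846880572-4")      # not greater than the last ID: rejected
except redis.exceptions.ResponseError as err:
    print(err)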

Message content

The message content is a set of key-value pairs, just like the pairs in a hash structure; there is nothing special about it.

Adding, reading and deleting

xadd: append a message

xdel: delete a message; this only sets a flag bit and does not affect the total stream length

xrange: fetch a range of messages, automatically filtering out deleted ones

xlen: get the length of the stream

del: delete the entire Stream

# * means the server auto-generates the message ID, followed by a list of key-value pairs
127.0.0.1:6379> xadd codehole * name laoqian age 30   # name is laoqian, age is 30
1527849609889-0   # the generated message ID
127.0.0.1:6379> xadd codehole * name xiaoyu age 29
1527849629172-0
127.0.0.1:6379> xadd codehole * name xiaoqian age 1
1527849637634-0
127.0.0.1:6379> xlen codehole
(integer) 3
127.0.0.1:6379> xrange codehole - +   # - means the smallest ID, + means the largest ID
1) 1) 1527849609889-0
   2) 1) "name"
      2) "laoqian"
      3) "age"
      4) "30"
2) 1) 1527849629172-0
   2) 1) "name"
      2) "xiaoyu"
      3) "age"
      4) "29"
3) 1) 1527849637634-0
   2) 1) "name"
      2) "xiaoqian"
      3) "age"
      4) "1"
127.0.0.1:6379> xrange codehole 1527849629172-0 +   # specify the minimum message ID
1) 1) 1527849629172-0
   2) 1) "name"
      2) "xiaoyu"
      3) "age"
      4) "29"
2) 1) 1527849637634-0
   2) 1) "name"
      2) "xiaoqian"
      3) "age"
      4) "1"
127.0.0.1:6379> xrange codehole - 1527849629172-0   # specify the maximum message ID
1) 1) 1527849609889-0
   2) 1) "name"
      2) "laoqian"
      3) "age"
      4) "30"
2) 1) 1527849629172-0
   2) 1) "name"
      2) "xiaoyu"
      3) "age"
      4) "29"
127.0.0.1:6379> xdel codehole 1527849609889-0
(integer) 1
127.0.0.1:6379> xlen codehole   # the length is not affected
(integer) 3
127.0.0.1:6379> xrange codehole - +   # the deleted message is gone
1) 1) 1527849629172-0
   2) 1) "name"
      2) "xiaoyu"
      3) "age"
      4) "29"
2) 1) 1527849637634-0
   2) 1) "name"
      2) "xiaoqian"
      3) "age"
      4) "1"
127.0.0.1:6379> del codehole   # delete the entire Stream
(integer) 1

Independent consumption

We can consume a Stream's messages independently, without defining any consumer group, and even block waiting when the Stream has no new messages. Redis provides a dedicated command for this, xread, which lets a Stream be used as a plain message queue (list). When using xread, we can completely ignore the existence of consumer groups, as if the Stream were an ordinary list.

# read two messages from the head of the stream
127.0.0.1:6379> xread count 2 streams codehole 0-0
1) 1) "codehole"
   2) 1) 1) 1527851486781-0
         2) 1) "name"
            2) "laoqian"
            3) "age"
            4) "30"
      2) 1) 1527851493405-0
         2) 1) "name"
            2) "yurui"
            3) "age"
            4) "29"
# read one message from the tail of the stream; unsurprisingly, nothing is returned
127.0.0.1:6379> xread count 1 streams codehole $
(nil)
# block at the tail waiting for new messages; the command below blocks until a new message arrives
127.0.0.1:6379> xread block 0 count 1 streams codehole $
# open a new window and push a message into the stream from there
127.0.0.1:6379> xadd codehole * name youming age 60
1527852774092-0
# switch back to the first window: the block is released and the new message content is returned
# the elapsed waiting time is also shown; here we waited about 93s
127.0.0.1:6379> xread block 0 count 1 streams codehole $
1) 1) "codehole"
   2) 1) 1) 1527852774092-0
         2) 1) "name"
            2) "youming"
            3) "age"
            4) "60"
(93.11s)

If a client wants to use xread for sequential consumption, it must remember where it has consumed up to, i.e. the ID of the last message returned. The next time it calls xread, it passes that last message ID as the parameter and continues consuming the messages that follow.

block 0 means block forever until a message arrives; block 1000 means block for 1s, and if no message arrives within 1s, nil is returned.

127.0.0.1:6379> xread block 1000 count 1 streams codehole $
(nil)
(1.07s)
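Putting the last two points together, an independent consumer loop might look like the following sketch, assuming the redis-py client; the stream name codehole matches the examples above, everything else is illustrative.

import redis

r = redis.Redis(decode_responses=True)

last_id = "0-0"   # start from the beginning; use "$" if you only care about new messages
while True:
    # block for at most 1000 ms; an empty reply means nothing arrived in time
    reply = r.xread({"codehole": last_id}, count=10, block=1000)
    if not reply:
        continue
    for stream, entries in reply:
        for msg_id, fields in entries:
            print(msg_id, fields)   # process the message
            last_id = msg_id        # remember where we are for the next xread call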


Create a consumer group

A consumer group (Consumer Group) is created with the xgroup create command, which must be given a start message ID to initialize the last_delivered_id variable.

127.0.0.1:6379> xgroup create codehole cg1 0-0   # 0-0 means consume from the very beginning
OK
# $ means consume from the tail: only new messages are accepted, all current messages in the stream are ignored
127.0.0.1:6379> xgroup create codehole cg2 $
OK
127.0.0.1:6379> xinfo stream codehole   # get information about the stream
 1) length
 2) (integer) 3   # 3 messages in total
 3) radix-tree-keys
 4) (integer) 1
 5) radix-tree-nodes
 6) (integer) 2
 7) groups
 8) (integer) 2   # two consumer groups
 9) first-entry   # first message
10) 1) 1527851486781-0
    2) 1) "name"
       2) "laoqian"
       3) "age"
       4) "30"
11) last-entry   # last message
12) 1) 1527851498956-0
    2) 1) "name"
       2) "xiaoqian"
       3) "age"
       4) "1"
127.0.0.1:6379> xinfo groups codehole   # get information about the stream's consumer groups
1) 1) name
   2) "cg1"
   3) consumers
   4) (integer) 0   # the consumer group has no consumers yet
   5) pending
   6) (integer) 0   # the consumer group has no pending (un-acked) messages
2) 1) name
   2) "cg2"
   3) consumers
   4) (integer) 0   # the consumer group has no consumers yet
   5) pending
   6) (integer) 0   # the consumer group has no pending (un-acked) messages

Consumption

Stream provides the xreadgroup command for consuming within a consumer group. It requires a consumer group name, a consumer name and a start message ID. Like xread, it can also block waiting for new messages. When a new message is read, its ID is added to the consumer's PEL (pending entries list); once the client has finished processing, it notifies the server with the xack command, and the message ID is removed from the PEL.

# > means read from after the consumer group's current last_delivered_id
# every time a consumer reads a message, the last_delivered_id variable moves forward
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 1 streams codehole >
1) 1) "codehole"
   2) 1) 1) 1527851486781-0
         2) 1) "name"
            2) "laoqian"
            3) "age"
            4) "30"
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 1 streams codehole >
1) 1) "codehole"
   2) 1) 1) 1527851493405-0
         2) 1) "name"
            2) "yurui"
            3) "age"
            4) "29"
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 2 streams codehole >
1) 1) "codehole"
   2) 1) 1) 1527851498956-0
         2) 1) "name"
            2) "xiaoqian"
            3) "age"
            4) "1"
      2) 1) 1527852774092-0
         2) 1) "name"
            2) "youming"
            3) "age"
            4) "60"
# keep reading: there are no new messages left
127.0.0.1:6379> xreadgroup GROUP cg1 c1 count 1 streams codehole >
(nil)
# so block and wait
127.0.0.1:6379> xreadgroup GROUP cg1 c1 block 0 count 1 streams codehole >
# open another window and push in a message
127.0.0.1:6379> xadd codehole * name lanying age 61
1527854062442-0
# go back to the previous window: the block is released and the new message is received
127.0.0.1:6379> xreadgroup GROUP cg1 c1 block 0 count 1 streams codehole >
1) 1) "codehole"
   2) 1) 1) 1527854062442-0
         2) 1) "name"
            2) "lanying"
            3) "age"
            4) "61"
(36.54s)

127.0.0.1:6379> xinfo groups codehole   # look at the consumer group info
1) 1) name
   2) "cg1"
   3) consumers
   4) (integer) 1   # one consumer
   5) pending
   6) (integer) 5   # 5 messages are being processed without an ack
2) 1) name
   2) "cg2"
   3) consumers
   4) (integer) 0   # consumer group cg2 is unchanged, because we have only been operating on cg1
   5) pending
   6) (integer) 0

# if there are multiple consumers in the same group, we can inspect each consumer's state with the xinfo consumers command
127.0.0.1:6379> xinfo consumers codehole cg1   # there is currently 1 consumer
1) 1) name
   2) "c1"
   3) pending
   4) (integer) 5   # 5 messages in total waiting to be acked
   5) idle
   6) (integer) 418715   # how long (in ms) it has been idle without reading a message

# now ack one message
127.0.0.1:6379> xack codehole cg1 1527851486781-0
(integer) 1
127.0.0.1:6379> xinfo consumers codehole cg1
1) 1) name
   2) "c1"
   3) pending
   4) (integer) 4   # dropped from 5 to 4
   5) idle
   6) (integer) 668504
# ack all the remaining messages
127.0.0.1:6379> xack codehole cg1 1527851493405-0 1527851498956-0 1527852774092-0 1527854062442-0
(integer) 4
127.0.0.1:6379> xinfo consumers codehole cg1
1) 1) name
   2) "c1"
   3) pending
   4) (integer) 0   # the PEL is empty
   5) idle
   6) (integer) 745505

What to do when a Stream holds too many messages

A reader may well wonder: if messages keep piling up, won't the Stream's list grow longer and longer, and won't its contents eventually blow up? After all, the xdel command does not actually remove a message; it only sets a flag bit on it.

Redis naturally takes this into account and provides capped streams. The xadd command accepts a maxlen argument that evicts the oldest messages, guaranteeing the stream never exceeds the specified length.

127.0.0.1:6379> xlen codehole
(integer) 5
127.0.0.1:6379> xadd codehole maxlen 3 * name xiaorui age 1
1527855160273-0
127.0.0.1:6379> xlen codehole
(integer) 3

As you can see, the Stream's length was trimmed down.
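The same cap can be requested from a client library; here is a small sketch assuming redis-py, where approximate=False asks for an exact MAXLEN rather than the approximate ~ form.

import redis

r = redis.Redis(decode_responses=True)

# append while capping the stream at 3 entries; the oldest entries are evicted
r.xadd("codehole", {"name": "xiaorui", "age": "1"}, maxlen=3, approximate=False)
print(r.xlen("codehole"))   # 3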

What happens if you forget to ACK?

Stream keeps the list of in-flight message IDs in each consumer's PEL. If a consumer receives a message and finishes processing it but never replies with an ACK, the PEL keeps growing; with many consumer groups, the memory occupied by these PELs grows accordingly.
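One way to keep an eye on this, sketched with redis-py (group and consumer names follow the earlier examples), is to poll xpending, which summarizes how many delivered-but-unacked entries the group and each of its consumers are holding.

import redis

r = redis.Redis(decode_responses=True)

summary = r.xpending("codehole", "cg1")
print(summary["pending"])              # total un-acked entries in the group's PEL
for consumer in summary["consumers"]:
    # a per-consumer count that keeps growing is a sign that acks are being forgotten
    print(consumer["name"], consumer["pending"])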

How PEL avoids message loss

When a consumer reads a Stream message, the Redis server sends the reply to the client; if the client's connection drops at that moment, the message would appear to be lost. But the PEL has already recorded the IDs that were sent out, so after the client reconnects it can receive the messages in the PEL again. At this point, however, the start ID for xreadgroup must not be the '>' parameter; it must be a valid message ID, typically 0-0, which means replaying everything in the consumer's PEL. Once those messages are handled and ACKed, the client switches back to '>' to read the new messages after last_delivered_id.
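A sketch of that recovery path with redis-py (stream, group and consumer names follow the earlier examples): on startup the consumer first replays its own PEL by passing an explicit start ID of 0, acking as it goes, and only then switches back to '>' for fresh messages.

import redis

r = redis.Redis(decode_responses=True)

def drain(start_id):
    # read from start_id for consumer c1 of group cg1, process and ack; return how many were handled
    handled = 0
    for stream, entries in r.xreadgroup("cg1", "c1", {"codehole": start_id}, count=100):
        for msg_id, fields in entries:
            print("processing", msg_id, fields)
            r.xack("codehole", "cg1", msg_id)
            handled += 1
    return handled

while drain("0"):   # an explicit ID replays this consumer's PEL (delivered but never acked)
    pass
drain(">")          # then ">" resumes normal consumption after last_delivered_id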

Conclusion

Stream's consumption model borrows the idea of consumer groups from Kafka and makes up for the defect that Redis Pub/Sub cannot persist messages. But it differs from Kafka: a Kafka topic can be split into partitions, while a Stream cannot. If you want partitioning, you have to do it on the client side: create several Streams with different names and hash each message to decide which Stream it goes to. Readers who have looked into Disque, another open-source project by the Redis author, may suspect that he felt Disque was not active enough and therefore ported its ideas into Redis. That is only my guess and not necessarily the author's original intention; if you see it differently, feel free to join the discussion in the comments.
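If Kafka-style partitioning is really needed, the client-side hashing described above might look like this sketch, assuming redis-py; the partition count and the codehole-{n} naming scheme are my own illustrative choices.

import redis
import zlib

r = redis.Redis(decode_responses=True)
PARTITIONS = 4   # number of "partitions", i.e. separate streams

def partitioned_xadd(routing_key, fields):
    # hash the routing key to pick one of the N streams, Kafka-partition style
    slot = zlib.crc32(routing_key.encode()) % PARTITIONS
    return r.xadd(f"codehole-{slot}", fields)

partitioned_xadd("user:42", {"name": "laoqian", "age": "30"})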

Related reading:

Weibo's six years of Redis optimization: supporting trillions of visits with minimal machines

First look: 360 open-sources its Redis-like storage system, Pika

Redis in practice: how to build a billion-scale social platform at Weibo

Codis author Huang Dongxu on the design of a distributed Redis architecture and the pitfalls along the way

Anti-avalanche design for a Redis architecture: the art of war behind a website that never goes down

Tongcheng Travel's cache system design: how to build a near-perfect system in the Redis era (PPT included)

Original source: https://mp.weixin.qq.com/s/UUhP_I2wCqUeZV2SaUJm5A
