ENode framework step by step: design concept of the message queue


ENode framework step by step article series index:

    1. ENode framework step by step
    2. ENode framework step by step: how the idea of event-driven architecture (EDA) is embodied in the framework
    3. ENode framework step by step: the idea and implementation of saga
    4. ENode framework step by step: the overall goal
    5. ENode framework step by step: physical deployment
    6. ENode framework step by step: Command Service API design ideas
    7. ENode framework step by step: application of staged event-driven architecture

Open Source Address: https://github.com/tangxuehua/enode

In the previous article, we briefly introduced the overall implementation idea of the ENode framework and how it applies the idea of staged event-driven architecture. We learned that ENode has two internal queues: the command queue and the event queue. Commands sent by users enter the command queue; domain events produced by the domain model enter the event queue and are then dispatched to all event handlers. This article describes how these two message queues are designed inside the ENode framework.

First, here is the internal implementation architecture diagram of the ENode framework, to help you follow the analysis below.

What kind of message queue do we need?

ENode was designed for building applications based on DDD + CQRS + EDA within a single process. If our business needs to interact with other systems, we do so inside event handlers, for example by broadcasting messages or calling remote interfaces. Perhaps in the future ENode will also support remote message communication. However, the fact that remote communication is not supported does not mean ENode can only build standalone applications. The ENode framework has three types of data to store:

    1. Messages, including command messages and event messages. They are currently stored in MongoDB for performance reasons; messages are persisted because messages in a message queue must not be lost;
    2. Aggregate roots, which are serialized and stored in a memory cache such as Redis or Memcached;
    3. Events, i.e. the domain events produced by aggregate roots, which are stored in an event store such as MongoDB;

From the above analysis we know that all the data produced while the ENode framework runs is stored in MongoDB and Redis. These two kinds of storage are deployed on their own servers, independent of the web servers, so every web server running the ENode framework is stateless. That makes it easy to cluster the web servers: we can add new web servers at any time as user traffic grows, to improve the system's responsiveness. Of course, when you find that a single MongoDB or Redis server becomes the bottleneck as the number of web servers increases, you can also cluster MongoDB and Redis, or shard the data (neither is trivial; you need to know MongoDB and Redis well), thereby improving their throughput.

The point of the analysis above is mainly to delimit where the ENode framework is meant to be used, which helps a lot in understanding what kind of message queue we actually need.

Now we know that we do not need a distributed message queue at all, i.e. heavyweight, mature products such as MSMQ or RabbitMQ that support remote message transmission. The features we need from our message queue are:

    • It must be an in-memory queue;
    • Although in-memory, messages must not be lost, i.e. messages must support persistence;
    • The queue's performance should be as high as possible;
    • When there is no message in the queue, the queue's consumers must not busy-spin the CPU; a busy-wait loop drives CPU usage straight to 100% and leaves the machine unable to do real work;
    • Multiple consumer threads must be able to fetch messages from the queue at the same time, but each message may be processed by only one consumer, i.e. a message must never be taken by two consumers at once; in short, concurrent dequeuing must be supported;
    • A design is required so that every message is processed at least once. Specifically, if a message is taken by a consumer but not processed successfully (the consumer knows whether processing succeeded), or there is no response at all (say a power failure at that moment), we need a way to get a chance to consume the message again;
    • Because we cannot guarantee 100% that a message is never processed twice, all message consumers should support idempotent operations as far as possible, i.e. processing the same message repeatedly must cause no side effects. For example, checking whether a record already exists before inserting it is one idempotency measure (a minimal sketch follows this list). The framework does its best to provide idempotent logic, but when designing command handlers and event handlers you should also think about idempotency wherever possible. Note: in general you need not worry about command handlers; the main concern is event handlers. I'll explain why in the next article.
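To make that last point concrete, here is a minimal sketch of the check-before-insert measure using plain ADO.NET. OrderCreatedEvent, the OrderView table, and the handler shape are all illustrative assumptions of mine, not actual ENode APIs:

using System;
using System.Data;

public class OrderCreatedEvent
{
    public Guid OrderId { get; set; }
    public decimal Amount { get; set; }
}

public class OrderCreatedEventHandler
{
    private readonly IDbConnection _connection;

    public OrderCreatedEventHandler(IDbConnection connection)
    {
        _connection = connection;
    }

    public void Handle(OrderCreatedEvent evnt)
    {
        using (var command = _connection.CreateCommand())
        {
            // The WHERE NOT EXISTS guard turns a replayed event into a no-op,
            // so handling the same event twice causes no side effect.
            command.CommandText =
                "INSERT INTO OrderView (OrderId, Amount) " +
                "SELECT @OrderId, @Amount " +
                "WHERE NOT EXISTS (SELECT 1 FROM OrderView WHERE OrderId = @OrderId)";

            var orderId = command.CreateParameter();
            orderId.ParameterName = "@OrderId";
            orderId.Value = evnt.OrderId;
            command.Parameters.Add(orderId);

            var amount = command.CreateParameter();
            amount.ParameterName = "@Amount";
            amount.Value = evnt.Amount;
            command.Parameters.Add(amount);

            command.ExecuteNonQuery();
        }
    }
}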
Design of the in-memory queue

An in-memory queue's key feature is speed. But we need more than speed: we also need concurrent enqueue and dequeue. ConcurrentQueue<T> appears to meet those requirements: its performance is good, and it supports concurrent operations out of the box. However, we also want the queue's consumers not to busy-spin the CPU when the queue is empty, since a busy-wait loop drives CPU usage to 100% and makes the machine unusable. Fortunately, .NET has a collection that supports exactly this: BlockingCollection<T>, which blocks the current thread when the queue has no elements. We can instantiate such a queue as follows:

private BlockingCollection<T> _queue = new BlockingCollection<T>(new ConcurrentQueue<T>());

 

To enqueue concurrently, we only need to write the following code:

 
_queue.Add(message);

 

To dequeue concurrently, we only need:

 
var message = _queue.Take();

 

As we can see, ConcurrentQueue<T> provides the queue with concurrent access, while BlockingCollection<T> adds thread-blocking behavior on top of it.

Isn't that simple? In my tests the performance of BlockingCollection<T> is very good; enqueueing 100,000 messages per second is certainly no problem, so don't worry about it becoming a bottleneck.
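Here is a self-contained demo (my own illustration, not ENode code) of the two properties we care about: GetConsumingEnumerable blocks instead of spinning when the queue is empty, and each message is handed to exactly one consumer:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BlockingQueueDemo
{
    static void Main()
    {
        var queue = new BlockingCollection<string>(new ConcurrentQueue<string>());

        // Four consumers share one queue. GetConsumingEnumerable blocks while
        // the queue is empty (no CPU spinning) and guarantees each message is
        // dequeued by exactly one consumer.
        var consumers = new Task[4];
        for (int i = 0; i < consumers.Length; i++)
        {
            int workerId = i;
            consumers[workerId] = Task.Run(() =>
            {
                foreach (var message in queue.GetConsumingEnumerable())
                {
                    Console.WriteLine("worker {0} handled {1}", workerId, message);
                }
            });
        }

        for (int n = 0; n < 10; n++)
        {
            queue.Add("message-" + n);
        }

        queue.CompleteAdding(); // consumers exit once the queue drains
        Task.WaitAll(consumers);
    }
}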

Research on the Disruptor:

If you have heard of the LMAX architecture, you should have heard of the Disruptor. The LMAX architecture processes millions of orders per second on a single thread. Astonishing speed, right? Look it up if you are interested. The LMAX architecture is a fully in-memory architecture: all business logic is implemented in pure memory. The coarse-grained architecture diagram is as follows:

    1. The Business Logic Processor (BLP for short) runs entirely in memory;
    2. The input Disruptor is a special memory-based circular queue (built on a ring buffer data structure) that receives messages and feeds them to the BLP for processing (a toy sketch of the ring-buffer idea follows this list);
    3. The output Disruptor is the same kind of queue; it is responsible for publishing the events produced by the BLP to external components for consumption. After consuming them, those components may produce new messages, which are inserted back into the input Disruptor;
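The real Disruptor adds sequence barriers, batching, and cache-line padding, but to give a feel for the underlying ring-buffer idea, here is a toy single-producer/single-consumer ring buffer of my own; it is in no way the actual Disruptor implementation:

using System.Threading;

// Toy SPSC ring buffer: the producer only writes _tail, the consumer only
// writes _head, so a fixed array plus two sequence counters is enough.
public class ToyRingBuffer<T>
{
    private readonly T[] _entries;
    private readonly int _mask;   // capacity must be a power of two
    private long _head;           // next slot the consumer will read
    private long _tail;           // next slot the producer will write

    public ToyRingBuffer(int capacityPowerOfTwo)
    {
        _entries = new T[capacityPowerOfTwo];
        _mask = capacityPowerOfTwo - 1;
    }

    public bool TryPublish(T item)
    {
        // Full when the producer is a whole lap ahead of the consumer.
        if (_tail - Volatile.Read(ref _head) >= _entries.Length)
            return false;
        _entries[_tail & _mask] = item;
        Volatile.Write(ref _tail, _tail + 1); // release: publish the slot
        return true;
    }

    public bool TryConsume(out T item)
    {
        if (_head >= Volatile.Read(ref _tail)) // acquire: see published slots
        {
            item = default(T);
            return false;
        }
        item = _entries[_head & _mask];
        Volatile.Write(ref _head, _head + 1); // free the slot for the producer
        return true;
    }
}

Note that nothing here ever takes a lock or allocates on the hot path, which is roughly where the Disruptor's nanosecond-level latency comes from.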

The LMAX architecture is so fast not only because of its in-memory design, but also thanks to the Disruptor, a queue component with nanosecond-level latency. The following figure compares the latency of the Disruptor with Java's ArrayBlockingQueue:

(ns stands for nanoseconds.) The data shows the Disruptor's latency is lower than ArrayBlockingQueue's by more than an order of magnitude. No wonder the LMAX architecture caused a sensation when it came out. I was curious about this architecture for a while, but I did not dare to try it rashly because I had not thought the details through.

From the analysis above, the Disruptor is also a queue and could completely replace BlockingCollection. But since BlockingCollection already meets our needs and will not become a bottleneck for now, I have not used the Disruptor to implement our in-memory queue. For more on the LMAX architecture, you can also read an earlier article I wrote.

Persistence of queue messages

We need not only a high-performance, concurrent in-memory queue, but also persistence of queue messages, so that we can guarantee messages are never lost and are therefore processed at least once.

When should a message be persisted?

Once sending a message to the queue succeeds, we naturally expect that the message can no longer be lost. So, obviously, the message queue must persist the message before the enqueue call returns.

How can we achieve efficient persistence?

The first idea:

Sequential writes to plain text files. The principle: when a message is enqueued, serialize it to text and append it to a file, txt1; once the message has been processed, append it to another file, txt2. As long as the machine is not restarted, the messages currently in the in-memory queue are exactly the unprocessed ones. If the machine is restarted, how do we know which messages were never processed? Easy: compare the two files. Any message present in txt1 but absent from txt2 is considered unprocessed; when the ENode framework starts, it reads the unprocessed message text from txt1, deserializes it into message objects, reloads them into the in-memory queue, and begins processing. The idea is actually quite good, and its key strength is performance: sequential writes to a text file are very fast. In my tests, sequentially appending plain-text lines was extremely fast, so in principle we could persist a huge number of messages per second. Of course we could never actually reach that rate, because message serialization cannot keep up; the bottleneck is serialization.

However, this approach leaves many details hard to solve. The txt files keep growing; what do we do then? Plain files are hard to manage and maintain; what if one is accidentally deleted? And how exactly do we compare the two files? Line by line? No, because the dequeue order is not necessarily the enqueue order. For example, a user sends a command into the queue, its first execution fails because of a concurrency conflict, and the command is retried; if the retry succeeds, its completion record is persisted, but by then its position may already be behind commands that entered the queue later. So we cannot compare by line. Compare by message id, then? Even if that works, the comparison takes time: suppose txt1 holds tens of millions of messages and txt2 nearly as many; what algorithm can efficiently determine which messages in txt1 were never processed (one straightforward algorithm is sketched after this paragraph)? Clearly this idea still has many details to think through.
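For illustration only (ENode does not actually take this route), a straightforward algorithm for that comparison is to load the processed ids into a hash set and stream through txt1, assuming a hypothetical "id|payload" line format:

using System.Collections.Generic;
using System.IO;
using System.Linq;

static class MessageLogRecovery
{
    // Returns the lines of txt1 (enqueued messages) whose ids never made it
    // into txt2 (processed messages); compares by message id, not line order.
    public static List<string> FindUnprocessedLines(string enqueuedFile, string processedFile)
    {
        // One pass over txt2: collect the processed ids for O(1) lookups.
        var processedIds = new HashSet<string>(
            File.ReadLines(processedFile).Select(line => line.Split('|')[0]));

        // One pass over txt1: keep every message whose id was never completed.
        return File.ReadLines(enqueuedFile)
                   .Where(line => !processedIds.Contains(line.Split('|')[0]))
                   .ToList();
    }
}

This runs in linear time, but with tens of millions of messages the id set alone occupies a lot of memory, which only underlines how many details the text-file approach would need to get right.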

The second idea:

Use NoSQL to store the messages. After some thought and comparison, I felt MongoDB was the most suitable. On one hand, MongoDB serves all reads and writes from memory first, i.e. data is not immediately persisted to disk, so it is fast; on the other hand, MongoDB supports reliable persistence, so it can safely be used to persist messages. Its performance is not as good as appending to a text file, but it is basically acceptable. After all, we do not push every command from every user of the whole site into one queue: if the site has many users we will certainly use a web server cluster, and each machine in the cluster will have more than one command queue. We can therefore control how many messages pass through a single command queue, and the messages of different command queues are stored in different MongoDB collections. Of course, the persistence bottleneck is always disk I/O. To be really fast, each collection, holding the messages of a single command queue, could be placed on its own dedicated MongoDB server, with the other command queues' messages stored on other MongoDB servers. The I/O is then parallelized, which fundamentally improves persistence speed. The cost, however, is high: the whole system might need as many MongoDB machines as it has queues. All in all, on the persistence side there are approaches to try, and room for optimization.

Coming back, let me briefly describe the implementation idea of persisting messages with MongoDB: persist a message when it is enqueued, and delete it when its processing completes. That way, after a machine restart, discovering the unprocessed messages of a queue takes only a simple query returning the messages still present in that queue's MongoDB collection. This approach is simple and reliable, and its performance should be acceptable, so ENode uses it to persist the messages of all the in-memory queues it uses.
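To make that contract concrete, here is what such a message store might look like with the current official MongoDB C# driver, using one collection per queue. This is my own sketch, not ENode's actual IMessageStore implementation:

using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Driver;

public class MongoMessageStore
{
    private readonly IMongoDatabase _database;

    public MongoMessageStore(string connectionString, string databaseName)
    {
        _database = new MongoClient(connectionString).GetDatabase(databaseName);
    }

    // Called on enqueue, before Enqueue returns to the caller.
    public void AddMessage(string queueName, BsonDocument message)
    {
        _database.GetCollection<BsonDocument>(queueName).InsertOne(message);
    }

    // Called when processing of the message completes successfully.
    public void RemoveMessage(string queueName, string messageId)
    {
        _database.GetCollection<BsonDocument>(queueName)
                 .DeleteOne(Builders<BsonDocument>.Filter.Eq("_id", messageId));
    }

    // Called on restart: whatever is still in the collection is exactly the
    // set of unprocessed messages, so recovery is a single query.
    public List<BsonDocument> GetMessages(string queueName)
    {
        return _database.GetCollection<BsonDocument>(queueName)
                        .Find(FilterDefinition<BsonDocument>.Empty)
                        .ToList();
    }
}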

Here is ENode's queue base class itself; take a look if you are interested:

public abstract class QueueBase<T> : IQueue<T> where T : class, IMessage
{
    #region Private Variables

    private IMessageStore _messageStore;
    private BlockingCollection<T> _queue = new BlockingCollection<T>(new ConcurrentQueue<T>());
    private ReaderWriterLockSlim _enqueueLocker = new ReaderWriterLockSlim();
    private ReaderWriterLockSlim _dequeueLocker = new ReaderWriterLockSlim();

    #endregion

    public string Name { get; private set; }
    protected ILogger Logger { get; private set; }

    public QueueBase(string name)
    {
        if (string.IsNullOrEmpty(name))
        {
            throw new ArgumentNullException("name");
        }
        Name = name;
        _messageStore = ObjectContainer.Resolve<IMessageStore>();
        Logger = ObjectContainer.Resolve<ILoggerFactory>().Create(GetType().Name);
    }

    public void Initialize()
    {
        // Reload any persisted (i.e. unprocessed) messages back into memory.
        _messageStore.Initialize(Name);
        var messages = _messageStore.GetMessages<T>(Name);
        foreach (var message in messages)
        {
            _queue.Add(message);
        }
        OnInitialized(messages);
    }

    protected virtual void OnInitialized(IEnumerable<T> initialQueueMessages) { }

    public void Enqueue(T message)
    {
        // Persist first, then enqueue, so an acknowledged message cannot be lost.
        _enqueueLocker.AtomWrite(() =>
        {
            _messageStore.AddMessage(Name, message);
            _queue.Add(message);
        });
    }

    public T Dequeue()
    {
        return _queue.Take();
    }

    public void Complete(T message)
    {
        // Only on successful processing is the message removed from the store.
        _dequeueLocker.AtomWrite(() =>
        {
            _messageStore.RemoveMessage(Name, message);
        });
    }
}
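Assuming the abstractions above, a concrete queue then only needs a name, and restart recovery is just a call to Initialize. The following usage is a hypothetical illustration, not ENode's real command queue:

// Hypothetical concrete queue; ENode's real queues add more on top.
public class CommandQueue : QueueBase<ICommand>
{
    public CommandQueue(string name) : base(name) { }
}

// At startup: reload any unprocessed messages from MongoDB into memory.
var commandQueue = new CommandQueue("CommandQueue1");
commandQueue.Initialize();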

How to ensure a message is processed at least once

The idea is easy to come up with: dequeue the message from the in-memory queue and hand it to a consumer for processing; the consumer then tells us whether the message was processed successfully, and if not, it is retried. If it still fails after several retries, we cannot simply discard the message, but neither can we keep retrying it inline forever; instead, we throw it into another purely in-memory local queue dedicated to retries. Once a message is processed successfully, it is deleted from the persistent store. Let's look at the code:

private void ProcessMessage(TMessageExecutor messageExecutor)
{
    var message = _bindingQueue.Dequeue();
    if (message != null)
    {
        ProcessMessageRecursively(messageExecutor, message, 0, 3);
    }
}

private void ProcessMessageRecursively(TMessageExecutor messageExecutor, TMessage message, int retriedCount, int maxRetryCount)
{
    // Consume (i.e. process) the message.
    var result = ExecuteMessage(messageExecutor, message);

    // If processing succeeded, tell the queue to delete the message from the
    // persistent store by calling the Complete method.
    if (result == MessageExecuteResult.Executed)
    {
        _bindingQueue.Complete(message);
    }
    // If processing failed, retry a few times (currently 3). If it still fails,
    // throw the message into a retry queue for permanent timed retries.
    else if (result == MessageExecuteResult.Failed)
    {
        if (retriedCount < maxRetryCount)
        {
            _logger.InfoFormat("Retrying to handle message:{0} for {1} times.", message.ToString(), retriedCount + 1);
            ProcessMessageRecursively(messageExecutor, message, retriedCount + 1, maxRetryCount);
        }
        else
        {
            // The retry queue currently retries every 5 seconds; _retryQueue is
            // a simple in-memory queue, again a BlockingCollection<T>.
            _retryQueue.Add(message);
        }
    }
}

The code should be clear enough, so I won't explain it further.
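One piece the snippet only mentions in a comment is the consumer of _retryQueue. A minimal sketch of that permanently retrying worker might look like this; the names follow the snippet above and the five-second interval matches the comment:

private void StartRetryWorker()
{
    var retryThread = new Thread(() =>
    {
        // Blocks while the retry queue is empty; each failed message is then
        // retried every five seconds until it finally succeeds.
        foreach (var message in _retryQueue.GetConsumingEnumerable())
        {
            while (ExecuteMessage(_messageExecutor, message) != MessageExecuteResult.Executed)
            {
                Thread.Sleep(5000);
            }
            _bindingQueue.Complete(message); // success: remove from the persistent store
        }
    });
    retryThread.IsBackground = true;
    retryThread.Start();
}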

Summary:

This article mainly introduced the design ideas behind the message queues in the ENode framework. Because ENode contains both a command queue and an event queue whose logic is similar, there is also the question of how to abstract and design these queues so as to eliminate duplicated code; but it is getting late, so let's go into that in detail next time.
