Introduction to the Enode Framework: Applying the Staged Event-Driven Architecture Idea

In the previous article, I briefly introduced the design ideas behind the Enode framework's command service API. This article introduces how the Enode framework applies the staged event-driven architecture (SEDA) idea. From the previous article we know that the command service is accessed with high concurrency. Besides executing commands asynchronously and clustering, the most fundamental way to improve the system's response performance is to process a single command as quickly as possible, which allows more commands to be handled per unit of time.

To make the later analysis easier to follow, here is the internal implementation architecture diagram of the Enode framework.

I think there are two main ideas for handling a command as quickly as possible:

Parallelize whatever can be processed in parallel.

When the command service receives a command, it sends it to an available command queue. At the exit end of that queue, if only a single thread handles commands, and that thread performs I/O operations, it certainly cannot be fast: because one thread cannot handle commands as fast as they enter the queue, the queue keeps growing and command-execution latency rises. So the idea is to have multiple threads (the workers in the command processor above) take commands from the command queue at the same time and handle them. This lets multiple threads process different commands concurrently, as the sketch below illustrates.
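Since Enode itself is a C# framework and its source is not shown here, the following is only a minimal Java sketch of this multi-worker pattern; the Command interface and the handler callback are hypothetical stand-ins, not Enode's actual API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical command type; Enode's real command contract differs.
interface Command { String aggregateRootId(); }

public class CommandProcessor {
    private final BlockingQueue<Command> queue = new LinkedBlockingQueue<>();

    // Start N workers that drain the same queue concurrently, so a slow
    // (I/O-bound) handler on one thread does not stall the whole queue.
    public CommandProcessor(int workerCount, java.util.function.Consumer<Command> handler) {
        for (int i = 0; i < workerCount; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        Command command = queue.take(); // blocks until a command arrives
                        handler.accept(command);        // handle it on this worker thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // allow clean shutdown
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    public void enqueue(Command command) { queue.offer(command); }
}
```

With a LinkedBlockingQueue, take() parks idle workers cheaply, and adding workers raises throughput when the handlers are I/O-bound.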

But this alone is not enough; in fact we can do better: the command queue itself can also be multiplied. That is, when the command service receives a command, it first routes the command through a command router to an available command queue, and then sends the command to that queue. The advantage is that behind the command service there are several command queues (two in the diagram), each backed by multiple threads at its exit. In this way we can squeeze the most out of the server's CPU and memory resources. Of course, the framework lets the user configure how many command queues there are and how many threads handle each queue, so the framework's consumer can decide the configuration based on the CPU count of the current server. A router sketch follows below.
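A command router in this spirit can simply hash a routing key to pick one of the configured queues. The sketch below (reusing the hypothetical types from the previous sketch) assumes the aggregate root ID is the routing key, so all commands for the same aggregate land in the same queue.

```java
import java.util.ArrayList;
import java.util.List;

// Routes each command to one of N CommandProcessor queues by hashing its
// aggregate root id; commands for the same aggregate always hit the same queue.
public class CommandRouter {
    private final List<CommandProcessor> processors = new ArrayList<>();

    public CommandRouter(int queueCount, int workersPerQueue,
                         java.util.function.Consumer<Command> handler) {
        for (int i = 0; i < queueCount; i++) {
            processors.add(new CommandProcessor(workersPerQueue, handler));
        }
    }

    public void route(Command command) {
        int index = Math.floorMod(command.aggregateRootId().hashCode(), processors.size());
        processors.get(index).enqueue(command); // send to the selected queue
    }
}
```

The queueCount and workersPerQueue parameters correspond to the two knobs the paragraph above says the framework exposes, so they can be tuned to the server's CPU count.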

Similarly, the domain events produced by the domain model should also be processed in parallel. How, concretely? That is the event processor in the diagram above. The event processor contains multiple workers, each worker being a thread. Each worker takes events out of the event queue and then dispatches them to all event subscribers.
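The event side can be sketched the same way: workers drain an event queue and dispatch each event to every registered subscriber. Again this is an illustrative Java sketch with hypothetical types, not Enode's actual event processor.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

interface DomainEvent { }
interface EventSubscriber { void handle(DomainEvent event); }

public class EventProcessor {
    private final BlockingQueue<DomainEvent> eventQueue = new LinkedBlockingQueue<>();
    private final List<EventSubscriber> subscribers = new CopyOnWriteArrayList<>();

    public EventProcessor(int workerCount) {
        for (int i = 0; i < workerCount; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        DomainEvent event = eventQueue.take();
                        // Dispatch the event further to all event subscribers.
                        for (EventSubscriber subscriber : subscribers) {
                            subscriber.handle(event);
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    public void subscribe(EventSubscriber s) { subscribers.add(s); }
    public void publish(DomainEvent e) { eventQueue.offer(e); }
}
```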

So how does all this logic executing in parallel access shared resources?

For the worker threads in the command processor, it is clear from the architecture diagram above that the shared resources are the event store and the memory cache. We write events to the event store concurrently, and we update aggregate roots in the memory cache concurrently. Both kinds of storage must therefore support highly concurrent writes very well, and must do so efficiently.

After some research, I personally feel MongoDB is well suited as an event store, for four reasons: (1) it supports clustering and sharding; (2) it supports unique indexes; (3) it supports relational-style queries; (4) it has high performance: by default it writes to memory first, appends to the journal every 100 ms, and flushes in-memory data to disk every minute. Based on these four points, we can use MongoDB to build a fairly ideal event store.

As for the memory cache, I think memcached or Redis are both fine choices; both are mature distributed caches. With a distributed cache we need not worry about the data not fitting in memory, because we can partition the data by some feature, an idea similar to splitting a database into multiple databases and tables.

It is important to note that the event store must support strict concurrency-conflict control, and MongoDB's unique index guarantees this; the memory cache, by contrast, does not need concurrency-conflict detection, as long as it guarantees fast reads and writes by key. Because persisting events to the event store already performs the concurrency-conflict detection, updates of the aggregate root's current state in the memory cache are simply applied in the same order in which the events were persisted.

Finally, the event store and the memory cache are in fact shared by the entire Web server cluster. Fortunately, MongoDB, Redis, and similar products are strong enough to support horizontal scaling, so we can be fully confident that as the number of Web servers keeps increasing, we can scale MongoDB and Redis out accordingly, and neither place will become a bottleneck.
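To make the concurrency-conflict detection concrete: a unique compound index on the aggregate root ID plus the event version makes MongoDB reject any second writer that tries to persist the same version of the same aggregate. Below is a minimal sketch using the official MongoDB Java driver; the database, collection, and field names are my own assumptions, not Enode's actual schema.

```java
import com.mongodb.ErrorCategory;
import com.mongodb.MongoWriteException;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.IndexOptions;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class MongoEventStore {
    private final MongoCollection<Document> events;

    public MongoEventStore(MongoClient client) {
        events = client.getDatabase("eventstore").getCollection("events");
        // Unique compound index: only one event per (aggregate, version) may exist.
        events.createIndex(
                Indexes.ascending("aggregateRootId", "version"),
                new IndexOptions().unique(true));
    }

    // Returns true if the event was persisted, false on a concurrency conflict.
    public boolean tryAppend(String aggregateRootId, long version, Document payload) {
        try {
            events.insertOne(new Document("aggregateRootId", aggregateRootId)
                    .append("version", version)
                    .append("payload", payload));
            return true;
        } catch (MongoWriteException e) {
            if (e.getError().getCategory() == ErrorCategory.DUPLICATE_KEY) {
                return false; // another thread/server already wrote this version
            }
            throw e;
        }
    }
}
```

When tryAppend returns false, another thread or server persisted that version first; the caller can reload the aggregate's latest state and retry, which is the usual optimistic-concurrency loop.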
