[Repost] Message Queue Application Scenarios
I. Message Queue Overview
Message queue middleware is an important component of a distributed system. It mainly solves problems such as application coupling, asynchronous messaging, and traffic peak shaving, helping to achieve a high-performance, highly available, scalable, and eventually consistent architecture. It is an indispensable piece of middleware for large-scale distributed systems.
Currently, many message queues are used in the production environment, such as ActiveMQ, RabbitMQ, ZeroMQ, Kafka, MetaMQ, and RocketMQ.
II. Message Queue Application Scenarios
The following describes common use cases of message queues in practical applications: asynchronous processing, application decoupling, traffic peak shaving, and log processing.
2.1 Asynchronous Processing
Scenario description: after a user registers, the system needs to send a registration email and a registration text message. There are two traditional approaches: serial mode and parallel mode.
(1) Serial mode: after the registration information is successfully written to the database, the registration email is sent, and then the registration text message. Only after all three tasks finish is the response returned to the client.
(2) Parallel mode: after the registration information is successfully written to the database, the registration email and the registration text message are sent at the same time. After the three tasks finish, the response is returned to the client. The difference from serial mode is that parallel processing reduces the total processing time.
Assume that each of the three business steps takes 50 ms. Ignoring other overhead such as the network, the serial time is 150 ms, while the parallel time may be 100 ms.
Because the number of requests a CPU can process per unit time is fixed, assume a throughput of 100 requests per second. In serial mode the CPU can then handle about 7 requests per second (1000/150); in parallel mode, 10 (1000/100).
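The arithmetic above can be sanity-checked with a short sketch (the 50 ms per-step cost is the article's assumption):

```python
# Back-of-the-envelope numbers for the three designs discussed above.
# Each step (DB write, email, SMS) is assumed to take 50 ms.
STEP_MS = 50

latencies = {
    "serial": 3 * STEP_MS,    # 150 ms: steps run one after another
    "parallel": 2 * STEP_MS,  # 100 ms: email and SMS overlap after the DB write
    "queued": 1 * STEP_MS,    #  50 ms: only the DB write is on the request path
}

throughput = {name: round(1000 / ms) for name, ms in latencies.items()}
print(throughput)  # → {'serial': 7, 'parallel': 10, 'queued': 20}
```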
Summary: as the case above shows, the performance (concurrency, throughput, response time) of the traditional design hits a bottleneck. How can this problem be solved?
When a message queue is introduced, the business logic that the client does not need to wait for is processed asynchronously. The transformed architecture is as follows:
Under the same assumptions, the user's response time is essentially the time needed to write the registration information to the database, i.e. 50 ms. The registration email and text message jobs are written to the message queue, and the response returns immediately; since writing to the message queue is fast, its cost can be ignored. The user's response time is therefore about 50 ms. After this change, the system throughput rises to 20 QPS: about three times the serial mode and twice the parallel mode.
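The asynchronous design above can be sketched with an in-process queue; here `queue.Queue` merely stands in for a real broker such as RabbitMQ or Kafka, and the names (`register`, `worker`) are illustrative:

```python
import queue
import threading

# Minimal sketch of the asynchronous design: the request handler writes to
# the database, enqueues the slow notification jobs, and returns at once.
jobs = queue.Queue()
sent = []  # records what the background worker has processed

def worker():
    while True:
        task = jobs.get()
        kind, user = task
        sent.append((kind, user))  # a real worker would send the email/SMS here
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def register(user):
    # 1. Write the registration info to the database (the only step the
    #    client waits for, ~50 ms in the article's example).
    record = {"user": user}
    # 2. Hand the email and SMS jobs off to the queue and return immediately.
    jobs.put(("email", user))
    jobs.put(("sms", user))
    return record

register("alice")
jobs.join()   # in this demo, wait for the background work before printing
print(sent)   # → [('email', 'alice'), ('sms', 'alice')]
```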
2.2 Application decoupling
Scenario description: after a user places an order, the order system must notify the inventory system. Traditionally, the order system calls the inventory system's interface directly, as follows:
Disadvantages of the traditional model:
1) If the inventory system cannot be accessed, the inventory reduction fails, causing the order to fail;
2) The order system and the inventory system are tightly coupled.
How can we solve these problems? Introduce a message queue, as follows:
- Order System: After a user places an order, the order system completes persistent processing, writes the message to the message queue, and returns the result that the order is placed successfully.
- Inventory System: subscribe to the order message. Pull/push is used to obtain the order information. The inventory system performs inventory operations based on the order information.
- Suppose the inventory system is unavailable when an order is placed. This does not affect order placement: once the order is written to the message queue, the order system no longer cares about subsequent operations. The order system is thus decoupled from the inventory system.
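The decoupling above can be sketched with a tiny in-memory publish/subscribe broker (a hypothetical stand-in for RabbitMQ, Kafka, etc.; class and function names are illustrative):

```python
from collections import defaultdict

class Broker:
    """A toy pub/sub broker: topics map to lists of subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver to all subscribers; a production broker would also persist
        # the message so a temporarily-down subscriber can catch up later.
        for handler in self.subscribers[topic]:
            handler(message)

broker = Broker()
stock = {"widget": 10}

# Inventory system: subscribes to order messages and reduces stock.
def on_order(order):
    stock[order["sku"]] -= order["qty"]

broker.subscribe("orders", on_order)

# Order system: persists the order, publishes the message, returns success.
# It holds no reference to the inventory system at all.
def place_order(sku, qty):
    broker.publish("orders", {"sku": sku, "qty": qty})
    return "order placed successfully"

place_order("widget", 2)
print(stock)  # → {'widget': 8}
```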
2.3 Traffic Peak Shaving
Peak shaving is also a common message queue scenario, widely used in flash sale (seckill) and group-buying activities.
Application scenario: a flash sale typically causes traffic to surge, and excessive traffic can bring the application down. To solve this problem, a message queue is generally added in front of the application to buffer incoming requests.
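A minimal sketch of this idea, assuming a bounded in-memory queue in front of the business logic (the capacity of 100 and the function names are illustrative): requests beyond the queue's capacity are rejected instead of overwhelming the backend.

```python
import queue

# Bounded queue sized to what the backend can tolerate during a spike.
buffer = queue.Queue(maxsize=100)

def accept_request(req):
    try:
        buffer.put_nowait(req)  # fast path: enqueue and return immediately
        return "accepted"
    except queue.Full:
        return "rejected"       # shed the excess load during the spike

# Simulate a spike of 150 requests against a capacity of 100.
results = [accept_request(i) for i in range(150)]
print(results.count("accepted"), results.count("rejected"))  # → 100 50
```

The business logic would then consume from `buffer` at its own sustainable pace rather than at the spike's arrival rate.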
2.4 Log Processing
Log processing refers to the use of message queues, such as Kafka, in log pipelines to solve the problem of transmitting massive volumes of log data. The simplified architecture is as follows:
- The log collection client collects log data and periodically writes it to the Kafka queue;
- The Kafka message queue receives, stores, and forwards the log data;
- Log processing applications subscribe to and consume the log data in the Kafka queue.
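The three roles above can be sketched with an in-memory queue standing in for the Kafka topic (a real deployment would use a Kafka producer/consumer client; the function names here are illustrative):

```python
import queue
import threading

log_topic = queue.Queue()  # stand-in for the Kafka log topic
processed = []

def collector(lines):
    """Log collection client: ships raw log lines into the queue."""
    for line in lines:
        log_topic.put(line)
    log_topic.put(None)  # sentinel marking the end of this demo stream

def processor():
    """Log processing application: consumes and transforms the log lines."""
    while True:
        line = log_topic.get()
        if line is None:
            break
        processed.append(line.upper())  # placeholder for real processing

t = threading.Thread(target=processor)
t.start()
collector(["GET /index 200", "POST /login 500"])
t.join()
print(processed)  # → ['GET /INDEX 200', 'POST /LOGIN 500']
```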
The following is an example of Sina's Kafka-based log processing: