As a key component for building scalable, elastic distributed systems, a distributed messaging system needs high throughput and high availability. When designing such a system, two questions are unavoidable: how to handle message ordering, and how to handle message duplication.
RocketMQ is a high-performance, high-throughput message middleware open-sourced by Alibaba. How does it solve these two problems? What are RocketMQ's key features, and how are they implemented?

Key Features and Their Implementation Principles

I. Sequential Messages
Message ordering means that messages are consumed in the same order in which they were sent. For example, an order produces three messages: order created, order paid, order completed. These only make sense if they are consumed in that order; at the same time, messages belonging to different orders can be consumed concurrently. Consider the following example:
Suppose a producer produces two messages, M1 and M2. How do we guarantee their order? Your first thought is probably something like this:
You might try to guarantee message order this way
Assume M1 is sent to server S1 and M2 is sent to server S2. To ensure M1 is consumed before M2, S2 must wait until M1 has reached the consumer and been consumed, be notified of that fact, and only then deliver M2 to the consumer.
The problem with this model is that if M1 and M2 are sent to two different servers, there is no guarantee that M1 reaches the MQ cluster first, let alone that M1 is consumed first. In other words, if M2 reaches the MQ cluster before M1, or is even consumed before M1 arrives at the consumer, the messages are out of order, so this model cannot guarantee ordering. How, then, can an MQ cluster guarantee message order? A simple approach is to send M1 and M2 to the same server:
An improved approach to guaranteeing message order
This guarantees that M1 reaches the MQServer before M2 (the producer waits for M1 to be sent successfully before sending M2). Following the first-in-first-consumed principle, M1 will be consumed before M2, which preserves the message order.
This model only guarantees message order in theory; in practice, the following problems can arise:
Network latency issues
As long as messages travel from one server to another over a network, latency is a factor. As shown in the figure above, if sending M1 takes longer than sending M2, M2 may still be consumed first, so ordering is still not guaranteed. Even if M1 and M2 reach the consumer side at the same time, M2 may still be consumed before M1, because we know nothing about the respective loads of consumer 1 and consumer 2.
How do we solve this? Send M1 and M2 to the same consumer, and after sending M1, wait for the consumer's successful acknowledgement before sending M2.
You may have spotted another problem: if M1 is sent to the consumer but consumer 1 does not respond, should we go ahead and send M2, or resend M1? To ensure the message is eventually consumed, we would normally choose to resend M1 to another consumer, consumer 2, as shown in the figure below.
The correct way to guarantee message order
This model strictly guarantees message order, but a careful reader will still spot a problem. Consumer 1 may fail to respond to the server for two reasons: either M1 never arrived (the data was lost in transit), or consumer 1 did consume M1 and sent a response that the MQ server never received. In the second case, resending M1 causes it to be consumed twice. This brings us to the second question we want to discuss, message duplication, which is covered in detail later in this article.
Returning to the ordering problem: strictly ordered messages are easy to understand and can be handled in the way described above. To sum up, a simple and workable way to achieve strictly ordered messaging is:
Keep the producer, the MQServer, and the consumer in a one-to-one relationship.
This design is simple to implement, but it has some serious drawbacks: the lack of parallelism makes the messaging system a throughput bottleneck, and it requires more exception handling, for example, any problem on the consumer side blocks the entire processing chain, and we have to spend extra effort to resolve that blocking.
But our ultimate goal is high fault tolerance and high throughput across a cluster. These seem to be irreconcilable, so how does Alibaba resolve the conflict?
The simplest way in the world to solve a computer problem: it "just happens" not to need solving. -- Shen Yu
Some problems look important, but in practice we can circumvent them through reasonable design or by decomposing the problem. Spending time attacking the problem head-on is not only inefficient but also wasteful. From this point of view, we can draw two conclusions about message ordering:
- Most applications do not actually care about out-of-order messages.
- An unordered queue does not mean the messages themselves are unordered.
So isn't guaranteeing message order at the business level, rather than relying solely on the messaging system, the more reasonable approach?
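To make the business-level idea concrete, here is a minimal sketch (not RocketMQ code): the consumer tracks each order's last processed step and only applies a message when its predecessor has been handled, deferring anything that arrives early. The OrderStateStore interface and the step numbering are assumptions for illustration.

    // Business-level ordering sketch: apply steps in order regardless of delivery order.
    // OrderStateStore is a hypothetical persistent store of each order's progress.
    enum OrderStep { NONE, CREATED, PAID, COMPLETED }

    interface OrderStateStore {
        OrderStep currentStep(String orderId);           // last successfully processed step (NONE initially)
        void advance(String orderId, OrderStep step);    // atomically record the new step
    }

    class OrderMessageHandler {
        private final OrderStateStore store;

        OrderMessageHandler(OrderStateStore store) { this.store = store; }

        /** Returns true if the message was handled; false means "redeliver later". */
        boolean handle(String orderId, OrderStep step) {
            OrderStep current = store.currentStep(orderId);
            if (step.ordinal() <= current.ordinal()) {
                return true;                              // duplicate or stale message: ignore it
            }
            if (step.ordinal() != current.ordinal() + 1) {
                return false;                             // predecessor not processed yet: defer
            }
            // ... run the business logic for this step, then record the progress ...
            store.advance(orderId, step);
            return true;
        }
    }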
Finally, let's look at how RocketMQ sends ordered messages, from the source code's point of view.
By default, RocketMQ decides which queue a message is sent to by polling all queues (its load-balancing policy). In the following example, messages with the same order ID are sent to the same queue:
RocketMQ decides which queue a message is sent to via the algorithm implemented in MessageQueueSelector:
    // RocketMQ provides two MessageQueueSelector implementations by default: random and hash.
    // You can of course implement your own MessageQueueSelector, based on your business,
    // to decide which queue a message goes to.
    SendResult sendResult = producer.send(msg, new MessageQueueSelector() {
        @Override
        public MessageQueue select(List<MessageQueue> mqs, Message msg, Object arg) {
            Integer id = (Integer) arg;
            int index = id % mqs.size();
            return mqs.get(index);
        }
    }, orderId);
On the sending path, once the routing information has been obtained, a queue is selected using the algorithm implemented in MessageQueueSelector, so the same orderId always maps to the same queue:
    private SendResult send() {
        // Get the topic routing information
        TopicPublishInfo topicPublishInfo = this.tryToFindTopicPublishInfo(msg.getTopic());
        if (topicPublishInfo != null && topicPublishInfo.ok()) {
            MessageQueue mq = null;
            // Select a send queue according to our selector algorithm; here arg = orderId
            mq = selector.select(topicPublishInfo.getMessageQueueList(), msg, arg);
            if (mq != null) {
                return this.sendKernelImpl(msg, mq, communicationMode, sendCallback, timeout);
            }
        }
    }
II. Duplicate Messages
Solving the ordering problem introduced a new one: message duplication. So how does RocketMQ solve it? Again, by arranging things so that it "just happens" not to need solving.
The root cause of message duplication is that the network is unreliable: as long as data is exchanged over a network, duplication cannot be ruled out, so the solution is again to get around the problem. The question then becomes: if the consumer receives the same message twice, what should it do?
- Make the consumer-side business logic that processes messages idempotent.
- Give every message a unique ID, and ensure that successful processing and the write to a de-duplication log table happen together.
The first point is easy to understand: as long as processing is idempotent, no matter how many duplicates arrive, the final result is the same. The principle behind the second point is to use a log table to record the IDs of messages that have been processed successfully; if a newly arrived message's ID is already in the log table, the message is not processed again.
The first approach should clearly be implemented on the consumer side; it is not functionality the messaging system should provide. The second can be implemented by either the messaging system or the business side. Since duplicate messages are normally rare, implementing de-duplication in the messaging system would inevitably hurt its throughput and availability, so it is best left to the business side. This is why RocketMQ does not solve the message-duplication problem itself.
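To make the second point concrete, here is a minimal sketch of business-side de-duplication (not part of RocketMQ), assuming a table such as CREATE TABLE consumed_message (msg_key VARCHAR(64) PRIMARY KEY); the table name and JDBC wiring are assumptions. The key idea is that inserting the message's unique key and running the business logic commit or roll back together.

    // Consumer-side de-duplication sketch; the consumed_message table and DataSource are assumptions.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    class DedupConsumer {
        private final DataSource dataSource;

        DedupConsumer(DataSource dataSource) { this.dataSource = dataSource; }

        /** Returns true if the message was processed now or earlier; false means "retry later". */
        boolean consumeOnce(String msgKey, Runnable businessLogic) {
            try (Connection conn = dataSource.getConnection()) {
                conn.setAutoCommit(false);
                try (PreparedStatement ps =
                             conn.prepareStatement("INSERT INTO consumed_message (msg_key) VALUES (?)")) {
                    ps.setString(1, msgKey);
                    ps.executeUpdate();          // a duplicate key fails here -> already processed
                    businessLogic.run();         // business handling inside the same local transaction
                    conn.commit();
                    return true;
                } catch (SQLException insertFailed) {
                    conn.rollback();
                    // SQLState 23xxx = integrity constraint violation: the message was already handled.
                    return insertFailed.getSQLState() != null && insertFailed.getSQLState().startsWith("23");
                } catch (RuntimeException businessFailed) {
                    conn.rollback();
                    return false;                // business logic failed: let the MQ redeliver
                }
            } catch (SQLException e) {
                return false;
            }
        }
    }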
RocketMQ does not guarantee that messages will not be duplicated. If your business strictly requires no duplicates, you must de-duplicate on the business side.

III. Transaction Messages
Besides ordinary and ordered messages, RocketMQ also supports transactional messages. Let's first discuss what a transactional message is and why it is needed, using a transfer scenario as an example: Bob transfers 100 dollars to Smith.
In a single-machine environment, the transaction would execute roughly like this:
The transfer transaction in a single-machine environment
Once the user base grows to a certain size, Bob's and Smith's account and balance information no longer live on the same server, and the process above becomes this:
The transfer transaction in a cluster environment
At this point you will notice that the very same transfer now takes several times longer in the cluster environment, which is clearly unacceptable. So how do we get around this?
Large transaction = small transaction + asynchronous message
Split the large transaction into small local transactions plus asynchronous messages. This brings the execution efficiency of a cross-machine transaction roughly back to that of a single machine. The transfer transaction can be decomposed into the following two small transactions:
Small Transaction + Asynchronous message
Executing the local transaction (debiting Bob's account) and sending the asynchronous message must succeed or fail together: if the debit succeeds, the message must be sent successfully; if the debit fails, the message must not be sent. The question is: do we debit first or send the message first?
Let's first look at sending the message first; the rough flow is as follows:
Transactional message: send the message first
The problem: if the message is sent successfully but the debit fails, the consumer will still consume the message and credit Smith's account.
Since sending the message first does not work, let's debit first; the rough flow is as follows:
Transactional message: debit first
The problem is symmetrical: if the debit succeeds but sending the message fails, Bob's account is debited while Smith's account is never credited.
There may be many ways to solve this, for example putting the message send inside the transaction that debits Bob: if the send fails, throw an exception and roll the transaction back. This also follows the principle that the problem "just happens" not to need solving.
A note here: if you manage transactions with Spring, you can put the message-sending logic inside the local transaction and throw an exception when the send fails; Spring catches the exception and rolls the transaction back, which keeps the local transaction and the message send atomic.
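A minimal sketch of that pattern, assuming Spring transaction management; AccountDao, TransferMessageSender, and the account/amount parameters are hypothetical names, and only the shape of the code (send inside the @Transactional method, throw to roll back) is the point.

    // Sketch: keep the local debit and the message send atomic via Spring's transaction rollback.
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    interface AccountDao { void debit(String account, long amountInCents); }
    interface TransferMessageSender { boolean sendCreditMessage(String account, long amountInCents); }

    @Service
    public class TransferService {
        private final AccountDao accountDao;                 // debits Bob's balance
        private final TransferMessageSender messageSender;   // wraps the MQ producer

        public TransferService(AccountDao accountDao, TransferMessageSender messageSender) {
            this.accountDao = accountDao;
            this.messageSender = messageSender;
        }

        @Transactional
        public void transfer(String fromAccount, String toAccount, long amountInCents) {
            // 1. Local transaction: debit Bob
            accountDao.debit(fromAccount, amountInCents);
            // 2. Send the "credit Smith" message inside the same transaction;
            //    if the send fails, throw so that Spring rolls the debit back as well.
            if (!messageSender.sendCreditMessage(toAccount, amountInCents)) {
                throw new IllegalStateException("credit message not sent, rolling back debit");
            }
        }
    }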
RocketMQ supports transactional messages natively; let's see how it implements them.
How RocketMQ sends a transactional message
In the first phase, RocketMQ sends a prepared message and obtains the message's address; in the second phase it executes the local transaction; in the third phase it uses the address obtained in the first phase to access the prepared message and update its state.
The careful reader may spot a problem again: what if sending the confirmation message fails? RocketMQ periodically scans the messages in the cluster, and when it finds a prepared message it asks the message sender (the producer) whether Bob's money was actually debited, and whether to roll back or go ahead and confirm. RocketMQ then decides, based on the policy the sender has configured, whether to roll the message back or to send the confirmation. This ensures that the message send and the local transaction succeed or fail together.
Now let's look at how the RocketMQ source code handles transactional messages. First, the client side that sends the transactional message (see the complete code in the rocketmq-example project, com.alibaba.rocketmq.example.transaction.TransactionProducer):
    // ========================= Preparation for sending a transactional message =========================
    // For unfinished transactions, the MQ server calls back into the client:
    // when RocketMQ finds a "prepared message", it decides the fate of the transaction
    // according to the strategy implemented by this listener.
    TransactionCheckListener transactionCheckListener = new TransactionCheckListenerImpl();
    // Construct the transactional message producer
    TransactionMQProducer producer = new TransactionMQProducer("groupName");
    // Set the transaction check (decision) handler
    producer.setTransactionCheckListener(transactionCheckListener);
    // The local transaction executor, equivalent to the logic that checks Bob's account and debits the money in our example
    TransactionExecuterImpl tranExecuter = new TransactionExecuterImpl();
    producer.start();
    // Construct the message (constructor arguments omitted)
    Message msg = new Message(...);
    // Send the message
    SendResult sendResult = producer.sendMessageInTransaction(msg, tranExecuter, null);
    producer.shutdown();
Now look at the source of sendMessageInTransaction, which splits into three stages: sending the prepared message, executing the local transaction, and sending the confirmation message.
    // ============================ Transactional message sending process ============================
    public TransactionSendResult sendMessageInTransaction(...) {
        // Simplified logic, not the actual code
        // 1. Send the prepared message
        sendResult = this.send(msg);
        // sendResult.getSendStatus() == SEND_OK
        // 2. If the message was sent successfully, execute the local transaction unit associated with it
        LocalTransactionState localTransactionState = tranExecuter.executeLocalTransactionBranch(msg, arg);
        // 3. End the transaction (send the confirmation)
        this.endTransaction(sendResult, localTransactionState, localException);
    }
The endTransaction method sends a request to the broker (the MQ server) to update the final state of the transactional message: the prepared message is located via sendResult (which contains the transactional message's ID), and its final state is updated according to localTransactionState.
If endTransaction fails and the update never reaches the broker, so the transactional message's state is not updated, the broker has a callback thread that periodically (every minute by default) scans each stored transaction-state table file. Messages that have already been committed or rolled back are skipped; for a message still in the prepared state, the broker sends a checkTransaction request to the producer. The producer handles this periodic callback in DefaultMQProducerImpl.checkTransactionState(), which invokes the transaction-check listener we configured to decide whether to roll the transaction back or let it proceed, and finally calls endTransactionOneway so the broker updates the message's final state.
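A minimal sketch of such a check callback, assuming the pre-3.2.6 com.alibaba.rocketmq client API used in the producer code above; TransferRecordDao, and using the message key as the transfer ID, are assumptions for illustration.

    // Sketch of the transaction check listener the broker calls back for prepared messages.
    import com.alibaba.rocketmq.client.producer.LocalTransactionState;
    import com.alibaba.rocketmq.client.producer.TransactionCheckListener;
    import com.alibaba.rocketmq.common.message.MessageExt;

    interface TransferRecordDao { boolean debitCommitted(String transferId); }

    public class TransactionCheckListenerImpl implements TransactionCheckListener {
        private final TransferRecordDao transferRecordDao;

        public TransactionCheckListenerImpl(TransferRecordDao transferRecordDao) {
            this.transferRecordDao = transferRecordDao;
        }

        @Override
        public LocalTransactionState checkLocalTransactionState(MessageExt msg) {
            // The broker found a prepared message and asks: did Bob's local debit actually commit?
            String transferId = msg.getKeys();   // assuming the transfer ID was put into the message key
            if (transferRecordDao.debitCommitted(transferId)) {
                return LocalTransactionState.COMMIT_MESSAGE;    // confirm: deliver the message to consumers
            }
            return LocalTransactionState.ROLLBACK_MESSAGE;      // the debit never happened: discard it
        }
    }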
Back to the transfer example: once Bob's balance has been debited and the message has been sent successfully, Smith's side starts consuming the message. At this point two problems can occur: consumption failure and consumption timeout. The way to handle the timeout is to keep retrying until the consumer processes the message successfully; the whole process may then produce duplicate messages, which are handled with the approach described earlier.
Consuming the transactional message
This largely solves the timeout problem on the consumer side, but what if consumption outright fails? The solution Alibaba offers is: handle it manually. Think about it in terms of the transfer flow: if crediting Smith fails for some reason, the whole flow has to be rolled back. If the messaging system had to implement that rollback, its complexity would increase enormously and it would be very prone to bugs, and the probability of such bugs is far greater than the probability of consumption failure. This is also why RocketMQ currently does not solve this problem. When designing and implementing a messaging system, we must weigh whether it is worth the cost to handle such an extremely unlikely case; this is where solving hard problems requires a great deal of thought.
Update 2016-03-21: the transactional message implementation has been removed since version 3.2.6, so that version no longer supports transactional messages; see the following RocketMQ issues:
https://github.com/alibaba/RocketMQ/issues/65
https://github.com/alibaba/RocketMQ/issues/138
https://github.com/alibaba/RocketMQ/issues/156

IV. How the Producer Sends Messages
The producer load-balances on the sending side by polling all the queues under a topic, as shown in the figure below:
Producer send-side load balancing
First, let's walk through the RocketMQ client's message-sending source code:
    // Construct the producer
    DefaultMQProducer producer = new DefaultMQProducer("ProducerGroupName");
    // Initialize the producer; this only needs to be done once in the application's lifecycle
    producer.start();
    // Construct the message
    Message msg = new Message("TopicTest1",          // topic
            "TagA",                                  // tag: labels a class of messages for filtering, may be null
            "OrderID188",                            // key: custom key, can be used for de-duplication, may be null
            ("Hello MetaQ").getBytes());             // body: message content
    // Send the message and get the result
    SendResult sendResult = producer.send(msg);
    // Clean up resources, close network connections, unregister the producer
    producer.shutdown();
Over the application's lifecycle, the producer calls start() once to initialize. Initialization mainly completes these tasks:
- If no namesrv address was specified, resolve one automatically.
- Start scheduled tasks: update the namesrv address, refresh topic routing information from namesrv, clean up brokers that have gone down, send heartbeats to all brokers, and so on.
- Start the load-balancing service.
Once initialization is complete, messages can be sent; the core sending code is as follows:
    private SendResult sendDefaultImpl(Message msg, ...) {
        // Check that the producer is in the RUNNING state
        this.makeSureStateOK();
        // Check that msg is valid: not null, topic/body not empty, body not too long
        Validators.checkMessage(msg, this.defaultMQProducer);
        // Get the topic routing information
        TopicPublishInfo topicPublishInfo = this.tryToFindTopicPublishInfo(msg.getTopic());
        // Select a message queue from the routing information
        MessageQueue mq = topicPublishInfo.selectOneMessageQueue(lastBrokerName);
        // Send the message to the selected queue
        sendResult = this.sendKernelImpl(msg, mq, communicationMode, sendCallback, timeout);
    }
Two methods in this code deserve attention: tryToFindTopicPublishInfo and selectOneMessageQueue. As mentioned earlier, when the producer initializes, it starts a scheduled task that fetches routing information and updates the local cache, so tryToFindTopicPublishInfo first looks up the topic's routing information in the cache, and only if that fails does it fetch the routing information from namesrv itself. selectOneMessageQueue returns a queue in round-robin fashion to achieve load balancing.
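A minimal sketch of that round-robin selection, based only on the behavior described above (the real selectOneMessageQueue also uses lastBrokerName to avoid the broker of the previous failed send, which is omitted here); MessageQueueRef is a stand-in type:

    // Round-robin queue selection sketch.
    import java.util.List;
    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.atomic.AtomicInteger;

    class RoundRobinQueueSelector<MessageQueueRef> {
        // Start from a random value so producers started at the same time spread their load.
        private final AtomicInteger sendWhichQueue =
                new AtomicInteger(ThreadLocalRandom.current().nextInt(1000));

        MessageQueueRef selectOneMessageQueue(List<MessageQueueRef> queues) {
            int pos = sendWhichQueue.getAndIncrement() % queues.size();
            if (pos < 0) {
                pos = 0;   // guard against overflow of the counter
            }
            return queues.get(pos);
        }
    }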
If sending a message fails, the producer retries automatically. The retry policy is:
- number of retries < retryTimesWhenSendFailed (configurable);
- total time spent (including retries) < sendMsgTimeout (a parameter passed in when sending).
If both conditions hold, the producer selects another queue and sends the message again.

V. Message Storage
RocketMQ's message storage is built on two structures: the ConsumeQueue and the CommitLog.

1. Consume Queue
The ConsumeQueue is the logical queue of messages. Like a dictionary's index, it records where each message is located in the physical CommitLog file.
The storage directories for the ConsumeQueue and the CommitLog can be specified in the configuration.
Every queue under every topic has a corresponding ConsumeQueue file, for example:
${rocketmq.home}/store/consumequeue/${topicname}/${queueid}/${filename}
The ConsumeQueue files are organized as shown in the figure:
ConsumeQueue file organization diagram
- Files are organized by topic and queueId. In the figure, TopicA has two queues, 0 and 1: TopicA with queueId=0 forms one ConsumeQueue, and TopicA with queueId=1 forms another.
- Retry queues are grouped by the consumer group name; if consumption fails, the message is sent to the retry queue, such as %RETRY%ConsumerGroupA in the figure.
- Dead-letter queues are also grouped by the consumer group name; if consumption still fails after the configured number of retries, the message is sent to the dead-letter queue, such as %DLQ%ConsumerGroupA in the figure.
A dead-letter queue is generally used to hold messages that cannot be delivered for some reason, for example messages whose processing keeps failing, or messages that have expired.
Each storage unit in the ConsumeQueue is a fixed-length 20-byte binary record, and the file is read sequentially, as shown below:
ConsumeQueue storage unit format:
- CommitLog Offset (8 bytes): the actual offset of this message within the CommitLog file.
- Size (4 bytes): the size of the stored message.
- Message Tag Hashcode (8 bytes): the hash of the message's tag, used mainly for message filtering on subscription (if a tag is specified when subscribing, matching messages can be found quickly by hashcode).
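A minimal sketch of reading one such entry with a ByteBuffer, assuming the 8 + 4 + 8 byte big-endian layout described above; the class and field names are illustrative only:

    // Parse a 20-byte ConsumeQueue entry: 8-byte commit log offset, 4-byte size, 8-byte tag hashcode.
    import java.nio.ByteBuffer;

    final class ConsumeQueueEntry {
        static final int ENTRY_SIZE = 20;

        final long commitLogOffset;   // where the full message lives in the CommitLog
        final int size;               // how many bytes to read from the CommitLog
        final long tagsCode;          // tag hashcode, compared before fetching the message body

        private ConsumeQueueEntry(long commitLogOffset, int size, long tagsCode) {
            this.commitLogOffset = commitLogOffset;
            this.size = size;
            this.tagsCode = tagsCode;
        }

        /** Reads the entry at logical index {@code index} from a mapped ConsumeQueue buffer. */
        static ConsumeQueueEntry readAt(ByteBuffer consumeQueueFile, int index) {
            ByteBuffer buf = consumeQueueFile.duplicate();
            buf.position(index * ENTRY_SIZE);
            return new ConsumeQueueEntry(buf.getLong(), buf.getInt(), buf.getLong());
        }
    }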
2. Commit Log

The CommitLog holds the physical message files. Each broker has a single CommitLog that is shared by all of its local queues, with no per-queue separation.
Its default location is shown below and can also be changed in the configuration file:
${user.home}/store/${commitlog}/${fileName}
The CommitLog's storage units are variable-length; the file is written sequentially and read randomly. The storage structure of a message is shown in the figure below, with the fields stored in numbered order together with their lengths.
Commit Log storage unit structure

3. Message Store Implementation
The message store implementation is fairly complex and worth studying in depth; I will analyze it in a separate article (material is still being collected). This section only walks through the main flow in code.
    // Set the store timestamp
    msg.setStoreTimestamp(System.currentTimeMillis());
    // Set the message body CRC (it may be most appropriate to set this on the client)
    msg.setBodyCRC(UtilAll.crc32(msg.getBody()));
    StoreStatsService storeStatsService = this.defaultMessageStore.getStoreStatsService();
    synchronized (this) {
        long beginLockTimestamp = this.defaultMessageStore.getSystemClock().now();
        // The store timestamp is set again here, to guarantee a global order
        msg.setStoreTimestamp(beginLockTimestamp);
        // MapedFile: the in-memory mapping of the physical file; persists in-memory data to the physical file
        MapedFile mapedFile = this.mapedFileQueue.getLastMapedFile();
        // Append the message to the CommitLog file
        result = mapedFile.appendMessage(msg, this.appendMessageCallback);
        switch (result.getStatus()) {
            case PUT_OK: break;
            case END_OF_FILE:
                // Create a new file and re-write the message
                mapedFile = this.mapedFileQueue.getLastMapedFile();
                result = mapedFile.appendMessage(msg, this.appendMessageCallback);
                break;
        }
        DispatchRequest dispatchRequest = new DispatchRequest(
                topic, queueId,                                          // 1, 2
                result.getWroteOffset(), result.getWroteBytes(),         // 3, 4
                tagsCode, msg.getStoreTimestamp(),                       // 5, 6
                result.getLogicsOffset(), msg.getKeys(),                 // 7, 8
                msg.getSysFlag(), msg.getPreparedTransactionOffset());   // 9, 10 (transaction fields)
        // 1. Dispatch the message position to the ConsumeQueue
        // 2. Dispatch to the IndexService to build the index
        this.defaultMessageStore.putDispatchRequest(dispatchRequest);
    }
4. Message Index Files
If a message contains a key, its index is stored in an IndexFile, whose structure is shown below:
Message index
Index files are mainly used to look up messages by key. The lookup process is roughly:
- Compute hashCode(key) % slotNum to find the slot position (slotNum is the maximum number of slots in one index file, slotNum=5000000 in the figure).
- Read the slot's value (slotValue), which always points to the latest index item, i.e. the last entry in that slot's index-item list.
- Starting from that entry, traverse the index-item list in reverse order and return the entries that fall within the queried time range (by default at most 32 records are returned).
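A minimal in-memory sketch of that hash-slot lookup (not the actual on-disk IndexFile format); the class and field names are assumptions:

    // Hash-slot index lookup modeled in memory; the real IndexFile keeps slots and items on disk.
    import java.util.ArrayList;
    import java.util.List;

    class MessageIndex {
        static final int SLOT_NUM = 5_000_000;   // maximum number of slots, as in the figure
        static final int MAX_RESULTS = 32;       // default cap on returned records per query

        static class IndexItem {
            final int keyHash; final long commitLogOffset; final long storeTimestamp;
            final IndexItem prevItem;             // previous item in the same slot's chain
            IndexItem(int keyHash, long commitLogOffset, long storeTimestamp, IndexItem prevItem) {
                this.keyHash = keyHash; this.commitLogOffset = commitLogOffset;
                this.storeTimestamp = storeTimestamp; this.prevItem = prevItem;
            }
        }

        private final IndexItem[] slots = new IndexItem[SLOT_NUM];

        void put(String key, long commitLogOffset, long storeTimestamp) {
            int hash = positiveHash(key);
            int slot = hash % SLOT_NUM;
            // The slot always points at the newest item; each new item links back to the previous head.
            slots[slot] = new IndexItem(hash, commitLogOffset, storeTimestamp, slots[slot]);
        }

        List<Long> query(String key, long beginTime, long endTime) {
            int hash = positiveHash(key);
            List<Long> offsets = new ArrayList<>();
            // Walk the chain newest-to-oldest, filtering by key hash and time range.
            for (IndexItem item = slots[hash % SLOT_NUM];
                 item != null && offsets.size() < MAX_RESULTS; item = item.prevItem) {
                if (item.keyHash == hash && item.storeTimestamp >= beginTime && item.storeTimestamp <= endTime) {
                    offsets.add(item.commitLogOffset);
                }
            }
            return offsets;
        }

        private static int positiveHash(String key) {
            int h = key.hashCode();
            return h == Integer.MIN_VALUE ? 0 : Math.abs(h);
        }
    }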
VI. Message Subscription

RocketMQ offers two subscription modes: push mode, in which the MQServer actively pushes messages to the consumer, and pull mode, in which the consumer actively pulls from the MQServer when it wants messages. In the actual implementation, however, both push and pull are built on the consumer actively pulling.
Let's first look at load balancing on the consumer side:
Load balancing on the consumer side
The consumer runs a RebalanceService thread that balances the load across all the queues under a topic: it iterates over all the topics the consumer subscribes to, then for each topic fetches all of that topic's queues and all the consumers in the same consumer group, and finally assigns queues to consumers according to a specific allocation strategy. Built-in strategies include average allocation, consumer-side configuration, and others.
As the figure above shows, with 5 queues and 2 consumers, the first consumer consumes 3 queues and the second consumes 2. This is the average-allocation strategy, which works much like paging: all the queues under the topic are the records, the number of consumers corresponds to the total number of pages, and the records on each page correspond to the queues a given consumer will consume.
This strategy achieves roughly even consumption, and the design also allows consumers to be scaled out horizontally to increase consumption capacity; a sketch of the allocation follows below.
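A minimal sketch of the average-allocation idea following the paging analogy above (RocketMQ's built-in AllocateMessageQueueAveragely handles further details, such as consistently sorting the consumer IDs):

    // Average allocation of queues to consumers; both lists must be sorted identically on every
    // consumer so that all consumers independently compute the same assignment.
    import java.util.ArrayList;
    import java.util.List;

    class AverageAllocator {
        static <Q> List<Q> allocate(String currentConsumerId, List<String> allConsumerIds, List<Q> allQueues) {
            int index = allConsumerIds.indexOf(currentConsumerId);   // which "page" is mine?
            List<Q> assigned = new ArrayList<>();
            if (index < 0 || allQueues.isEmpty()) {
                return assigned;
            }
            int consumerCount = allConsumerIds.size();
            int mod = allQueues.size() % consumerCount;
            // Consumers before the remainder boundary get one extra queue (5 queues, 2 consumers -> 3 and 2).
            int size = allQueues.size() / consumerCount + (index < mod ? 1 : 0);
            int start = index * (allQueues.size() / consumerCount) + Math.min(index, mod);
            for (int i = 0; i < size && start + i < allQueues.size(); i++) {
                assigned.add(allQueues.get(start + i));
            }
            return assigned;
        }
    }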
Push mode on the consumer side is implemented with long polling, as shown in the figure below:
Push mode schematic
The consumer periodically sends a pull request to the broker. When the broker receives the request, if there are messages available it returns them immediately, and the consumer then invokes the listener method the application has registered. If the message queue has no data when the pull request arrives, the broker holds (blocks) the request on the server side and only responds once data arrives or the request times out.
Of course, to avoid blocking the consumer itself, the consumer uses a dedicated thread that takes PullRequests from a blocking queue (LinkedBlockingQueue<PullRequest>) and sends them to the broker to pull messages. On the broker side, when a PullRequest arrives and no message is available, the request is put into a ConcurrentHashMap cache; at startup, the broker launches a thread that keeps checking and removing PullRequests from this map until there is data to return.
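From the application's point of view, all of this long-polling machinery is hidden behind the push-style consumer API. A minimal sketch of registering a listener, assuming the 3.x com.alibaba.rocketmq client used elsewhere in this article; the topic, group, and namesrv address are placeholders:

    // Minimal push-mode consumer; the long polling described above happens inside DefaultMQPushConsumer.
    import java.util.List;
    import com.alibaba.rocketmq.client.consumer.DefaultMQPushConsumer;
    import com.alibaba.rocketmq.client.consumer.listener.ConsumeConcurrentlyContext;
    import com.alibaba.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
    import com.alibaba.rocketmq.client.consumer.listener.MessageListenerConcurrently;
    import com.alibaba.rocketmq.common.message.MessageExt;

    public class PushConsumerExample {
        public static void main(String[] args) throws Exception {
            DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("ConsumerGroupA");
            consumer.setNamesrvAddr("127.0.0.1:9876");
            // Subscribe to all tags of the topic ("*"); a tag expression such as "TagA || TagB" also works.
            consumer.subscribe("TopicTest1", "*");
            consumer.registerMessageListener(new MessageListenerConcurrently() {
                @Override
                public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> msgs,
                                                                ConsumeConcurrentlyContext context) {
                    for (MessageExt msg : msgs) {
                        // Business processing here should be idempotent, as discussed above.
                        System.out.printf("received %s%n", msg.getMsgId());
                    }
                    // Returning RECONSUME_LATER instead would ask the broker to redeliver later.
                    return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
                }
            });
            consumer.start();
        }
    }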
VII. Other Features of RocketMQ

The six features above are the main ones; to really understand them you still need to read a good deal of source code and apply them in practice. Beyond what has been covered, RocketMQ also supports:
- scheduled (delayed) messages;
- flush-to-disk strategies;
- master-slave synchronization strategies for messages: synchronous double write and asynchronous replication;
- massive message accumulation;
- efficient communication;
- ...
Many of the design ideas and solutions involved are worth studying in depth, for example:
- Message store design: it must support massive message accumulation and very fast queries while still guaranteeing write efficiency.
- Efficient communication component design: high throughput and millisecond-level delivery are impossible without efficient communication.
- ...

RocketMQ Best Practices

I. Producer Best Practices
1. Use one topic per application as far as possible, and distinguish message subtypes with tags, which the application can define freely. Only when the producer sets tags on its messages can consumers use tags for message filtering on the broker when subscribing.
2. Put each message's business-level unique identifier into the keys field, to make it easier to locate lost messages later. Because the server builds a hash index on keys, make sure keys are as unique as possible to avoid potential hash collisions.
3. Log every message send, whether it succeeds or fails, and be sure to log the SendResult and the key fields.
4. For applications where messages must not be lost, provide a message-resend mechanism. For example, if a send fails, store the message in a database and have a scheduled job retry the send, or trigger the resend manually.
5. If an application does not care whether a message is sent successfully, it can call sendOneway to send the message directly.

II. Consumer Best Practices
1. Make the consumption process idempotent (i.e. de-duplicate on the consumer side).
2. Use batch consumption wherever possible; it can greatly improve consumption throughput.
3. Optimize the processing of each individual message.

III. Other Configuration
In production, autoCreateTopicEnable should be turned off, i.e. set to false in the configuration file.
RocketMQ fetches routing information before sending a message. If the message's topic is new, the MQServer has not created it yet; with the option above enabled, the routing information of the default topic (RocketMQ creates a topic named TBW102 on every broker) is returned instead, and the producer picks one broker to send to. The chosen broker then creates the topic automatically when it stores a message whose topic does not yet exist. The consequence: all subsequent messages for that topic are sent to this one broker, and the goal of load balancing is lost.
Therefore, with RocketMQ's current design, it is recommended to turn off automatic topic creation and create topics manually, sizing them according to the expected message volume, for example as sketched below.
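A minimal sketch of the corresponding broker configuration; the file name is an assumption, and topics are then created by hand (for example with the mqadmin updateTopic command shipped with RocketMQ), with queue counts chosen to match the expected message volume:

    # Fragment of the broker configuration (commonly broker.conf; the exact file depends on your deployment)
    autoCreateTopicEnable=false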
RocketMQ Design

RocketMQ's design assumptions:
Every PC may go down and stop providing service.
Any cluster may run out of capacity to handle the load.
The worst is sure to happen.
Intranet environments require low latency to provide the best user experience
RocketMQ's key design points:
Distributed clustering
Strong data security
Massive data accumulation
Millisecond-level delivery latency (push-pull model)
These are the assumptions RocketMQ was designed under and the results it aims to achieve. I think these assumptions apply to the design of any system. As the services in our systems grow, every developer should keep asking: does my program have a single point of failure? How will it recover if it goes down? Can it scale horizontally? Are its external interfaces efficient enough? Is the data it manages safe enough? ... Constraining your own designs with questions like these is how you end up writing efficient and robust programs.
Author: Chen Chuan
Link: http://www.jianshu.com/p/453c6e7ff81c
Source: Jianshu