Guaranteeing message processing (Storm's message processing guarantee mechanism)


Storm guarantees that every message emitted by a spout will be fully processed. This article describes how Storm implements this guarantee and how we, as Storm users, benefit from Storm's reliability.


What it means for a message to be "fully processed"

A tuple emitted by a spout triggers the generation of more tuples downstream. Consider this streaming word-count topology:

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("sentences", new KestrelSpout("kestrel.backtype.com",
                                               22133,
                                               "sentence_queue",
                                               new StringScheme()));
builder.setBolt("split", new SplitSentence(), 10)
       .shuffleGrouping("sentences");
builder.setBolt("count", new WordCount(), 20)
       .fieldsGrouping("split", new Fields("word"));
This topology reads sentences from a Kestrel queue, splits each sentence into words, and finally emits the running count of each word. A tuple emitted by the spout triggers the generation of more tuples downstream: each word in the sentence becomes a tuple, and each updated word count becomes a new tuple. Together they form a message tree, or "tuple tree":

[Figure: the tuple tree produced by a single sentence tuple]
Storm considers a spout tuple (the root node of the tree) "fully processed" when the tuple tree has been fully expanded and every message in the tree has been processed. Conversely, if the messages in the tree are not all processed within a time limit, Storm considers the spout tuple failed. The time limit can be set with Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS and defaults to 30 seconds.
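For reference, the timeout can also be set programmatically through the Config helper. A minimal sketch (the conf object would then be passed to StormSubmitter or LocalCluster as usual):

Config conf = new Config();
conf.setMessageTimeoutSecs(60); // raise the tuple-tree timeout from the 30-second default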

What happens when a message is fully processed or fails to be fully processed?

To understand this, let's first look at the life cycle of a tuple emitted by a spout. Here is the interface that spouts implement:

public interface ISpout extends Serializable {
    void open(Map conf, TopologyContext context, SpoutOutputCollector collector);
    void close();
    void nextTuple();
    void ack(Object msgId);
    void fail(Object msgId);
}
First, Storm requests a tuple from the spout by calling its nextTuple method. The spout uses the SpoutOutputCollector instance provided in the open method to emit a tuple to one of its output streams (SpoutOutputCollector.emit()). When emitting a tuple, the spout provides a "message ID" that is later used to identify that tuple. For example, KestrelSpout reads a message from the Kestrel queue and emits a tuple using Kestrel's ID for that message as the tuple's "message ID". With a SpoutOutputCollector instance named _collector, the emit looks like this:

_collector.emit(new Values("field1", "field2", 3), msgId);
Next, the tuple is sent to the consuming bolts, and Storm tracks the message tree that has the tuple as its root node. If Storm detects that the tuple is "fully processed", it calls the ack method, passing the message ID, on the spout task that produced the tuple. Similarly, if the tuple is not fully processed within the time limit, Storm calls the fail method. Note that a spout is designed to run concurrently, so it may execute as multiple spout tasks; ack or fail is always invoked on the exact task that emitted the tuple, never on a different task of the same spout.

Let's use KestrelSpout to see what a spout must do to participate in the message processing guarantee.

When KestrelSpout takes a message from the Kestrel queue, it "opens" the message. This does not remove the message from the queue; instead, the message enters a "pending" state, waiting for confirmation that it has actually been processed. Messages in the pending state are not sent to other consumers of the queue, and if the client disconnects, all of its pending messages are returned to the normal state in the queue. When a message is opened, Kestrel gives the client the message data together with a unique ID for the message. KestrelSpout uses this ID as the tuple's "message ID".

When KestrelSpout's ack or fail is called, it sends an ack or fail message carrying the message ID back to Kestrel, so that Kestrel either removes the message from the queue or returns it to the normal state to await the next "open".
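Putting the pattern together, here is a minimal sketch of a reliable spout built on a pending-style queue. It is not the real KestrelSpout; QueueClient and QueueMessage are hypothetical stand-ins for a real queue client:

public class QueueSpout extends BaseRichSpout {
    SpoutOutputCollector _collector;
    QueueClient _client; // hypothetical client for a queue with a "pending" state

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        _collector = collector;
        _client = new QueueClient("kestrel.backtype.com", 22133);
    }

    public void nextTuple() {
        QueueMessage msg = _client.openNext(); // moves the message to "pending"
        if (msg != null) {
            // use the queue's unique message ID as the tuple's message ID
            _collector.emit(new Values(msg.body()), msg.id());
        }
    }

    public void ack(Object msgId) {
        _client.delete(msgId);  // fully processed: remove the message from the queue
    }

    public void fail(Object msgId) {
        _client.release(msgId); // return the message to normal for a later "open"
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }
}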

Storm's Reliability API

As users, we need to do two things to use Storm's reliability capabilities: 1. tell Storm whenever we create a new link in the message tree; 2. tell Storm whenever we finish processing an individual tuple (any tuple produced along the way, not just the root node). With these two pieces of information, Storm can detect when a tuple tree is fully processed and invoke the corresponding ack or fail. The Storm API provides a concise way to accomplish both tasks.

Specifying a link in the tuple tree is called "anchoring". Anchoring is done at the moment a new tuple is emitted. Let's look at the bolt below, which splits a tuple containing a sentence into multiple word tuples:

public class SplitSentence extends BaseRichBolt {
    OutputCollector _collector;

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        _collector = collector;
    }

    public void execute(Tuple tuple) {
        String sentence = tuple.getString(0);
        for (String word : sentence.split(" ")) {
            _collector.emit(tuple, new Values(word));
        }
        _collector.ack(tuple);
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
Each word tuple is anchored by passing the input tuple as the first argument to _collector.emit(tuple, new Values(word)). Because the word tuple is anchored, the spout tuple at the root of the tree will be replayed if the word tuple fails downstream.

If instead we emit with _collector.emit(new Values(word)), the tuple is unanchored: if it fails downstream, the root tuple will not be replayed. We can choose whichever behavior fits the fault-tolerance needs of the application.

An output tuple can be anchored to more than one input tuple, which is useful when doing streaming joins or aggregations. A multi-anchored tuple that fails to process will cause multiple spout tuples to be replayed. Multi-anchoring is done like this:

List<Tuple> anchors = new ArrayList<Tuple>();
anchors.add(tuple1);
anchors.add(tuple2);
_collector.emit(anchors, new Values(1, 2, 3));
Multi-anchoring adds the output tuple to multiple tuple trees. Strictly speaking, the structure is then no longer a tree but a directed acyclic graph (a tree is just a special case of a DAG), so the result can be called a tuple DAG:

Storm's implementation supports DAGs as well as trees; before release, it only worked for trees, which is why the name "tuple tree" stuck.

Anchoring is how we specify the structure of the tuple tree. The next part is telling Storm when the processing of an individual tuple is complete, which is done with the ack and fail methods on OutputCollector. For example, the SplitSentence bolt above acks the input tuple after emitting all of the word tuples.

We can also fail the spout tuple at the root of the tree immediately via the fail method on OutputCollector. This way, we don't have to wait for the timeout before the message is replayed.
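For example, a bolt might fail the input as soon as processing throws, rather than letting the tree time out. A sketch (process() is a hypothetical application helper, not a Storm API):

public void execute(Tuple tuple) {
    try {
        process(tuple); // hypothetical application-specific processing
        _collector.ack(tuple);
    } catch (Exception e) {
        _collector.fail(tuple); // replay immediately instead of waiting for the timeout
    }
}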

Every tuple must be acked or failed. Storm uses memory to track each tuple, so if we do not ack or fail every tuple, the tasks responsible for tracking them will eventually run out of memory.

Most bolts follow a common pattern: read an input tuple, emit tuples based on it, and ack the input at the end of the execute method. Such bolts are typically filters or simple functions. Storm has an interface, IBasicBolt, that encapsulates this pattern. Rewriting SplitSentence in the BasicBolt style:

public class SplitSentence extends BaseBasicBolt {
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String sentence = tuple.getString(0);
        for (String word : sentence.split(" ")) {
            collector.emit(new Values(word));
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
This implementation is simpler than the previous one. Tuples emitted through BasicOutputCollector are automatically anchored to the input tuple (we no longer pass the tuple argument), and the input tuple is acked automatically when the execute method completes.

In contrast, bolts that do aggregations or joins must delay acking until the result has been computed from a batch of tuples, and they commonly multi-anchor their output tuples. These cases fall outside what IBasicBolt can express; a sketch of the deferred-ack pattern follows.
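Here is a hedged sketch of deferred acking with multi-anchoring. The bolt and its batch size are hypothetical illustrations, not taken from Storm:

public class BatchCount extends BaseRichBolt {
    OutputCollector _collector;
    List<Tuple> _pending = new ArrayList<Tuple>();

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        _collector = collector;
    }

    public void execute(Tuple tuple) {
        _pending.add(tuple);
        if (_pending.size() >= 100) { // assumed batch size
            // multi-anchor the aggregate result to every input that contributed to it
            _collector.emit(_pending, new Values(_pending.size()));
            // ack the inputs only after the batch result has been emitted
            for (Tuple t : _pending) {
                _collector.ack(t);
            }
            _pending.clear();
        }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("count"));
    }
}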

How to make applications work correctly (avoiding repeated computation) given that tuples can be replayed

It depends on the situation. Storm 0.7.0 introduced the "transactional topologies" feature, which guarantees exactly-once messaging semantics for most computations. However, transactional topologies have since been deprecated in favor of a framework called "Trident", which is introduced later.


How Storm implements reliability efficiently

A Storm topology has a set of special "acker" tasks that track the tuple DAG for every spout tuple. When an acker sees that a DAG is complete, it sends a message to the spout task that produced the spout tuple. (You might wonder: if every tuple is acked directly, why are ackers needed? Answer: each individual ack tells an acker that one step of the processing is finished; the acker aggregates these acks and notifies the originating spout task only when the whole DAG is done.) The number of acker tasks can be set with Config.TOPOLOGY_ACKER_EXECUTORS; by default, Storm uses one acker per worker. When a topology processes a large volume of messages, this number may need to be increased.
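A sketch of raising the acker count through the Config helper (the number 8 is just an example):

Config conf = new Config();
conf.setNumAckers(8); // sets Config.TOPOLOGY_ACKER_EXECUTORS for this topology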

The best way to understand Storm's reliability implementation is to study the life cycle of tuples and the tuple DAG. When a tuple is created in a topology, whether by a spout or by a bolt, it is given a random 64-bit ID. Ackers use these IDs to track the tuple DAG of every spout tuple.

Every tuple knows the IDs of all the spout tuples whose trees it belongs to. When a bolt emits a new tuple, the spout tuple IDs from its anchors are copied into the new tuple. When a tuple is acked, it sends a message to the appropriate acker tasks describing how the tuple DAG changed. In plain terms, the message says: "I have finished processing within the tree of this spout tuple, and here are the new tuples that were anchored to me."

For example, if tuples D and E were created based on tuple C, here is how the tuple DAG changes when C is acked (the red X marks the ack):

[Figure: the tuple DAG before and after C is acked; acked tuples are marked with a red X]
Since D and E were added to the DAG at the same time C was acked, the DAG is still not fully processed.

There are a few details worth explaining. As mentioned above, you can have any number of acker tasks, which raises a question: when a tuple is acked, how does it know which acker task to send the message to?

Storm uses mod hashing to map a spout tuple ID to an acker task. As noted earlier, every tuple carries the IDs of all the spout tuples it is anchored to, so it can always compute which acker tasks to notify.
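A simplified illustration of the mapping (not Storm's actual code; spoutTupleId and numAckerTasks are assumed variables):

int ackerTask = (int) Long.remainderUnsigned(spoutTupleId, numAckerTasks);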

Another detail is how an acker task knows which spout task to send a message to.

When a spout task emits a new tuple, it sends its own task ID to the acker task responsible for that spout tuple. Thus, when the acker task determines that the tuple tree is complete, it knows which spout task to send the completion message to.

Acker tasks do not track tuple DAGs explicitly (for a large DAG, doing so could exhaust the acker tasks' memory on its own). Instead, they use a far more ingenious strategy that needs only about 20 bytes per spout tuple, regardless of the size of the DAG. This tracking algorithm is key to how Storm works and is one of its major breakthroughs.

An acker task stores a map from each spout tuple ID to a pair of values (value1, value2). Value1 is the task ID of the spout task that created the tuple, used to determine where the completion message goes. Value2 is a 64-bit number called the "ack val". The ack val represents the state of the entire DAG: it is simply the XOR of the IDs of every tuple that has been created in, or acked out of, the DAG.

When an acker task sees the ack val become 0, it knows the tuple DAG is fully processed. Because tuple IDs are random 64-bit numbers, the probability of the ack val accidentally XOR-ing to 0 before the DAG completes is vanishingly small: at 10K acks per second, it would take on the order of 50 million years for such a mistake to occur. And even if it did occur, it would only cause data loss if the tuple in question happened to fail.
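The XOR bookkeeping is easy to demonstrate in isolation. A simplified illustration (not Storm's actual code):

long ackVal = 0;
long tupleId = new java.util.Random().nextLong(); // random 64-bit tuple ID

ackVal ^= tupleId; // the tuple is anchored into the DAG
ackVal ^= tupleId; // the same tuple is later acked

// ackVal is now 0: every ID XORed in twice cancels out, so the DAG is complete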

Now let's look at how Storm avoids data loss in various failure scenarios:

• A tuple is not acked because its task died: in this case, the spout tuples at the roots of that tuple's trees will time out and be replayed.

• An acker task dies: in this case, all the spout tuples that acker was tracking will time out and be replayed.

• A spout task dies: in this case, the source the spout reads from is responsible for replaying the messages. For example, when a client disconnects, queues such as Kestrel and RabbitMQ return all pending messages to the normal state.

As you can see, Storm's reliability mechanism is completely distributed, scalable, and fault-tolerant.


Tuning Storm's reliability

Acker tasks are lightweight, so a topology does not need many of them. Their performance can be monitored through the Storm UI; if the throughput looks wrong, more acker tasks may be needed.

If reliability is not important to you, you can skip tracking tuples entirely. This halves the number of messages transferred, since normally every tuple in the tree costs an ack message. In addition, downstream tuples no longer need to carry copies of the spout tuple IDs, which reduces bandwidth usage.

There are three ways to remove reliability. The first is to set Config.TOPOLOGY_ACKER_EXECUTORS to 0. In this case, Storm calls the spout's ack method immediately after the spout emits a tuple, so the tuple tree is never tracked.

The second way is to omit the message ID. Passing null as the message ID in SpoutOutputCollector.emit turns off tracking for that spout tuple.

The third way, as mentioned earlier, is to emit downstream tuples without anchoring them. The three options are sketched below.
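Sketches of the three options side by side (the collector and variable names are assumptions):

Config conf = new Config();
conf.setNumAckers(0);                   // 1. no acker tasks; spout tuples are acked immediately

_spoutCollector.emit(new Values(data)); // 2. in a spout: emit with no message ID

_boltCollector.emit(new Values(word));  // 3. in a bolt: emit with no anchor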
