Interpretation of high throughput rates and high latency for batch processing

Source: Internet
Author: User

Many systems let users choose between single-message processing and batch processing. In Storm, for example, an application can be written against either of two APIs, core and Trident. The difference is that the former processes one tuple at a time, while the latter groups multiple tuples into a batch and processes one batch at a time.

Because of the difference between the two processing modes, their performance characteristics differ as well, notably in throughput and latency. A typical benchmark result is shown below; for details, see https://github.com/ptgoetz/storm-benchmark.

Test environment: 5 nodes on AWS (1 ZooKeeper, 1 Nimbus, 3 Supervisors; general-purpose configuration)

Test workload: memory copy

                     Throughput (tuples/sec)   Latency (ms)
Storm core API       approx. 150k              approx. 80
Storm Trident API    approx. 300k              approx. 250

As the results show, Trident's throughput is roughly double that of the core API, while its latency is roughly three times higher (about 250 ms versus 80 ms). In actual development, the core API is used rather more often.
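The ratios can be checked directly from the approximate benchmark numbers in the table above:

```python
# Approximate figures from the benchmark table:
# throughput in tuples/sec, latency in ms.
core_tp, core_lat = 150_000, 80
trident_tp, trident_lat = 300_000, 250

print(f"throughput ratio: {trident_tp / core_tp:.1f}x")  # 2.0x
print(f"latency ratio:    {trident_lat / core_lat:.1f}x")  # ~3.1x
```

So batching doubles throughput here, at the cost of roughly tripling latency.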

The above is just one instance of the general phenomenon that "high throughput is often accompanied by high latency." Below, I explain this phenomenon theoretically and abstractly, independent of any particular technology.

A batch is made up of multiple tuples, so processing n tuples one at a time can be compared abstractly with processing them as a single batch of size n.

In general, processing one batch takes longer than processing one tuple, but less than n times as long, because processing systems typically apply many batch-level optimizations that improve efficiency. Similarly, the interval between consecutive batches is larger than the interval between consecutive tuples, but smaller than n times that interval. As a result, the latency of a batch is higher than that of a single tuple, yet the overall throughput increases; in other words, high throughput is often accompanied by high latency.
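The argument above can be sketched with a toy model. The timings are hypothetical: 1 ms to process one tuple, and 30 ms to process a batch of 100 tuples (less than 100 × 1 ms, reflecting the batch optimizations described above):

```python
def tuple_mode(t_tuple):
    """Process one tuple at a time; t_tuple is seconds per tuple."""
    throughput = 1.0 / t_tuple  # tuples per second
    latency = t_tuple           # each tuple waits only its own processing time
    return throughput, latency

def batch_mode(t_batch, n):
    """Process n tuples as one batch; assumes t_batch < n * t_tuple."""
    throughput = n / t_batch    # tuples per second
    latency = t_batch           # every tuple in the batch waits for the whole batch
    return throughput, latency

# Hypothetical timings: 1 ms per tuple; a 100-tuple batch takes 30 ms.
tp1, lat1 = tuple_mode(0.001)
tpn, latn = batch_mode(0.030, n=100)

print(f"tuple mode: {tp1:.0f} tuples/s, latency {lat1 * 1000:.0f} ms")
print(f"batch mode: {tpn:.0f} tuples/s, latency {latn * 1000:.0f} ms")
```

Under these assumed numbers, batch mode yields both higher throughput (about 3333 vs. 1000 tuples/s) and higher latency (30 ms vs. 1 ms), which is exactly the trade-off in the argument above.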

Given this analysis, the test results above are not hard to understand; they are simply one instance of this general rule.
