Leaky bucket and token bucket algorithms: improving the stability of high-concurrency systems through traffic shaping and rate limiting

Source: Internet
Author: User
Tags: message queue, semaphore

In big-data, high-concurrency systems, a service or interface often becomes unavailable under a sudden burst of requests, and can even trigger a chain reaction that brings down the entire system. In such cases you need rate limiting: once requests reach a certain concurrency or rate, new requests are made to wait, are queued, are degraded, or are rejected outright. The two most common rate-limiting algorithms are the leaky bucket algorithm and the token bucket algorithm, which are the focus of this article.


First, the concepts of the leaky bucket and token bucket algorithms

Leaky bucket algorithm (Leaky Bucket): its main purpose is to control the rate at which data is injected into the network and to smooth out bursts of traffic. The leaky bucket algorithm provides a mechanism by which bursty traffic can be shaped into a steady flow. The schematic diagram of the leaky bucket algorithm is as follows:


Requests first enter the leaky bucket, and the bucket leaks (i.e., processes requests) at a fixed rate. When requests arrive too fast, the bucket fills up and the excess overflows and is discarded. As you can see, the leaky bucket algorithm forcibly limits the data transmission rate.
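The mechanism above can be sketched in a few lines of Java (an illustrative sketch, not production code; the LeakyBucket class name and the millisecond-based leak rate are our own choices):

```java
// A minimal leaky-bucket sketch. Each request adds one unit of "water";
// water leaks out at a fixed rate, so the processing rate is capped, and
// requests that would overflow the bucket are rejected.
public class LeakyBucket {
    private final long capacity;          // maximum water the bucket holds
    private final double leakRatePerMs;   // how fast water leaks out
    private double water = 0;             // current water level
    private long lastLeakTime = System.currentTimeMillis();

    public LeakyBucket(long capacity, double leakRatePerMs) {
        this.capacity = capacity;
        this.leakRatePerMs = leakRatePerMs;
    }

    // Try to add one request's worth of water; false means overflow (reject).
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Leak water proportional to the elapsed time, never below empty.
        water = Math.max(0, water - (now - lastLeakTime) * leakRatePerMs);
        lastLeakTime = now;
        if (water + 1 <= capacity) {
            water += 1;
            return true;
        }
        return false;
    }
}
```

Because the leak rate is constant, downstream consumers never see more than that rate, no matter how bursty the inflow is.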


Token bucket algorithm (Token Bucket): one of the most commonly used algorithms for network traffic shaping and rate limiting. Typically, the token bucket algorithm is used to control the amount of data sent to the network while still allowing bursts. The token bucket algorithm diagram is as follows:


A fixed-size bucket is filled with tokens at a constant rate. If tokens are not consumed, or are consumed more slowly than they are generated, they accumulate until the bucket is full; any further tokens overflow and are discarded. The number of tokens stored in the bucket therefore never exceeds the bucket's size.
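This, too, can be sketched briefly in Java (an illustrative sketch; the TokenBucket class name and per-millisecond refill rate are our own choices):

```java
// A minimal token-bucket sketch. Tokens are refilled at a constant rate up
// to the bucket size; a request consumes one token and is rejected when none
// are available. Because tokens accumulate, bursts up to `capacity` pass.
public class TokenBucket {
    private final long capacity;        // maximum number of tokens
    private final double refillPerMs;   // tokens added per millisecond
    private double tokens;
    private long lastRefill = System.currentTimeMillis();

    public TokenBucket(long capacity, double refillPerMs) {
        this.capacity = capacity;
        this.refillPerMs = refillPerMs;
        this.tokens = capacity;         // start full so a burst can go through
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Add tokens for the elapsed time, never exceeding the bucket size.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerMs);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}
```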


Second, the differences between the two algorithms

The main difference is that the leaky bucket algorithm forcibly limits the transmission rate, while the token bucket algorithm limits the average transmission rate but also permits a degree of burstiness. With a token bucket, as long as tokens remain in the bucket, data may be sent in a burst until the user-configured threshold is reached, which makes it well suited to traffic with bursty characteristics.


Third, using Guava RateLimiter for rate-limiting control

Guava is a Java extension library provided by Google; its rate-limiting utility class RateLimiter uses the token bucket algorithm. Conceptually, a RateLimiter distributes permits at a configurable rate; if necessary, each acquire() blocks the current thread until a permit is available, and once a permit has been acquired it does not need to be released. Put simply, RateLimiter throws tokens into a bucket at a fixed frequency, and a thread must obtain a token before it can proceed. For example, if you want your application's QPS to stay below 1000, create a RateLimiter with a rate of 1000 and it will put 1000 tokens into the bucket per second. Suppose we need to process a list of tasks but do not want to submit more than two tasks per second; we can then use the following approach:
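A sketch of that approach, assuming the Guava dependency is on the classpath (the TaskSubmitter class name and the use of an Executor are illustrative choices, not part of Guava):

```java
import com.google.common.util.concurrent.RateLimiter;
import java.util.List;
import java.util.concurrent.Executor;

// Submit a list of tasks, but at most two per second.
public class TaskSubmitter {
    // RateLimiter.create(2.0) issues 2 permits per second.
    private final RateLimiter rateLimiter = RateLimiter.create(2.0);

    public void submitTasks(List<Runnable> tasks, Executor executor) {
        for (Runnable task : tasks) {
            rateLimiter.acquire();     // blocks until a permit is available
            executor.execute(task);
        }
    }
}
```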


Note that the number of permits requested never affects the throttling of the request itself (calling acquire(1) and calling acquire(1000) have the same effect on the calling request), but it does affect the throttling of the next request. That is, if an expensive task arrives at an idle RateLimiter, it is granted immediately, but the next request experiences extra delay, paying the cost of the expensive task. Also note that RateLimiter does not provide any fairness guarantee.
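This "pay for the previous request" behavior can be observed directly, since acquire() returns the time in seconds the call spent waiting (again assuming Guava on the classpath; the rates chosen here are arbitrary):

```java
import com.google.common.util.concurrent.RateLimiter;

// An expensive acquire on an idle limiter is granted at once, but the
// following request absorbs the wait.
public class ExpensiveAcquireDemo {
    public static void main(String[] args) {
        RateLimiter limiter = RateLimiter.create(1000.0); // 1000 permits/second
        double first = limiter.acquire(1000);  // idle limiter: granted at once
        double second = limiter.acquire(1);    // waits ~1s, paying for the burst
        System.out.printf("first waited %.2fs, second waited %.2fs%n",
                first, second);
    }
}
```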


Fourth, using Semaphore for concurrency control

The Semaphore class in the Java concurrency library makes semaphore-based control easy: a Semaphore limits the number of threads that can access a resource at the same time. A thread obtains a permit with acquire(), waiting if none is available, and returns it with release(). A Semaphore initialized with a single permit can serve as a mutex, with the twist that it can be "locked" by one thread and released by another, which is useful in some deadlock-recovery situations. The demo below uses a Semaphore with only 5 permits while 20 threads access the resource, acquiring and releasing access permission through acquire() and release():
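A sketch of the described demo, using java.util.concurrent.Semaphore (the cached thread pool and the 100 ms of simulated work are our own choices):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

// 20 tasks compete for a resource, but the semaphore admits at most 5 at once.
public class SemaphoreDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();
        final Semaphore semaphore = new Semaphore(5); // 5 permits
        for (int i = 0; i < 20; i++) {
            final int taskId = i;
            pool.execute(() -> {
                try {
                    semaphore.acquire();            // wait for a permit
                    System.out.println("Task " + taskId + " is accessing the resource");
                    Thread.sleep(100);              // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    semaphore.release();            // return the permit
                }
            });
        }
        pool.shutdown();
    }
}
```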




Finally, there are many other ways to implement rate limiting and flow control for different scenarios, such as counter-based control with AtomicLong, or using an MQ message queue to shave traffic peaks.
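For illustration, the counter-based approach mentioned above can be sketched with AtomicLong (the CounterLimiter class name and the enter()/exit() method names are our own):

```java
import java.util.concurrent.atomic.AtomicLong;

// A minimal counter-based limiter: allow at most `limit` concurrent
// requests by incrementing a counter on entry and decrementing on exit.
public class CounterLimiter {
    private final long limit;
    private final AtomicLong count = new AtomicLong(0);

    public CounterLimiter(long limit) {
        this.limit = limit;
    }

    // Returns false (rejects the request) when the limit is exceeded.
    public boolean enter() {
        if (count.incrementAndGet() > limit) {
            count.decrementAndGet(); // over the limit: roll back and reject
            return false;
        }
        return true;
    }

    public void exit() {
        count.decrementAndGet();
    }
}
```

Unlike the bucket algorithms, a plain counter bounds concurrency rather than rate, so it cannot smooth bursts on its own.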


