Implementing a concurrency-safe token bucket for multiple goroutines

Objective

A token bucket is a common algorithm for controlling the rate of traffic. Wikipedia describes the principle as follows:

    • r tokens are added to the bucket every second, i.e. one token is added every 1/r seconds.

    • The bucket holds at most b tokens. If the bucket is already full when a token arrives, that token is discarded.

    • When an n-byte packet arrives, n tokens are removed from the bucket and the packet is passed on.

    • If fewer than n tokens are available, no tokens are removed, and the packet is either cached or discarded.
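
To make the rules concrete before worrying about concurrency, here is a minimal, single-goroutine sketch of the algorithm. It uses a lazy refill-on-demand formulation rather than the ticker-driven design discussed below, and none of its names come from the article's implementation; it is only an illustration.

type bucket struct {
    cap, avail int64     // capacity and current token count
    rate       int64     // tokens added per second
    lastRefill time.Time // point up to which tokens have been credited
}

// refill credits the whole tokens accumulated since lastRefill,
// discarding any excess above the capacity.
func (b *bucket) refill(now time.Time) {
    added := int64(now.Sub(b.lastRefill).Seconds() * float64(b.rate))
    if added <= 0 {
        return
    }
    b.avail += added
    if b.avail > b.cap {
        b.avail = b.cap // a full bucket discards new tokens
    }
    // Advance lastRefill only by the time actually converted into tokens,
    // so fractional progress is not lost between calls.
    b.lastRefill = b.lastRefill.Add(time.Duration(added) * time.Second / time.Duration(b.rate))
}

// take consumes n tokens if enough are available; otherwise it takes
// nothing, and the caller decides whether to cache or drop the packet.
func (b *bucket) take(n int64) bool {
    b.refill(time.Now())
    if b.avail >= n {
        b.avail -= n
        return true
    }
    return false
}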

Here we use Go to implement, based on the description above, a token bucket that is safe to use concurrently from multiple goroutines. The repository with the complete implementation is at: https://github.com/DavidCai19 ....

Basic design

The basic design is a token bucket struct. Each newly created instance starts a goroutine that, like a daemon, puts a token into the bucket at a fixed interval:

type TokenBucket struct {
    interval time.Duration // refill interval
    ticker   *time.Ticker  // timer driving the refills
    // ...
    cap   int64 // total capacity of the bucket
    avail int64 // tokens currently in the bucket
}

func (tb *TokenBucket) adjustDaemon() {
    for now := range tb.ticker.C {
        var _ = now

        if tb.avail < tb.cap {
            tb.avail++
        }
    }
}

func New(interval time.Duration, cap int64) *TokenBucket {
    tb := &TokenBucket{
        // ...
    }

    go tb.adjustDaemon()

    return tb
}
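
The constructor body is elided above. Purely as an illustration, one way it might be filled in with the fields shown so far (whether the bucket starts empty or full is an assumption, and the lock and queue fields added in later sections would be initialized here as well):

func New(interval time.Duration, cap int64) *TokenBucket {
    tb := &TokenBucket{
        interval: interval,
        ticker:   time.NewTicker(interval),
        cap:      cap,
        avail:    0, // assumption: start with an empty bucket
    }

    go tb.adjustDaemon()

    return tb
}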

The struct will eventually provide the following APIs:

    • TryTake(count int64) bool: try to take count tokens from the bucket. Returns immediately, with the return value indicating whether the take succeeded.

    • Take(count int64): take count tokens from the bucket; if the bucket currently holds fewer than count tokens, block until enough tokens are available and then take them.

    • TakeMaxDuration(count int64, max time.Duration) bool: like Take, but with a timeout max. If the timeout expires, stop waiting and return immediately, with the return value indicating whether the take succeeded.

    • Wait(count int64): block until the number of tokens in the bucket is greater than or equal to count.

    • WaitMaxDuration(count int64, max time.Duration) bool: like Wait, but with a timeout max.
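
To make the differences between these calls concrete, a hypothetical caller might look like this (the bucket is created with the New shown earlier; the numbers are arbitrary):

tb := New(10*time.Millisecond, 1000) // one token every 10ms, capacity 1000

if tb.TryTake(10) {
    // got 10 tokens immediately
}

tb.Take(10) // blocks until 10 tokens could be taken

if tb.TakeMaxDuration(10, time.Second) {
    // got 10 tokens within one second
}

tb.Wait(10) // returns once the bucket holds at least 10 tokens

if tb.WaitMaxDuration(10, time.Second) {
    // the bucket held at least 10 tokens within one second
}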

TryTake: a single, non-blocking attempt

TryTake(count int64) bool, a one-shot attempt that returns immediately, is the simplest to implement. The only thing to watch out for is that we are in a multi-goroutine environment and the tokens are a shared resource; to prevent race conditions, the simplest solution is to guard every access with a lock. Go's sync.Mutex type provides such a lock.

type TokenBucket struct {
    // ...
    tokenMutex *sync.Mutex // lock protecting the tokens
}

func (tb *TokenBucket) tryTake(count int64) bool {
    tb.tokenMutex.Lock() // accessing the shared tokens: take the lock
    defer tb.tokenMutex.Unlock()

    if count <= tb.avail {
        tb.avail -= count
        return true
    }

    return false
}

func (tb *TokenBucket) adjustDaemon() {
    for now := range tb.ticker.C {
        var _ = now

        tb.tokenMutex.Lock() // accessing the shared tokens: take the lock
        if tb.avail < tb.cap {
            tb.avail++
        }
        tb.tokenMutex.Unlock()
    }
}
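
As a rough check of the concurrency safety, a hypothetical caller (not part of the article's code, assumed to live in the same package and to import fmt, sync, sync/atomic and time) could hammer tryTake from many goroutines; the mutex guarantees avail never goes negative:

func main() {
    tb := New(100*time.Millisecond, 10)

    var wg sync.WaitGroup
    var granted int64

    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            if tb.tryTake(1) {
                atomic.AddInt64(&granted, 1) // count successful takes safely
            }
        }()
    }

    wg.Wait()
    fmt.Printf("granted %d of 100 requests\n", granted)
}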

Take and TakeMaxDuration: blocking takes

For blocking takes such as Take(count int64) and TakeMaxDuration(count int64, max time.Duration) bool, the situation is different:

    1. Since both operations have to wait to be notified, the original scheme of actively taking the lock and checking the shared resource is no longer appropriate.

    2. Because several operations may be waiting at the same time, we need a first-come-first-served order to avoid confusion: the operation that started waiting first gets its tokens first.

To solve the first problem we can use the second mechanism Go provides for sharing resources between goroutines: channels. A channel can be used in both directions and matches exactly the scenario where we need to wait passively for a notification. For the second problem, we maintain a queue of pending operations; here we use list.List to simulate a FIFO queue. Note that the queue itself is also a shared resource, so we need a lock for it as well.

Following this idea, we first implement Take(count int64):

type TokenBucket struct {
    // ...
    waitingQuqueMutex *sync.Mutex // lock protecting the waiting queue
    waitingQuque      *list.List  // queue of waiting operations
}

type waitingJob struct {
    ch    chan struct{}
    count int64
}

func (tb *TokenBucket) Take(count int64) {
    w := &waitingJob{
        ch:    make(chan struct{}),
        count: count,
    }

    tb.addWaitingJob(w) // enqueue w; the queue lock must be taken.

    <-w.ch
    close(w.ch)
}

func (tb *TokenBucket) adjustDaemon() {
    var waitingJobNow *waitingJob

    for now := range tb.ticker.C {
        var _ = now

        tb.tokenMutex.Lock() // accessing the shared tokens: take the lock

        if tb.avail < tb.cap {
            tb.avail++
        }

        element := tb.getFrontWaitingJob() // peek at the head of the queue; the queue lock must be taken.

        if element != nil {
            if waitingJobNow == nil {
                waitingJobNow = element.Value.(*waitingJob)
                tb.removeWaitingJob(element) // dequeue the head; the queue lock must be taken.
            }

            if tb.avail >= waitingJobNow.count {
                tb.avail -= waitingJobNow.count
                waitingJobNow.ch <- struct{}{}

                waitingJobNow = nil
            }
        }

        tb.tokenMutex.Unlock()
    }
}
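
The helpers addWaitingJob, getFrontWaitingJob and removeWaitingJob are not shown in this article. A minimal sketch of what they could look like, each guarding the queue with its own lock (these bodies are assumptions, not the repository's code):

func (tb *TokenBucket) addWaitingJob(w *waitingJob) {
    tb.waitingQuqueMutex.Lock()
    defer tb.waitingQuqueMutex.Unlock()

    tb.waitingQuque.PushBack(w)
}

func (tb *TokenBucket) getFrontWaitingJob() *list.Element {
    tb.waitingQuqueMutex.Lock()
    defer tb.waitingQuqueMutex.Unlock()

    return tb.waitingQuque.Front() // nil if no operation is waiting
}

func (tb *TokenBucket) removeWaitingJob(e *list.Element) {
    tb.waitingQuqueMutex.Lock()
    defer tb.waitingQuqueMutex.Unlock()

    tb.waitingQuque.Remove(e)
}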

Next we implement TakeMaxDuration(count int64, max time.Duration) bool. For the timeout we can use Go's select keyword together with a timer channel (time.After), and add an abandoned flag field to waitingJob that marks the operation as discarded once it has timed out. Checking for abandoned operations happens in adjustDaemon, while setting the flag happens in the select inside TakeMaxDuration. To avoid introducing another race, the actual deduction of tokens is handed back from adjustDaemon to the select through the channel, with the daemon blocking until it is done, so that the deduction still runs under the protection of the token lock:

func (tb *TokenBucket) TakeMaxDuration(count int64, max time.Duration) bool {
    w := &waitingJob{
        ch:        make(chan struct{}),
        count:     count,
        abandoned: false, // flag marking the operation as abandoned after a timeout
    }
    defer close(w.ch)

    tb.addWaitingJob(w)

    select {
    case <-w.ch:
        tb.avail -= count // safe: adjustDaemon holds the token lock while it waits for our reply
        w.ch <- struct{}{}
        return true
    case <-time.After(max):
        w.abandoned = true
        return false
    }
}

func (tb *TokenBucket) adjustDaemon() {
    // ...
    if element != nil {
        if waitingJobNow == nil || waitingJobNow.abandoned {
            waitingJobNow = element.Value.(*waitingJob)
            tb.removeWaitingJob(element)
        }

        if tb.avail >= waitingJobNow.count && !waitingJobNow.abandoned {
            waitingJobNow.ch <- struct{}{}
            <-waitingJobNow.ch

            waitingJobNow = nil
        }
    }
    // ...
}
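
As a hypothetical usage example (not from the article), a caller that gives up after a deadline:

tb := New(10*time.Millisecond, 1000)

if tb.TakeMaxDuration(50, 2*time.Second) {
    // proceed: 50 tokens were deducted within the deadline
} else {
    // timed out: the job was marked abandoned and no tokens were taken
}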

Summary

Finally, let's summarize a few key points:

    • For access to shared resources, you can use either locks or channels; choose whichever fits the scenario best.

    • Channels let you wait passively for a shared resource, while locks are simpler for direct access.

    • When multiple operations wait asynchronously, a queue can be used to keep them in order.

    • Under the protection of a lock, a channel can be used to hand work on the shared resource back and forth like a small processing pipeline, combining the advantages of both; this is very useful.
