Guava RateLimiter Source Code Analysis: Rate Limiting & Degradation

Source: Internet
Author: User
Tags: rate limiting, mutex, semaphore

https://segmentfault.com/a/1190000012875897


Objective

When building a high-concurrency system, there are three sharp tools for protecting it: caching, degradation, and rate limiting. The purpose of caching is to speed up system access and increase processing capacity. Degradation temporarily shields a service when it fails or affects the core flow, and the service is re-enabled once the peak has passed or the problem is resolved. The purpose of rate limiting is to protect the system by limiting the rate of concurrent access/requests, or the number of requests within a time window; once the limit is reached, requests can be rejected, queued, made to wait, or degraded.

Common rate limiting algorithms

There are two common rate limiting algorithms: the leaky bucket algorithm and the token bucket algorithm. The idea of the leaky bucket algorithm is simple: water (requests) first flows into the leaky bucket, and the bucket drains at a fixed rate; when the inflow rate is too high, the excess overflows directly. The leaky bucket algorithm thus forcibly limits the data transmission rate.
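To make the leaky bucket idea concrete, here is a minimal sketch in Java (illustrative only; the class and field names are mine, not Guava's, and the caller supplies timestamps to keep it deterministic):

```java
// Minimal leaky-bucket sketch (illustrative, not Guava source).
// Water (requests) enters the bucket; it leaks out at a constant rate;
// a request that would overflow the bucket is rejected.
class LeakyBucket {
    private final double capacity;      // bucket size
    private final double leakRatePerMs; // constant outflow rate (drops per ms)
    private double water = 0;           // current water level
    private long lastLeakMs;

    LeakyBucket(double capacity, double leakRatePerMs, long nowMs) {
        this.capacity = capacity;
        this.leakRatePerMs = leakRatePerMs;
        this.lastLeakMs = nowMs;
    }

    /** Returns true if the request is admitted, false if the bucket would overflow. */
    boolean tryAcquire(long nowMs) {
        // Leak first: drain whatever flowed out since the last call.
        water = Math.max(0, water - (nowMs - lastLeakMs) * leakRatePerMs);
        lastLeakMs = nowMs;
        if (water + 1 <= capacity) {
            water += 1;
            return true;
        }
        return false;
    }
}
```

Note how the outflow rate is fixed regardless of how fast requests arrive, which is exactly why the leaky bucket cannot accommodate bursts.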

In many scenarios, besides limiting the average transmission rate, some degree of burst transmission must also be allowed. The leaky bucket algorithm may not be appropriate here; the token bucket algorithm is a better fit. Its principle is that the system puts tokens into a bucket at a constant rate; a request must first obtain a token from the bucket before it can be processed, and service is refused when the bucket has no tokens.
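The token bucket can be sketched the same way (again illustrative, not Guava code; names are mine). Tokens accumulate while the system is idle, which is what permits a burst:

```java
// Minimal token-bucket sketch (illustrative, not Guava source).
// Tokens are added at a constant rate up to a cap; a request must take a
// token to proceed, and tokens saved up while idle allow a limited burst.
class TokenBucket {
    private final double capacity;    // maximum tokens the bucket can hold
    private final double tokensPerMs; // refill rate
    private double tokens;
    private long lastRefillMs;

    TokenBucket(double capacity, double tokensPerMs, long nowMs) {
        this.capacity = capacity;
        this.tokensPerMs = tokensPerMs;
        this.tokens = capacity; // start full so an initial burst is allowed
        this.lastRefillMs = nowMs;
    }

    boolean tryAcquire(long nowMs) {
        // Refill lazily based on elapsed time, capped at capacity.
        tokens = Math.min(capacity, tokens + (nowMs - lastRefillMs) * tokensPerMs);
        lastRefillMs = nowMs;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}
```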

For more about the leaky bucket and token bucket algorithms, see http://blog.csdn.net/charlesl ...

Semaphore

The semaphore is a very important concept in operating systems, and the Semaphore class in Java's concurrency library makes semaphore control easy. A Semaphore controls how many threads may access a resource simultaneously: acquire() obtains a permit, waiting if none is available, and release() releases one.

The essence of a semaphore is to control how many threads may access a resource at the same time. To some extent this controls the resource's access frequency, but not precisely.

@Test
fun semaphoreTest() {
    val semaphore = Semaphore(2)

    (1..10).map {
        thread(true) {
            semaphore.acquire()

            println("$it\t${Date()}")
            Thread.sleep(1000)

            semaphore.release()
        }
    }.forEach { it.join() }
}

The above example creates a semaphore with a concurrency limit of 2, and its output is as follows:

1	Wed Jan 17 10:31:49 CST 2018
2	Wed Jan 17 10:31:49 CST 2018
3	Wed Jan 17 10:31:50 CST 2018
4	Wed Jan 17 10:31:50 CST 2018
5	Wed Jan 17 10:31:51 CST 2018
6	Wed Jan 17 10:31:51 CST 2018
7	Wed Jan 17 10:31:52 CST 2018
8	Wed Jan 17 10:31:52 CST 2018
9	Wed Jan 17 10:31:53 CST 2018
10	Wed Jan 17 10:31:53 CST 2018

It can be clearly seen that at most 2 threads output at the same time. Although a semaphore can control a resource's access frequency to some extent, it cannot control it precisely.

RateLimiter

Google's open-source toolkit Guava provides a rate limiting utility class, RateLimiter, which implements traffic limiting based on the token bucket algorithm and is very convenient to use.

@Test
fun rateLimiterTest() {
    val rateLimiter = RateLimiter.create(0.5)

    arrayOf(1, 6, 2).forEach {
        println("${System.currentTimeMillis()} acq $it:\twait ${rateLimiter.acquire(it)}s")
    }
}

The above example creates a RateLimiter that issues 0.5 tokens per second (one token every 2 seconds); the output is shown below:

1516166482561 acq 1:	wait 0.0s
1516166482563 acq 6:	wait 1.997664s
1516166484569 acq 2:	wait 11.991958s

From the output it can be seen that RateLimiter has the ability to pre-consume tokens:
When acquiring 1 token, there was no wait; the token was consumed directly.
When acquiring 6 tokens, because 1 token had previously been consumed, it waited 2 seconds and then consumed 6 tokens.
When acquiring 2 tokens, for the same reason, it waited 12 seconds for the 6 tokens previously consumed.
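The waits in this output can be reproduced with a simplified model of the pre-consumption bookkeeping (my own sketch, not Guava source: it ignores stored tokens and only tracks the "next free" time, which is enough to explain the numbers above):

```java
// Simplified model of RateLimiter's pre-consumption accounting (not Guava source).
// Each acquire pays the wait for the tokens the *previous* acquire borrowed.
class PreConsumeModel {
    private final long stableIntervalMs; // time to generate one token
    private long nextFreeMs = 0;         // earliest time the next request is served

    PreConsumeModel(long stableIntervalMs) {
        this.stableIntervalMs = stableIntervalMs;
    }

    /** Returns how long a request arriving at nowMs must wait for `permits` tokens. */
    long acquire(int permits, long nowMs) {
        long wait = Math.max(0, nextFreeMs - nowMs);     // pay the previous request's debt
        long start = Math.max(nextFreeMs, nowMs);
        nextFreeMs = start + permits * stableIntervalMs; // push this request's cost onto the next
        return wait;
    }
}
```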

In other words, RateLimiter supports a certain degree of burst requests (pre-consumption) by extending the wait time of subsequent requests.
In some scenarios, however, this burst-handling capability is unwanted. For example, an IM vendor may provide a message push interface with a strict frequency limit (600 calls per 30 seconds); when calling that interface, pre-consumption must not happen, or the push frequency may exceed the limit and calls will fail. Handling this situation will be covered in another post.

Source Code Interpretation

Guava has two limiting modes: a stable mode (SmoothBursty: tokens are generated at a constant rate) and a progressive mode (SmoothWarmingUp: token generation slowly speeds up until it reaches a stable rate).
The two modes are implemented with similar ideas; the main difference lies in how the wait time is calculated. This article focuses on SmoothBursty.

When the create method is invoked, a SmoothBursty instance is actually constructed:

public static RateLimiter create(double permitsPerSecond) {
    return create(permitsPerSecond, SleepingStopwatch.createFromSystemTimer());
}

static RateLimiter create(double permitsPerSecond, SleepingStopwatch stopwatch) {
    RateLimiter rateLimiter = new SmoothBursty(stopwatch, 1.0 /* maxBurstSeconds */);
    rateLimiter.setRate(permitsPerSecond);
    return rateLimiter;
}

Before analyzing how SmoothBursty works, it helps to explain the meaning of several of its fields:

/**
 * The currently stored permits.
 * (current number of stored tokens)
 */
double storedPermits;

/**
 * The maximum number of stored permits.
 * (maximum number of stored tokens)
 */
double maxPermits;

/**
 * The interval between two unit requests, at our stable rate. E.g., a stable rate of 5 permits
 * per second has a stable interval of 200ms.
 * (interval at which tokens are added)
 */
double stableIntervalMicros;

/**
 * The time when the next request (no matter its size) will be granted. After granting a request,
 * this is pushed further in the future. Larger requests push this further than smaller requests.
 * (the earliest time at which the next request can obtain tokens; since RateLimiter allows
 * pre-consumption, a request that pre-consumes tokens forces the next request to wait until
 * nextFreeTicketMicros before it can obtain its tokens)
 */
private long nextFreeTicketMicros = 0L; // could be either in the past or future

Next, we'll introduce a few key functions

/**
 * Updates {@code storedPermits} and {@code nextFreeTicketMicros} based on the current time.
 */
void resync(long nowMicros) {
    // if nextFreeTicket is in the past, resync to now
    if (nowMicros > nextFreeTicketMicros) {
        double newPermits = (nowMicros - nextFreeTicketMicros) / coolDownIntervalMicros();
        storedPermits = min(maxPermits, storedPermits + newPermits);
        nextFreeTicketMicros = nowMicros;
    }
}

According to the token bucket algorithm, tokens are continuously generated into the bucket, and a request must take a token from the bucket before it can execute. So who keeps generating the tokens and storing them?

One approach is to start a scheduled task that continuously generates tokens. The problem is that this consumes enormous system resources. For example, if an interface must limit the access frequency of each user and the system has 60,000 users, then up to 60,000 scheduled tasks would be needed to maintain the token count in each bucket; the overhead is huge.

The other approach is deferred computation, as in the resync function above. This function is called before each token fetch. The idea: if the current time is later than nextFreeTicketMicros, compute how many tokens could have been generated in the interval, add them to the bucket, and update the bookkeeping. This way, the computation happens only once, at the moment a token is requested.
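The deferred-computation idea can be isolated into a tiny self-contained sketch (my own names, not Guava's): no timer thread exists, yet the bucket behaves as if tokens were being added continuously.

```java
// Deferred token generation (the idea behind resync; illustrative, not Guava source).
// Instead of a timer thread filling the bucket, each call computes how many
// tokens *would* have been generated since the last update.
class LazyBucket {
    double storedPermits = 0;
    final double maxPermits;
    final double intervalMicros; // micros needed to generate one token
    long nextFreeTicketMicros = 0;

    LazyBucket(double maxPermits, double intervalMicros) {
        this.maxPermits = maxPermits;
        this.intervalMicros = intervalMicros;
    }

    void resync(long nowMicros) {
        if (nowMicros > nextFreeTicketMicros) {
            // Tokens generated during the elapsed interval, capped at maxPermits.
            double newPermits = (nowMicros - nextFreeTicketMicros) / intervalMicros;
            storedPermits = Math.min(maxPermits, storedPermits + newPermits);
            nextFreeTicketMicros = nowMicros;
        }
    }
}
```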

final long reserveEarliestAvailable(int requiredPermits, long nowMicros) {
  resync(nowMicros);
  long returnValue = nextFreeTicketMicros; // returns the previously computed nextFreeTicketMicros
  double storedPermitsToSpend = min(requiredPermits, this.storedPermits); // tokens in the bucket that can be consumed
  double freshPermits = requiredPermits - storedPermitsToSpend; // tokens still needed (to be generated)
  long waitMicros =
      storedPermitsToWaitTime(this.storedPermits, storedPermitsToSpend)
          + (long) (freshPermits * stableIntervalMicros); // wait time computed from freshPermits

  this.nextFreeTicketMicros = LongMath.saturatedAdd(nextFreeTicketMicros, waitMicros); // the updated value is not returned
  this.storedPermits -= storedPermitsToSpend;
  return returnValue;
}

This function obtains requiredPermits tokens and returns the point in time until which the caller must wait.
Here storedPermitsToSpend is the number of tokens in the bucket that can be consumed, and freshPermits is the number of tokens still needed (to be generated). The wait time is computed from these values and added to nextFreeTicketMicros.

Note that the function returns the nextFreeTicketMicros computed by the previous request, not the freshly updated value. In plain terms, the current request pays the bill for the previous request's pre-consumption; this is the principle behind RateLimiter's pre-consumption (burst handling). To prohibit pre-consumption, modify the function to return the updated nextFreeTicketMicros instead.
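The "no pre-consumption" variant mentioned above can be sketched as follows (my own simplified model, not Guava source; it tracks only the next-free time and makes each request wait for its *own* tokens rather than billing the next request):

```java
// Sketch of a strict, non-pre-consuming limiter (illustrative, not Guava source).
// The wait is derived from the *updated* next-free time, so each request pays
// for its own tokens instead of handing the bill to the next request.
class StrictModel {
    private final long stableIntervalMs; // time to generate one token
    private long nextFreeMs = 0;

    StrictModel(long stableIntervalMs) {
        this.stableIntervalMs = stableIntervalMs;
    }

    long acquire(int permits, long nowMs) {
        long start = Math.max(nextFreeMs, nowMs);
        nextFreeMs = start + permits * stableIntervalMs;
        return Math.max(0, nextFreeMs - nowMs); // wait until *this* request's tokens exist
    }
}
```

Because there is no stored-token credit in this model, even the first request waits for its tokens; a faithful modification of Guava would still spend storedPermits first.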

Looking back at the SmoothBursty constructor:

SmoothBursty(SleepingStopwatch stopwatch, double maxBurstSeconds) {
  super(stopwatch);
  this.maxBurstSeconds = maxBurstSeconds; // store at most the tokens generated in maxBurstSeconds seconds
}

@Override
void doSetRate(double permitsPerSecond, double stableIntervalMicros) {
  double oldMaxPermits = this.maxPermits;
  maxPermits = maxBurstSeconds * permitsPerSecond; // compute the maximum number of stored tokens
  if (oldMaxPermits == Double.POSITIVE_INFINITY) {
    // if we don't special-case this, we would get storedPermits == NaN, below
    storedPermits = maxPermits;
  } else {
    storedPermits =
        (oldMaxPermits == 0.0)
            ? 0.0 // initial state
            : storedPermits * maxPermits / oldMaxPermits;
  }
}

The maximum number of tokens the bucket can store is computed from maxBurstSeconds; that is, the bucket holds at most the tokens generated in maxBurstSeconds seconds. The purpose of this parameter is to control the flow more flexibly: for example, one interface may be limited to 300 calls per 20 seconds, another to 50 calls per 45 seconds, and so on.
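The relationship is just the formula from doSetRate above, shown here as a tiny helper (my own name, for illustration): a "300 calls per 20 seconds" limit corresponds to permitsPerSecond = 15 with maxBurstSeconds = 20, giving maxPermits = 300.

```java
// maxPermits as computed in doSetRate (formula from the source above).
class MaxPermitsCalc {
    static double maxPermits(double maxBurstSeconds, double permitsPerSecond) {
        return maxBurstSeconds * permitsPerSecond;
    }
}
```

Note that the public RateLimiter.create(double) factory fixes maxBurstSeconds at 1.0, as shown in the create method earlier; other burst windows are not exposed through the public API.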

With the above concepts understood, RateLimiter's public interface is easy to follow:

@CanIgnoreReturnValue
public double acquire() {
  return acquire(1);
}

@CanIgnoreReturnValue
public double acquire(int permits) {
  long microsToWait = reserve(permits);
  stopwatch.sleepMicrosUninterruptibly(microsToWait);
  return 1.0 * microsToWait / SECONDS.toMicros(1L);
}

final long reserve(int permits) {
  checkPermits(permits);
  synchronized (mutex()) {
    return reserveAndGetWaitLength(permits, stopwatch.readMicros());
  }
}

The acquire function obtains permits tokens, calculates how long it must wait, sleeps for that duration, and returns the wait in seconds.

public boolean tryAcquire(int permits) {
  return tryAcquire(permits, 0, MICROSECONDS);
}

public boolean tryAcquire() {
  return tryAcquire(1, 0, MICROSECONDS);
}

public boolean tryAcquire(int permits, long timeout, TimeUnit unit) {
  long timeoutMicros = max(unit.toMicros(timeout), 0);
  checkPermits(permits);
  long microsToWait;
  synchronized (mutex()) {
    long nowMicros = stopwatch.readMicros();
    if (!canAcquire(nowMicros, timeoutMicros)) {
      return false;
    } else {
      microsToWait = reserveAndGetWaitLength(permits, nowMicros);
    }
  }
  stopwatch.sleepMicrosUninterruptibly(microsToWait);
  return true;
}

private boolean canAcquire(long nowMicros, long timeoutMicros) {
  return queryEarliestAvailable(nowMicros) - timeoutMicros <= nowMicros;
}

@Override
final long queryEarliestAvailable(long nowMicros) {
  return nextFreeTicketMicros;
}

The tryAcquire function attempts to obtain tokens within the timeout: if they can be obtained, it sleeps for the required wait and returns true; otherwise it returns false immediately.
canAcquire determines whether the tokens can be acquired within the timeout.
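The canAcquire test can be read in isolation (extracted into a static helper here purely for illustration): a request can proceed within `timeout` iff the earliest available time is at most now + timeout.

```java
// The canAcquire condition in isolation (illustrative extraction).
// nextFreeTicketMicros is what queryEarliestAvailable returns in the source above.
class CanAcquireCheck {
    static boolean canAcquire(long nextFreeTicketMicros, long nowMicros, long timeoutMicros) {
        // Equivalent to: nextFreeTicketMicros <= nowMicros + timeoutMicros
        return nextFreeTicketMicros - timeoutMicros <= nowMicros;
    }
}
```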

This concludes the introduction to Guava RateLimiter's principles and usage. Readers interested in SmoothWarmingUp can consult the documentation or the source code.
