https://segmentfault.com/a/1190000012947169
Business background
The system needs to integrate with an IM vendor's REST interface to push messages to clients (as well as other IM services).
The vendor limits the call frequency of the REST interface: 9,000 total REST calls per 30s, and 600 message pushes per 30s.
The system runs as a distributed cluster, so the total invocation frequency across the whole cluster must be controlled to stay within these limits.
Guava RateLimiter
The article "Guava Ratelimiter Source Analysis" introduced the guava Ratelimiter usage and principle, but why not directly use guava ratelimiter. There are two reasons: Guava Ratelimiter can only be applied to single process, multi-process cooperative control will be powerless guava Ratelimiter can handle burst requests (pre-consumption), where the rest interface call frequency limit is fixed, do not need to use the pre-consumption ability, Failure to do so will cause the interface invocation to fail Redis
Why is Redis a reasonable choice?
- Redis is efficient and easy to scale.
- Redis is language-agnostic, so it integrates well with systems developed in different languages (heterogeneous systems).
- Redis's single-process, single-threaded model makes consistency easier to guarantee, which simplifies cooperative control across multiple processes.
Implementing RateLimiter on Redis
The implementation here fully follows the ideas of Guava RateLimiter. The difference is that Guava stores the token bucket data in an object (in memory), while here the token bucket data is stored in Redis. Source code: https://github.com/manerfan/m ...
First, define the token bucket data model.
import java.util.concurrent.TimeUnit
import kotlin.math.max
import kotlin.math.min

class RedisPermits(
    val maxPermits: Long,              // maximum number of tokens the bucket can store
    var storedPermits: Long,           // number of tokens currently stored
    val intervalMillis: Long,          // interval (ms) at which new tokens are generated
    var nextFreeTicketMillis: Long = System.currentTimeMillis() // next time a request can be granted
) {
    constructor(
        permitsPerSecond: Double,
        maxBurstSeconds: Int,
        nextFreeTicketMillis: Long = System.currentTimeMillis()
    ) : this(
        (permitsPerSecond * maxBurstSeconds).toLong(),
        permitsPerSecond.toLong(),
        (TimeUnit.SECONDS.toMillis(1) / permitsPerSecond).toLong(),
        nextFreeTicketMillis
    )

    /** Expiration (seconds) of the Redis entry holding this bucket */
    fun expires(): Long {
        val now = System.currentTimeMillis()
        return 2 * TimeUnit.MINUTES.toSeconds(1) +
                TimeUnit.MILLISECONDS.toSeconds(max(nextFreeTicketMillis, now) - now)
    }

    /** Lazily refill the bucket based on the time elapsed since nextFreeTicketMillis */
    fun reSync(now: Long): Boolean {
        if (now > nextFreeTicketMillis) {
            storedPermits = min(maxPermits, storedPermits + (now - nextFreeTicketMillis) / intervalMillis)
            nextFreeTicketMillis = now
            return true
        }
        return false
    }
}
Each field has the same meaning as in Guava (see "Guava RateLimiter Source Code Analysis"); by default the bucket stores at most the tokens generated within maxBurstSeconds seconds.
The reSync function lazily refills the token bucket; it is called before every token acquisition and will not be described further here.
The expires function calculates the expiration time of the data in Redis. Take the same example: an interface limits the access frequency per user. If the system has 60K users, up to 60K entries may be created in Redis, and for a long-running system this number only grows, which puts pressure on Redis (even though the numbers in this example are relatively small). To reduce that pressure, the token bucket data needs an expiration time, so that buckets for rarely used business scenarios are cleaned up promptly.
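To make the numbers concrete, here is a small illustrative snippet (not part of the original project) that feeds the REST limit from the background section, 9,000 calls per 30s, into the RedisPermits class defined above; the 300.0 permits/second rate and the 30-second burst window are assumptions derived from that limit:

fun main() {
    // 9,000 calls / 30s ≈ 300 permits per second; cap the bucket at 30 seconds' worth of tokens
    val permits = RedisPermits(permitsPerSecond = 300.0, maxBurstSeconds = 30)

    println(permits.maxPermits)     // 9000 -> the bucket never stores more than the full 30s quota
    println(permits.intervalMillis) // 3    -> roughly one new token every 3 ms
    println(permits.expires())      // 120  -> seconds before an idle bucket's Redis entry expires
}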
For convenience, create a Redis template dedicated to reading and writing RedisPermits.
@Configuration
class RateLimiterConfiguration {
    @Bean
    fun permitsTemplate(redisConnectionFactory: RedisConnectionFactory): PermitsTemplate {
        val template = PermitsTemplate()
        template.connectionFactory = redisConnectionFactory
        return template
    }
}

class PermitsTemplate : RedisTemplate<String, RedisPermits>() {
    private val objectMapper = jacksonObjectMapper()

    init {
        keySerializer = StringRedisSerializer()
        valueSerializer = object : RedisSerializer<RedisPermits> {
            // store RedisPermits as JSON
            override fun serialize(t: RedisPermits) = objectMapper.writeValueAsBytes(t)
            override fun deserialize(bytes: ByteArray?) =
                bytes?.let { objectMapper.readValue(it, RedisPermits::class.java) }
        }
    }
}
Here are a few key functions; for the complete code see https://github.com/manerfan/m ...
/**
 * Generates and stores the default token bucket
 */
private fun putDefaultPermits(): RedisPermits {
    val permits = RedisPermits(permitsPerSecond, maxBurstSeconds)
    permitsTemplate.opsForValue().set(key, permits, permits.expires(), TimeUnit.SECONDS)
    return permits
}
/**
 * Get/update the token bucket
 */
private var permits: RedisPermits
    get() = permitsTemplate.opsForValue()[key] ?: putDefaultPermits()
    set(permits) = permitsTemplate.opsForValue().set(key, permits, permits.expires(), TimeUnit.SECONDS)
/**
 * Get the Redis server time
 */
private val now get() = permitsTemplate.execute { it.time() } ?: System.currentTimeMillis()
putDefaultPermits generates the default token bucket and stores it in Redis.
The permits getter/setter reads and updates the token bucket in Redis.
now returns the Redis server time, which keeps timing consistent across the nodes of the distributed cluster.
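For orientation, the member functions in this article are assumed to live in a rate-limiter class shaped roughly like the sketch below. The field names mirror those used in the snippets (key, permitsPerSecond, maxBurstSeconds, permitsTemplate, syncLock), but the class name, its constructor signature and the Lock type are assumptions, not the author's actual code:

import java.util.concurrent.locks.Lock

class RedisRateLimiter(
    private val key: String,               // Redis key identifying this token bucket
    private val permitsPerSecond: Double,  // token generation rate
    private val maxBurstSeconds: Int,      // bucket stores at most maxBurstSeconds * permitsPerSecond tokens
    private val permitsTemplate: PermitsTemplate,
    private val syncLock: Lock             // distributed lock; only lock()/unlock() are used below
) {
    // putDefaultPermits, permits, now, reserveAndGetWaitLength, reserve,
    // queryEarliestAvailable, canAcquire, acquire and tryAcquire shown in this
    // article are members here, plus a logger and a checkTokens argument check
}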
private fun reserveAndGetWaitLength(tokens: Long): Long {
    val n = now
    val permit = permits
    permit.reSync(n)

    val storedPermitsToSpend = min(tokens, permit.storedPermits) // number of tokens that can be consumed
    val freshPermits = tokens - storedPermitsToSpend             // number of tokens still to be generated
    val waitMillis = freshPermits * permit.intervalMillis        // time to wait for them

    permit.nextFreeTicketMillis = LongMath.saturatedAdd(permit.nextFreeTicketMillis, waitMillis)
    permit.storedPermits -= storedPermitsToSpend
    permits = permit

    return permit.nextFreeTicketMillis - n
}
This function acquires tokens tokens and returns the time (in milliseconds) to wait.
Here storedPermitsToSpend is the number of tokens that can be taken from the bucket, and freshPermits is the number of tokens that still need to be generated; the wait time is computed from freshPermits and added to nextFreeTicketMillis.
Note the difference from Guava RateLimiter: Guava returns the previous (last request's) nextFreeTicketMicros, so the current request pays for the previous request's pre-consumption. Here, the returned value is the time this request itself must wait because the bucket does not yet hold enough tokens.
Put plainly, Guava allows overdrafts and makes each request pay for the previous request's pre-consumption; here every request is self-sufficient: whoever consumes pays, and each request is responsible for its own behavior.
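A quick numeric walk-through of that arithmetic under assumed values (3 ms per token, 2 tokens left in the bucket, 5 tokens requested), purely to illustrate the "whoever consumes pays" behavior:

import kotlin.math.min

fun main() {
    val intervalMillis = 3L  // one token roughly every 3 ms (≈ 300 permits/s)
    val storedPermits = 2L   // tokens currently in the bucket
    val tokens = 5L          // tokens this request asks for

    val storedPermitsToSpend = min(tokens, storedPermits) // 2 tokens taken from the bucket
    val freshPermits = tokens - storedPermitsToSpend      // 3 tokens still need to be generated
    val waitMillis = freshPermits * intervalMillis        // 9 ms

    // This request itself waits the 9 ms; Guava would instead return almost immediately
    // and make the *next* request absorb the cost of the pre-consumed tokens.
    println(waitMillis)
}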
private fun reserve(tokens: Long): Long {
    checkTokens(tokens)
    try {
        syncLock.lock()
        return reserveAndGetWaitLength(tokens)
    } finally {
        syncLock.unlock()
    }
}
This function is equivalent to reserveAndGetWaitLength, except that it takes a synchronization lock to avoid concurrency problems (for the distributed lock, see "Distributed Lock Implementation Based on Redis").
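The syncLock itself comes from that referenced article and is not shown here. Purely to illustrate the idea, a minimal Redis-backed lock with the lock()/unlock() shape the code relies on might look like the sketch below; this is a simplified assumption, not the implementation from that article, it does not implement java.util.concurrent.locks.Lock, and it does not handle every failure case:

import org.springframework.data.redis.core.StringRedisTemplate
import java.util.UUID
import java.util.concurrent.TimeUnit

class SimpleRedisLock(
    private val redisTemplate: StringRedisTemplate,
    private val lockKey: String,          // e.g. "ratelimiter:im:rest:lock" (hypothetical key)
    private val expireSeconds: Long = 10  // safety expiry in case the lock holder dies
) {
    private val token = UUID.randomUUID().toString()

    fun lock() {
        // spin until SET NX with expiry succeeds
        while (redisTemplate.opsForValue()
                .setIfAbsent(lockKey, token, expireSeconds, TimeUnit.SECONDS) != true) {
            Thread.sleep(10)
        }
    }

    fun unlock() {
        // release only if we still own the lock; this check-then-delete is not atomic,
        // a production lock would do this step in a Lua script
        if (token == redisTemplate.opsForValue().get(lockKey)) {
            redisTemplate.delete(lockKey)
        }
    }
}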
private fun queryEarliestAvailable(tokens: Long): Long {
    val n = now
    val permit = permits
    permit.reSync(n)

    val storedPermitsToSpend = min(tokens, permit.storedPermits) // number of tokens that can be consumed
    val freshPermits = tokens - storedPermitsToSpend             // number of tokens still to be generated
    val waitMillis = freshPermits * permit.intervalMillis        // time to wait for them

    return LongMath.saturatedAdd(permit.nextFreeTicketMillis - n, waitMillis)
}
This function calculates how long (in milliseconds) one would have to wait to obtain tokens tokens, without writing the bucket back to Redis.
private fun canAcquire(tokens: Long, timeoutMillis: Long): Boolean {
    return queryEarliestAvailable(tokens) - timeoutMillis <= 0
}
This function determines whether tokens tokens can be obtained within timeoutMillis milliseconds.
With these functions in place, the acquire and tryAcquire functions from Guava RateLimiter are easy to implement.
fun acquire(tokens: Long): Long {
    val milliToWait = reserve(tokens)
    logger.info("acquire for {}ms {}", milliToWait, Thread.currentThread().name)
    Thread.sleep(milliToWait)
    return milliToWait
}

fun acquire() = acquire(1)
fun tryAcquire(tokens: Long, timeout: Long, unit: TimeUnit): Boolean {
    val timeoutMillis = max(unit.toMillis(timeout), 0)
    checkTokens(tokens)

    var milliToWait: Long
    try {
        syncLock.lock()
        if (!canAcquire(tokens, timeoutMillis)) {
            return false
        } else {
            milliToWait = reserveAndGetWaitLength(tokens)
        }
    } finally {
        syncLock.unlock()
    }

    Thread.sleep(milliToWait)
    return true
}

fun tryAcquire(timeout: Long, unit: TimeUnit) = tryAcquire(1, timeout, unit)
Revisiting the problem
With this, the distributed rate limiting (RateLimiter) control based on Redis is complete.
Returning to the questions raised at the beginning: for the IM vendor's REST interface, we can create a separate RateLimiter for each frequency limit.
val restRateLimiter = RateLimiterFactory.build("ratelimiter:im:rest", 9000 / 30, 30)
val msgRateLimiter = RateLimiterFactory.build("ratelimiter:im:msg", 600 / 30, 30)
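RateLimiterFactory itself is not shown in the excerpts above. Assuming it simply wires together the pieces from this article (the signature, the lockFor supplier and the RedisRateLimiter class from the earlier sketch are all assumptions, not the author's actual factory), it might look roughly like this:

import java.util.concurrent.locks.Lock

object RateLimiterFactory {
    // assumed to be injected/configured elsewhere, e.g. from the Spring context shown earlier
    lateinit var permitsTemplate: PermitsTemplate
    lateinit var lockFor: (String) -> Lock   // supplies a distributed lock for a given key

    /**
     * key              - Redis key of the bucket, e.g. "ratelimiter:im:rest"
     * permitsPerSecond - token generation rate (9000 / 30 = 300 for the REST limit)
     * maxBurstSeconds  - how many seconds' worth of tokens the bucket may store
     */
    fun build(key: String, permitsPerSecond: Int, maxBurstSeconds: Int): RedisRateLimiter =
        RedisRateLimiter(key, permitsPerSecond.toDouble(), maxBurstSeconds, permitsTemplate, lockFor("$key:lock"))
}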
When pushing messages, invoke them as follows.
restRateLimiter.acquire()
msgRateLimiter.acquire(msgs.size.toLong())
msgUtil.push(msgs)
An interface provider that wants to limit the access frequency of its interface can do the following.
val msgRateLimiter = RateLimiterFactory.build("ratelimiter:im:msg", 600 / 30, 30)

fun receiveMsg(msgs: Array<Message>): Boolean {
    return when (msgRateLimiter.tryAcquire(msgs.size.toLong(), 2, TimeUnit.SECONDS)) {
        true -> {
            thread(true) { msgUtil.receive(msgs) }
            true
        }
        else -> false
    }
}