In the previous post, "Using buffered channels to implement the distribution and waiting of online game account verification messages", I described how buffered channels can be used to distribute packets and wait for responses, and gave a prototype implementation. The buffered channel has a drawback, however: only a fixed number of goroutines can be inside the sendAndReceive function waiting for message distribution at any one time. Any additional goroutines that want to wait for a message must wait for another goroutine to receive its message and release its channel before they can send a packet and wait for a response. This shortcoming limits the throughput of the system under high concurrency.

To solve this problem, this post presents an adaptive channel allocator. Since the channel could just as well be any other kind of resource, it is essentially a resource allocator. Its principles are:

1. A certain number of resources are pre-allocated into a buffered channel (the buffer pool), so that a resource can be obtained quickly when one is requested.
2. If the buffer pool is empty when a resource is requested, a new resource is allocated dynamically.
3. When a resource is released, it is put straight back into the buffer pool. If the buffer pool is full, it goes into a standby buffer pool instead; the standby buffer pool is backed by an array, so putting a resource into it completes immediately rather than blocking indefinitely (a minimal sketch of this non-blocking pattern follows the list).
4. The allocator runs its own goroutine, which moves resources back into the buffer pool whenever it detects a resource in the standby buffer pool.
5. As concurrency grows, the amount of pre-allocated resources in use (that is, the resources in the buffer pool and the standby buffer pool) grows with it, which gives the allocator a degree of adaptability.
6. Because channels are used as the means of communication, requesting and releasing resources can safely be done concurrently.
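Before looking at the full allocator, here is a minimal sketch of the non-blocking pattern behind points 1 to 3. It assumes plain []byte buffers as the resource (the allocator below uses channels instead): a buffered channel serves as the pre-allocated pool, and select with a default case keeps both acquisition and release from blocking.

package main

import "fmt"

func main() {
	// Pre-allocate four buffers into a buffered channel acting as the pool.
	pool := make(chan []byte, 4)
	for i := 0; i < 4; i++ {
		pool <- make([]byte, 1024)
	}

	// Request: take a buffer from the pool if one is ready,
	// otherwise allocate a new one dynamically.
	var buf []byte
	select {
	case buf = <-pool:
		fmt.Println("got a pre-allocated buffer")
	default:
		buf = make([]byte, 1024)
		fmt.Println("pool empty, allocated a new buffer")
	}

	// Release: return the buffer if the pool has room; the allocator
	// below would instead push it into a standby buffer pool here.
	select {
	case pool <- buf:
		fmt.Println("returned the buffer to the pool")
	default:
		fmt.Println("pool full, would go to the standby buffer pool")
	}
}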
The code below implements a resource allocator, Pool. The allocator is created with the NewPool function, and resources are requested and released with the Alloc and Free functions respectively. The specific code looks like this:
The resource here is a chan []byte, so the buffer pool has type chan chan []byte.

type Pool struct {
	chch chan chan []byte // buffer pool: pre-allocated resources ready to hand out
	back chan chan []byte // hand-off channel into the standby buffer pool
	exit chan bool
}

func NewPool(count int) *Pool {
	p := new(Pool)
	p.back = make(chan chan []byte, count)
	p.chch = make(chan chan []byte, count)
	for i := 0; i < count; i++ {
		p.chch <- make(chan []byte, 1) // pre-allocate count resources
	}
	p.exit = make(chan bool)
	go p.run()
	return p
}

// Alloc returns a resource from the buffer pool if one is available,
// otherwise it allocates a new one dynamically.
func (p *Pool) Alloc() chan []byte {
	select {
	case ch := <-p.chch:
		return ch
	default:
	}
	return make(chan []byte, 1)
}

// Free returns a resource to the buffer pool; if the pool is full,
// the resource is handed to the run goroutine via the standby buffer pool.
func (p *Pool) Free(ch chan []byte) {
	select {
	case p.chch <- ch:
		return
	default:
		p.back <- ch
	}
}

func (p *Pool) Close() {
	if p.exit != nil {
		close(p.exit)
		p.exit = nil
	}
}

// run moves resources from the standby buffer pool back into the buffer pool.
// The chs slice is the array backing the standby pool.
func (p *Pool) run() {
	var chs []chan []byte
	var chch chan chan []byte
	var next chan []byte
	for {
		select {
		case <-p.exit:
			return
		case ch := <-p.back:
			if chch == nil {
				chch = p.chch // enable the send case below
				next = ch
			} else {
				chs = append(chs, ch) // stash until the pending send completes
			}
		case chch <- next:
			if len(chs) == 0 {
				chch = nil // nothing left to move; disable this case
				next = nil
			} else {
				next = chs[len(chs)-1]
				chs = chs[:len(chs)-1]
			}
		}
	}
}
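Note how run exploits the fact that sending on a nil channel blocks forever: assigning p.chch or nil to the local chch variable switches the send case on and off, so the goroutine only tries to refill the buffer pool while it actually holds a resource to move.

To show how the allocator fits into the message-distribution scenario from the previous post, here is a hedged usage sketch. The sendAndReceive signature, the send callback and the five-second timeout are assumptions for illustration, not part of the original code; only Pool, Alloc and Free come from the implementation above.

import (
	"errors"
	"time"
)

// sendAndReceive is a usage sketch, not part of the original code. The send
// callback stands in for whatever registers the reply channel and writes the
// packet to the connection; the five-second timeout is likewise an assumption.
func sendAndReceive(p *Pool, send func(reply chan []byte) error) ([]byte, error) {
	ch := p.Alloc()
	defer p.Free(ch) // give the channel back to the allocator when done

	// Hand the reply channel to the dispatch side and send the packet.
	if err := send(ch); err != nil {
		return nil, err
	}

	select {
	case reply := <-ch:
		return reply, nil
	case <-time.After(5 * time.Second):
		return nil, errors.New("timed out waiting for the reply")
	}
}

One caveat with this sketch: if the timeout fires, a late reply can still be written into the channel after it has been returned to the pool, so real code should deregister or drain the channel before freeing it.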