In general, memory pools are pre-allocated and come in fixed-size and variable-size block designs: fixed-size blocks are more efficient, variable-size blocks are more flexible. They also come in single-threaded and multi-threaded versions; the single-threaded version does not need to deal with concurrency.
The usual implementation idea for a memory pool: allocate a large chunk of memory up front and split it into equal-sized (fixed-size) blocks. The first block stores the necessary bookkeeping, such as nFirst (the first block available for allocation), nSize (how much has been allocated), nFree (how many blocks can still be allocated), pNext (if the pool runs out, another chunk is grown and pNext points to it), and p (the address of the first allocatable block); a pool manager then manages these chunks centrally. The first two bytes of each free block record the address of the next allocatable block; because the blocks are fixed-size, each block's address can be computed from p and its position. Keeping the link in the first two bytes means that memory is fully reusable once the block is handed out; note that when a block is returned to the pool, those two bytes must be rewritten to point at the next free block again.
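As a concrete illustration of that scheme, here is a minimal Go sketch of a fixed-size pool that keeps its free list in the first two bytes of each free block. All names here (fixedPool, newFixedPool, alloc, free) are hypothetical and not from the original; the sketch omits the pNext growth chaining and the pool manager, and it keeps indices in uint16 to match the two-byte header.

import (
    "encoding/binary"
    "errors"
    "unsafe"
)

// fixedPool: one large allocation split into equal blocks, where the first
// two bytes of every free block hold the index of the next free block.
// blockSize must be at least 2 and the block count must fit in uint16.
type fixedPool struct {
    buf       []byte // the pre-allocated memory (the "p" above)
    blockSize int
    nFirst    uint16 // index of the first allocatable block
    nFree     uint16 // how many blocks are still free
}

func newFixedPool(blockSize, blocks int) *fixedPool {
    p := &fixedPool{
        buf:       make([]byte, blockSize*blocks),
        blockSize: blockSize,
        nFirst:    0,
        nFree:     uint16(blocks),
    }
    // chain block i to block i+1 through its two-byte header; the last
    // link is out of range but is never followed because nFree guards it
    for i := 0; i < blocks; i++ {
        binary.LittleEndian.PutUint16(p.block(uint16(i)), uint16(i+1))
    }
    return p
}

// block computes a block's slice from its index: with a fixed size this is
// just an offset from the start of buf, as the text notes.
func (p *fixedPool) block(i uint16) []byte {
    off := int(i) * p.blockSize
    return p.buf[off : off+p.blockSize]
}

// alloc pops the first free block off the embedded free list; the caller may
// overwrite the two-byte header, which is only meaningful while the block is free.
func (p *fixedPool) alloc() ([]byte, error) {
    if p.nFree == 0 {
        return nil, errors.New("pool exhausted") // a real pool would grow via pNext here
    }
    b := p.block(p.nFirst)
    p.nFirst = binary.LittleEndian.Uint16(b)
    p.nFree--
    return b, nil
}

// free pushes a block back and rewrites its first two bytes to point at the
// previous head of the free list.
func (p *fixedPool) free(b []byte) {
    off := uintptr(unsafe.Pointer(&b[0])) - uintptr(unsafe.Pointer(&p.buf[0]))
    idx := uint16(int(off) / p.blockSize)
    binary.LittleEndian.PutUint16(b, p.nFirst)
    p.nFirst = idx
    p.nFree++
}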
That is the memory-pool idea. The actual topic today, however, is not a memory pool but another memory-management approach: allocating by block size.
type bys_i struct {
    E *list.Element
    T int64
}

type ByteSlice struct {
    P     *BytePool
    Size_ int
    Ls_   *list.List
    Ls_m_ map[interface{}]*bys_i
    ls_l  sync.RWMutex
    Zero_ *list.Element
}

type BytePool struct {
    Max  int64 // max
    T    int64 // timeout when GC
    Beg  int
    End  int
    Ms_  map[int]*ByteSlice
    ms_l sync.RWMutex
}
Blocks are maintained per size, e.g. map[8]*ByteSlice and map[1024]*ByteSlice, and a list is used for garbage collection: each element records its last-access time T, and during GC any element whose age, T(now) - T(element), exceeds the pool timeout T is deleted from the map and removed from the list. The principle is similar to session management: a map for storage and a list for garbage collection, because map lookups are fast while a list makes insertion and reordering flexible.
Go code:
func NewByteSlice(p *BytePool, size int) *ByteSlice {
    ls_ := list.New()
    zero_ := ls_.PushBack([]byte{}) // sentinel: everything before it is a free block
    return &ByteSlice{
        P:     p,
        Size_: size,
        Ls_:   ls_,
        Zero_: zero_,
        Ls_m_: map[interface{}]*bys_i{},
    }
}

func (b *ByteSlice) Alloc() []byte {
    b.ls_l.Lock()
    defer b.ls_l.Unlock()
    var bys []byte
    tv := b.Ls_.Front()
    if tv == b.Zero_ {
        // no free block available: allocate a new one and register it
        bys = make([]byte, b.Size_)
        //Add
        count++ // count: debug counter declared elsewhere
        fmt.Printf("tv==b.zero_:%v,count:%d\n", &bys[0], count)
        //end
        tv = b.Ls_.PushBack(bys)
        b.Ls_m_[&bys[0]] = &bys_i{
            E: tv,
            T: util.Now(),
        }
    } else {
        // reuse the free block at the front; in-use blocks live at the back
        b.Ls_.MoveToBack(tv)
        bys = tv.Value.([]byte)
        b.Ls_m_[&bys[0]].T = util.Now()
        fmt.Printf("MoveToBack:%v,%d,count:%d\n", &bys[0], util.Now(), count)
    }
    return bys
}

func (b *ByteSlice) Free(bys []byte) {
    b.ls_l.Lock()
    defer b.ls_l.Unlock()
    if tv, ok := b.Ls_m_[&bys[0]]; ok {
        tv.T = util.Now()
        b.Ls_.MoveToFront(tv.E)
    }
}

func (b *ByteSlice) Size() int64 {
    // lock intentionally omitted: GC calls Size while already holding ls_l
    //b.ls_l.Lock()
    //defer b.ls_l.Unlock()
    return int64(b.Ls_.Len()-1) * int64(b.Size_) // -1 excludes the zero_ sentinel
}

func (b *ByteSlice) GC() (int, int64) {
    b.ls_l.Lock()
    defer b.ls_l.Unlock()
    tn := util.Now()
    rc := 0
    for {
        tv := b.Ls_.Front()
        if tv == b.Zero_ {
            fmt.Printf("GC tv==b.zero_\n")
            break
        }
        bys := tv.Value.([]byte)
        rv := &bys[0]
        if (tn - b.Ls_m_[rv].T) > b.P.T {
            fmt.Printf("gc:%v\n", rv)
            b.Ls_.Remove(tv)
            delete(b.Ls_m_, rv)
            rc++
        } else {
            // front block not yet expired: stop scanning (avoids an endless loop)
            break
        }
    }
    return rc, b.Size()
}
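The listing above only shows the ByteSlice side. Below is a minimal sketch of how the BytePool wrapper might dispatch by size and sweep its buckets; the original article does not show these methods, so their bodies are my assumption. Only the type and field names come from the structs above, and the power-of-two rounding in Alloc and the cap(bys) lookup in Free are assumed policies.

// NewBytePool: t is the GC timeout (same unit as util.Now()), Beg/End bound
// the smallest and largest bucket sizes.
func NewBytePool(t int64, beg, end int) *BytePool {
    return &BytePool{
        T:   t,
        Beg: beg,
        End: end,
        Ms_: map[int]*ByteSlice{},
    }
}

// Alloc rounds the requested size up to a bucket and delegates to that
// bucket's ByteSlice, creating the bucket lazily.
func (p *BytePool) Alloc(size int) []byte {
    if size > p.End {
        return make([]byte, size) // too large for any bucket: bypass the pool
    }
    bucket := p.Beg
    for bucket < size {
        bucket *= 2 // assumed policy: power-of-two buckets between Beg and End
    }
    p.ms_l.Lock()
    bs, ok := p.Ms_[bucket]
    if !ok {
        bs = NewByteSlice(p, bucket)
        p.Ms_[bucket] = bs
    }
    p.ms_l.Unlock()
    return bs.Alloc()
}

// Free finds the bucket by the slice's capacity (equal to the bucket size
// for slices handed out by Alloc) and returns the block to it.
func (p *BytePool) Free(bys []byte) {
    p.ms_l.RLock()
    bs, ok := p.Ms_[cap(bys)]
    p.ms_l.RUnlock()
    if ok {
        bs.Free(bys)
    }
}

// GC sweeps every bucket and reports how many blocks were dropped and how
// many bytes the pool still holds.
func (p *BytePool) GC() (int, int64) {
    p.ms_l.RLock()
    defer p.ms_l.RUnlock()
    removed, held := 0, int64(0)
    for _, bs := range p.Ms_ {
        rc, sz := bs.GC()
        removed += rc
        held += sz
    }
    return removed, held
}

With this wiring, a request like Alloc(300) would be served from the 512-byte bucket under the assumed rounding, and a caller could invoke GC() periodically, for example from a time.Tick loop, to drop blocks that have been idle longer than T.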