Anatomy of a new feature in Go 1.3: sync.Pool


Go 1.3 adds a new feature to the sync package: Pool. The official documentation is at http://golang.org/pkg/sync/#Pool

The purpose of this type is to save and reuse temporary objects, reducing memory allocations and GC pressure.

    type Pool struct {
        New func() interface{}
    }

    func (p *Pool) Get() interface{}
    func (p *Pool) Put(x interface{})

Get returns an arbitrary object from the pool. If the pool is empty, it calls New to return a newly created object; if New is not set, nil is returned.
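To make the interface concrete, here is a minimal usage sketch (bufPool and the choice of bytes.Buffer are illustrative, not part of the package):

    package main

    import (
        "bytes"
        "fmt"
        "sync"
    )

    // bufPool hands out *bytes.Buffer values; New is called when the pool is empty.
    var bufPool = sync.Pool{
        New: func() interface{} {
            return new(bytes.Buffer)
        },
    }

    func main() {
        buf := bufPool.Get().(*bytes.Buffer) // reused from the pool, or freshly created by New
        buf.WriteString("hello")
        fmt.Println(buf.String())

        buf.Reset()      // callers should reset any state before returning the object
        bufPool.Put(buf) // put it back so later Gets can reuse it
    }

Note that the caller resets the buffer before putting it back; the pool itself does not clear returned objects.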

Another important characteristic is that objects placed in the pool are reclaimed at some unspecified point, without notification. So even if you put 100 objects in ahead of time, the pool may well be empty the next time you call Get. The benefit of this behavior is that you never have to worry about the pool growing without bound, because Go reclaims it for you. I had used a channel to implement a pool with a similar interface; after seeing this official version I abandoned it without hesitation.
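That hand-rolled channel version is not shown in the article; a rough sketch of what such a channel-based pool might look like is below (chanPool and its methods are illustrative only, not the original code):

    // chanPool is a hypothetical channel-based pool with a Pool-like interface.
    // Unlike sync.Pool, its contents are never reclaimed by the GC, so the
    // capacity has to be chosen up front.
    type chanPool struct {
        ch  chan interface{}
        New func() interface{}
    }

    func newChanPool(size int, newFn func() interface{}) *chanPool {
        return &chanPool{ch: make(chan interface{}, size), New: newFn}
    }

    func (p *chanPool) Get() interface{} {
        select {
        case x := <-p.ch:
            return x
        default:
            if p.New != nil {
                return p.New()
            }
            return nil
        }
    }

    func (p *chanPool) Put(x interface{}) {
        select {
        case p.ch <- x:
        default:
            // pool is full: drop x and let the garbage collector reclaim it
        }
    }

The obvious drawback of this approach is that a full but idle pool keeps holding its objects forever, which is exactly what sync.Pool avoids.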

Let's talk about the pool implementation:

1. Scheduled cleanup

The documentation says that objects stored in the pool may be removed automatically at any time, without notification. In fact, this cleanup happens right before each garbage collection, and the runtime forces a collection at a fixed two-minute interval if none has happened otherwise. And each cleanup removes all of the objects in the pool! (Before reading the source I had assumed it would only evict some of them based on usage frequency...) So if the pool holds a large number of objects, the cleanup will add to the garbage collection time.

    var (
        allPoolsMu Mutex
        allPools   []*Pool
    )

    func poolCleanup() {
        // This function is called with the world stopped, at the beginning of a garbage collection.
        // It must not allocate and probably should not call any runtime functions.
        // Defensively zero out everything, 2 reasons:
        // 1. To prevent false retention of whole Pools.
        // 2. If GC happens while a goroutine works with l.shared in Put/Get,
        //    it will retain whole Pool. So next cycle memory consumption would be doubled.
        for i, p := range allPools {
            allPools[i] = nil
            for i := 0; i < int(p.localSize); i++ {
                l := indexLocal(p.local, i)
                l.private = nil
                for j := range l.shared {
                    l.shared[j] = nil
                }
                l.shared = nil
            }
        }
        allPools = []*Pool{}
    }

    func init() {
        runtime_registerPoolCleanup(poolCleanup)
    }

A global variable allPools keeps track of every Pool that has been created, and the init function registers the poolCleanup callback with the runtime, which calls it before each garbage collection.
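A small sketch that demonstrates this behavior under the Go 1.3 implementation described here (forcing a collection with runtime.GC drives poolCleanup, so the pool comes back empty):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        var p sync.Pool
        for i := 0; i < 100; i++ {
            p.Put(i) // pre-populate the pool with 100 objects
        }
        fmt.Println(p.Get()) // one of the stored values

        runtime.GC() // poolCleanup runs at the start of the collection

        // In Go 1.3 every stored object is dropped, and New is not set, so this prints <nil>.
        fmt.Println(p.Get())
    }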

2. How to manage data

Let's take a look at the two data structures involved:

    type Pool struct {
        local     unsafe.Pointer // local fixed-size per-P pool, actual type is [P]poolLocal
        localSize uintptr        // size of the local array

        // New optionally specifies a function to generate
        // a value when Get would otherwise return nil.
        // It may not be changed concurrently with calls to Get.
        New func() interface{}
    }

    // Local per-P Pool appendix.
    type poolLocal struct {
        private interface{}   // Can be used only by the respective P.
        shared  []interface{} // Can be used by any P.
        Mutex                 // Protects shared.
        pad     [128]byte     // Prevents false sharing.
    }
  
Pool is the object exposed for external use. The actual type behind the local field is an array of poolLocal, and localSize is the length of that array. poolLocal is where the data is really stored: private holds a single temporary object, and shared is a slice of saved temporary objects.

Why does a Pool need so many poolLocal objects? In fact, Pool assigns one poolLocal to each worker thread (P), so the length of the local array is the number of worker threads (size := runtime.GOMAXPROCS(0)). Under concurrent reads and writes, a goroutine usually only touches the poolLocal belonging to its own thread. Only when its own poolLocal has no data does it take a lock and try to "steal" data from another thread's poolLocal.

    func (p *Pool) Get() interface{} {
        if raceenabled {
            if p.New != nil {
                return p.New()
            }
            return nil
        }
        l := p.pin() // get the poolLocal for the current thread, i.e. p.local[pid]
        x := l.private
        l.private = nil
        runtime_procUnpin()
        if x != nil {
            return x
        }
        l.Lock()
        last := len(l.shared) - 1
        if last >= 0 {
            x = l.shared[last]
            l.shared = l.shared[:last]
        }
        l.Unlock()
        if x != nil {
            return x
        }
        return p.getSlow()
    }
In Pool.Get, the poolLocal corresponding to the current thread is fetched from the local array first. If private holds data, it is taken out and returned directly. If not, shared is locked and, if it contains anything, its last element is taken and returned.

Why does it need a lock here? The answer is in getSlow: when there is no data in its own shared slice, it tries to steal data from the shared slice of another poolLocal.

    func (p *Pool) getSlow() (x interface{}) {
        // See the comment in pin regarding ordering of the loads.
        size := atomic.LoadUintptr(&p.localSize) // load-acquire
        local := p.local                         // load-consume
        // Try to steal one element from other procs.
        pid := runtime_procPin()
        runtime_procUnpin()
        for i := 0; i < int(size); i++ {
            l := indexLocal(local, (pid+i+1)%int(size))
            l.Lock()
            last := len(l.shared) - 1
            if last >= 0 {
                x = l.shared[last]
                l.shared = l.shared[:last]
                l.Unlock()
                break
            }
            l.Unlock()
        }
        if x == nil && p.New != nil {
            x = p.New()
        }
        return x
    }
Although Go lets you create a huge number of goroutines, the number that can actually run in parallel is limited and is set by runtime.GOMAXPROCS(0). So the efficiency of this Pool design comes from scattering the data across the truly concurrent threads: each thread takes data from its own poolLocal, which greatly reduces lock contention.
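A rough benchmark sketch for observing this in practice (the names and the use of bytes.Buffer are illustrative); run it with go test -bench . and compare throughput and allocations with and without the pool:

    package pool_test

    import (
        "bytes"
        "sync"
        "testing"
    )

    var bufPool = sync.Pool{New: func() interface{} { return new(bytes.Buffer) }}

    // BenchmarkWithPool mostly hits the fast path: each P reuses buffers from
    // its own poolLocal, so parallel goroutines rarely contend on a lock.
    func BenchmarkWithPool(b *testing.B) {
        b.RunParallel(func(pb *testing.PB) {
            for pb.Next() {
                buf := bufPool.Get().(*bytes.Buffer)
                buf.WriteString("hello")
                buf.Reset()
                bufPool.Put(buf)
            }
        })
    }

    // BenchmarkWithoutPool allocates a fresh buffer on every iteration for comparison.
    func BenchmarkWithoutPool(b *testing.B) {
        b.RunParallel(func(pb *testing.PB) {
            for pb.Next() {
                buf := new(bytes.Buffer)
                buf.WriteString("hello")
            }
        })
    }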

