Go Commons Pool Release and a Summary of Golang Multithreaded Programming Issues


Taking advantage of the New Year's Day holiday, I organized some of my recent Golang learning and "translated" a generic object pool for Golang, now open-sourced on GitHub as Go Commons Pool. I call it a "translation" because the core algorithms and logic of the library are based on Apache Commons Pool; it is essentially the original Java "translated" into Golang.

A while ago, while reading the Kubernetes source code, I studied Golang in general, but with languages, if you do not keep using what you learned, you forget most of it within a few weeks. In a Golang practice group chat, someone asked whether Golang has a generic object pool; after searching, there seems to be nothing complete. The current Golang pooling solutions are as follows:

    1. sync.Pool
       sync.Pool is very simple to use: just pass in a func that creates the object.

           var objPool = sync.Pool{New: func() interface{} { return NewObject() }}
           p := objPool.Get().(*Object)

       But sync.Pool only solves object reuse. The lifetime of an object in the pool is bounded by GC: objects left in the pool are reclaimed after a GC, and the caller cannot control their life cycle, so it is not suitable for connection pools and similar scenarios (a short demonstration follows this list).

    2. Implementing a custom pool with container/list; Redigo, for example, takes this approach. But most of these custom pools are not generic, and their features are incomplete. For example, Redigo currently has no timeout mechanism when getting a connection from the pool; see the issue "Blocking with a timeout when get PooledConn".
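To make the sync.Pool limitation concrete, here is a minimal demonstration (Object and NewObject are placeholders for a user-defined type): after garbage collection the pooled object is dropped, so Get falls back to New instead of returning what was Put.

package main

import (
    "fmt"
    "runtime"
    "sync"
)

// Object and NewObject stand in for a user-defined pooled type.
type Object struct{ id int }

func NewObject() *Object { return &Object{} }

var objPool = sync.Pool{New: func() interface{} { return NewObject() }}

func main() {
    first := &Object{id: 42}
    objPool.Put(first)

    // Force collection; pooled objects do not survive GC (calling it twice also
    // covers newer Go versions that keep a one-cycle victim cache).
    runtime.GC()
    runtime.GC()

    p := objPool.Get().(*Object)
    // Very likely prints "false 0": the pool was emptied, so New created a fresh object.
    fmt.Println(p == first, p.id)
}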

Apache Commons Pool, by contrast, is feature-complete, its algorithms and logic have been verified, and it is widely used, so I "translated" it directly and practiced Golang syntax along the way.

A generic object pool needs to provide the following key features:

    1. The life cycle of objects can be precisely controlled: the pool provides a mechanism for user-defined creation/destruction/validation logic for objects.
    2. The number of live objects can be precisely controlled: the pool provides configuration for how many objects may live or sit idle, and for how long.
    3. Getting an object has a timeout mechanism, to avoid deadlocks and make failover easy. I have run into many online failures before that were caused by flawed connection pool settings or implementation mechanisms (an illustrative sketch of such an API follows this list).
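As an illustration only, usage of a pool with these three features might look like the sketch below. The names (PooledObjectFactory, Config, ObjectPool, BorrowWithTimeout) are hypothetical and are not necessarily the actual Go Commons Pool API.

package pool // hypothetical package; names are illustrative, not the real API

import (
    "errors"
    "time"
)

// Feature 1: the user supplies the object's creation/destruction/validation logic.
type PooledObjectFactory interface {
    Create() (interface{}, error)
    Destroy(obj interface{}) error
    Validate(obj interface{}) bool
}

// Feature 2: the pool is configured with how many objects may live, how many may
// sit idle, and for how long.
type Config struct {
    MaxTotal    int
    MaxIdle     int
    MaxIdleTime time.Duration
}

// Feature 3: borrowing has a timeout, so callers never block forever.
type ObjectPool struct {
    factory PooledObjectFactory
    config  Config
    idle    chan interface{} // greatly simplified idle store, just for the sketch
}

var ErrTimeout = errors.New("pool: borrow timed out")

func (p *ObjectPool) BorrowWithTimeout(timeout time.Duration) (interface{}, error) {
    select {
    case obj := <-p.idle:
        return obj, nil
    case <-time.After(timeout):
        return nil, ErrTimeout // caller can fail over instead of hanging
    }
}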

The core of Apache Commons Pool is based on a LinkedBlockingDeque: idle objects are placed in the deque. A deque rather than a queue is used because it supports both LIFO (last in, first out) and FIFO strategies for getting objects. In addition, there is a map holding all objects, whose key is the user-defined object and whose value is a PooledObject; it is used to verify the legality of returned objects, to let the background timer abandon stale objects, to count active objects, and so on. Timeouts are implemented through the wait-timeout mechanism of Java locks. A small sketch of the LIFO/FIFO distinction follows.
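The sketch below shows the LIFO/FIFO difference on an idle-object deque, using container/list as a stand-in for LinkedBlockingDeque (it is not the library's actual code):

package main

import (
    "container/list"
    "fmt"
)

func main() {
    // idle holds returned objects; each return is pushed onto the front.
    idle := list.New()
    idle.PushFront("conn-1") // returned first (oldest idle object)
    idle.PushFront("conn-2") // returned last (most recently used)

    // LIFO: take from the front, so the most recently returned object is reused first.
    lifo := idle.Front().Value
    // FIFO: take from the back, so the oldest idle object is reused first.
    fifo := idle.Back().Value

    fmt.Println(lifo, fifo) // conn-2 conn-1
}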

Here is a summary of the multithreading issues encountered while translating the Java code into Golang.

Recursive lock, also called reentrant lock

The synchronized keyword in Java and the ReentrantLock used in LinkedBlockingDeque are reentrant, while Golang's sync.Mutex is not. To illustrate:

ReentrantLock lock;

public void a() {
    lock.lock();
    //do some thing
    lock.unlock();
}

public void b() {
    lock.lock();
    //do some thing
    lock.unlock();
}

public void all() {
    lock.lock();
    //do some thing
    a();
    //do some thing
    b();
    //do some thing
    lock.unlock();
}

In the all method above, the nested calls to a and b each acquire the lock; because all already holds the lock and the lock is reentrant, this does not deadlock. The same code causes a deadlock in Golang:

var lock sync.Mutex

func a() {
    lock.Lock()
    //do some thing
    lock.Unlock()
}

func b() {
    lock.Lock()
    //do some thing
    lock.Unlock()
}

func all() {
    lock.Lock()
    //do some thing
    a()
    //do some thing
    b()
    //do some thing
    lock.Unlock()
}

It can only be refactored as follows (please ignore the non-standard naming; this is just a demo):

var lock sync.Mutex

func a() {
    lock.Lock()
    a1()
    lock.Unlock()
}

func a1() {
    //do some thing
}

func b() {
    lock.Lock()
    b1()
    lock.Unlock()
}

func b1() {
    //do some thing
}

func all() {
    lock.Lock()
    //do some thing
    a1()
    //do some thing
    b1()
    //do some thing
    lock.Unlock()
}

Golang's core developers think that reentrant locks are not a good design and so do not provide them; see "Recursive (aka reentrant) mutexes are a bad idea". So we need to pay extra attention to nesting and recursive calls when using locks.

Lock wait timeout mechanism

Golang's sync.Cond only has Wait; there is no timed wait like Java's Condition.await(long time, TimeUnit unit). This makes it impossible to implement LinkedBlockingDeque's pollFirst(long timeout, TimeUnit unit) directly. Someone filed an issue, "sync: add WaitTimeout method to Cond", but it was rejected. Therefore, a timed wait on a cond can only be simulated through the channel mechanism. For the full source see go-commons-pool/concurrent/cond.go.

type TimeoutCond struct {
    L      sync.Locker
    signal chan int
}

func NewTimeoutCond(l sync.Locker) *TimeoutCond {
    cond := TimeoutCond{L: l, signal: make(chan int, 0)}
    return &cond
}

/**
return remain wait time, and is interrupt
*/
func (this *TimeoutCond) WaitWithTimeout(timeout time.Duration) (time.Duration, bool) {
    //wait should unlock mutex, if not will cause deadlock
    this.L.Unlock()
    defer this.L.Lock()
    begin := time.Now().Nanosecond()
    select {
    case _, ok := <-this.signal:
        end := time.Now().Nanosecond()
        return time.Duration(end - begin), !ok
    case <-time.After(timeout):
        return 0, false
    }
}
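For completeness, here is a sketch of how TimeoutCond might be used on the waiting side, together with an assumed Signal method. Both are illustrations; the real code in go-commons-pool/concurrent/cond.go may differ.

// Assumed counterpart to WaitWithTimeout: a non-blocking send wakes one waiter.
func (this *TimeoutCond) Signal() {
    select {
    case this.signal <- 1:
    default:
    }
}

// Typical waiting-side usage: wait for an idle object, but give up after the timeout.
// idleAvailable is a caller-supplied predicate, passed in to keep the sketch self-contained.
func borrowIdle(cond *TimeoutCond, timeout time.Duration, idleAvailable func() bool) bool {
    cond.L.Lock()
    defer cond.L.Unlock()
    for !idleAvailable() {
        remain, interrupted := cond.WaitWithTimeout(timeout)
        if interrupted || remain <= 0 {
            return false // timed out or interrupted: the caller can fail over
        }
        timeout = remain // keep waiting only for the documented remaining time
    }
    return true
}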

Problems with the map mechanism

Strictly speaking, this is not a multithreading problem. Although Golang's map is not thread-safe, it is easy to wrap it with a mutex. The key problem is that, as mentioned earlier, the map used to maintain all objects in the pool uses the user-defined object as the key and a PooledObject as the value, and Golang's constraint on map keys is (go-spec#map_types):

The comparison operators == and != must be fully defined for operands of the key type; thus the key type must not be a function, map, or slice. If the key type is an interface type, these comparison operators must be defined for the dynamic key values; failure will cause a run-time panic.
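To make the quoted constraint concrete, here is a minimal example of the run-time panic when an interface-typed key holds a non-comparable dynamic value:

package main

// Demonstrates the run-time panic mentioned in the spec quote above.
func main() {
    m := map[interface{}]int{}
    m["ok"] = 1        // fine: string is a comparable dynamic type
    m[[]int{1, 2}] = 2 // panics at run time: hash of unhashable type []int
}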

This means the key cannot be a value that is not comparable, such as a slice, map, or function. But our key is a user-defined object, which we have no way to constrain. So, borrowing the idea of Java's IdentityHashMap, the key is converted to the object's pointer address; in effect, the map holds the key object's pointer address.

type SyncIdentityMap struct {
    sync.RWMutex
    m map[uintptr]interface{}
}

func (this *SyncIdentityMap) Get(key interface{}) interface{} {
    this.RLock()
    keyPtr := genKey(key)
    value := this.m[keyPtr]
    this.RUnlock()
    return value
}

func genKey(key interface{}) uintptr {
    keyValue := reflect.ValueOf(key)
    return keyValue.Pointer()
}

The downside of this is that objects stored in the pool must be pointers, not value objects; values such as string or int cannot be saved to the pool (a short sketch below shows why).
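As a sketch of the other half of this map, here is an illustrative Put to pair with Get (the real SyncIdentityMap may differ), plus a note on why value types break it: reflect.Value.Pointer is only defined for pointer-like kinds, so calling genKey on a string or an int panics.

// Illustrative Put to pair with Get above; not necessarily the library's exact code.
func (this *SyncIdentityMap) Put(key interface{}, value interface{}) {
    this.Lock()
    if this.m == nil {
        this.m = make(map[uintptr]interface{})
    }
    this.m[genKey(key)] = value // genKey panics if key is not pointer-like
    this.Unlock()
}

// genKey(&MyConn{}) works: reflect.ValueOf(&MyConn{}).Pointer() is the address.
// genKey("a string") or genKey(42) panics: reflect.Value.Pointer is only defined
// for Chan, Func, Map, Ptr, Slice and UnsafePointer kinds.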

Other notes on multithreading

Golang's -race flag for go test is very useful; with go test -race I found a few data race bugs. See the commit "fix data race test error".

Go Commons Pool Follow-up work

    1. Continue to refine the test cases. The test cases are about half done, with roughly 88% coverage so far. The "translated" main code was relatively quick to write, but the test cases are more cumbersome, and multithreaded debugging is complex. For a generic base library, the test code ends up being 2-3 times the core logic code.
    2. Do a benchmark. The core algorithm should be fine since it has already been verified, but the mechanism of using channels to simulate timeouts may be a bottleneck; timer reuse should be considered here, see Terry-mao/goim (a sketch of the reuse pattern follows this list).
    3. Once the previous two items are done, an official version can be released, which could then be used to improve Redigo's pool.
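Here is a sketch of the timer-reuse idea mentioned in item 2 (a generic pattern, not goim's actual implementation): instead of allocating a new timer via time.After on every wait, keep one time.Timer and Reset it before each wait.

package main

import (
    "fmt"
    "time"
)

// waitLoop reuses a single Timer across waits instead of calling time.After,
// which allocates a new Timer on every call.
func waitLoop(signal <-chan int, timeout time.Duration, rounds int) {
    timer := time.NewTimer(timeout)
    defer timer.Stop()
    for i := 0; i < rounds; i++ {
        // Stop and drain the shared timer before reusing it.
        if !timer.Stop() {
            select {
            case <-timer.C:
            default:
            }
        }
        timer.Reset(timeout)
        select {
        case <-signal:
            fmt.Println("signalled")
        case <-timer.C:
            fmt.Println("timed out")
        }
    }
}

func main() {
    signal := make(chan int)
    go func() { signal <- 1 }()
    waitLoop(signal, 50*time.Millisecond, 2) // round 1: signalled, round 2: timed out
}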
