golang mgo MongoDB connection pooling: you must set maxPoolSize manually

Panda TV's gift system uses mgo, the Go MongoDB driver. We hit a few pitfalls along the way; this post summarizes them so you can avoid the same traps.

The mgo documentation does say that connection reuse is enabled, but observation and experiment show that this does not really control the number of connections: reuse only holds within the current operation (up until session.Close). In the end the programmer still has to limit the connections.

Enough preamble; on to the code.

GlobalMgoSession, err := mgo.Dial(host) // created once at program startup

func (m *MongoBaseDao) Get(tablename string, id string, result interface{}) interface{} {
    session := GlobalMgoSession.Clone() // clone the global session for this operation
    defer session.Close()               // release the socket when the operation ends

    collection := session.DB(globalMgoDbName).C(tablename)
    err := collection.FindId(bson.ObjectIdHex(id)).One(result)

    if err != nil {
        logkit.Logger.Error("mongo_base method:Get " + err.Error())
    }
    return result
}

 

When the Go program starts (in main), we create one global session. For every request we Clone the session's configuration and connection for use by that request, and call session.Close() afterwards to release the connection.
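As a usage example, here is a minimal, hypothetical sketch of a request handler calling the Get method above. The handler, the GiftRecord type and the "gifts" collection name are assumptions made for illustration; only MongoBaseDao and GlobalMgoSession come from the code shown earlier.

package main

import (
    "fmt"
    "net/http"

    "gopkg.in/mgo.v2/bson"
)

// GiftRecord is a hypothetical document type used only for this example.
type GiftRecord struct {
    ID   bson.ObjectId `bson:"_id"`
    Name string        `bson:"name"`
}

// giftHandler follows the per-request pattern: every call goes through the DAO,
// which clones the global session and closes it again when the operation ends.
func giftHandler(w http.ResponseWriter, r *http.Request) {
    dao := &MongoBaseDao{}
    var gift GiftRecord
    dao.Get("gifts", r.URL.Query().Get("id"), &gift)
    fmt.Fprintf(w, "gift: %s\n", gift.Name)
}

In mgo's source, Clone and Close look like this: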

// Clone works just like Copy, but also reuses the same socket as the original
// session, in case it had already reserved one due to its consistency
// guarantees.  This behavior ensures that writes performed in the old session
// are necessarily observed when using the new session, as long as it was a
// strong or monotonic session.  That said, it also means that long operations
// may cause other goroutines using the original session to wait.
func (s *Session) Clone() *Session {
    s.m.Lock()
    scopy := copySession(s, true)
    s.m.Unlock()
    return scopy
}

// Close terminates the session.  It's a runtime error to use a session
// after it has been closed.
func (s *Session) Close() {
    s.m.Lock()
    if s.cluster_ != nil {
        debugf("Closing session %p", s)
        s.unsetSocket() // releases the socket held by this session and sets it to nil
        s.cluster_.Release()
        s.cluster_ = nil
    }
    s.m.Unlock()
}

The comment on Clone says it reuses the original session's socket. But when concurrency spikes and other goroutines have not yet released their connections, what does the current goroutine do?

func (s *Session) acquireSocket(slaveOk bool) (*mongoSocket, error) {
    // Read-only lock to check for previously reserved socket.
    s.m.RLock()

    // If there is a slave socket reserved and its use is acceptable, take it as long
    // as there isn't a master socket which would be preferred by the read preference mode.
    if s.slaveSocket != nil && s.slaveOk && slaveOk && (s.masterSocket == nil || s.consistency != PrimaryPreferred && s.consistency != Monotonic) {
        socket := s.slaveSocket
        socket.Acquire()
        s.m.RUnlock()
        logkit.Logger.Info("sgp_test 1 acquireSocket slave is ok!")
        return socket, nil
    }
    if s.masterSocket != nil {
        socket := s.masterSocket
        socket.Acquire()
        s.m.RUnlock()
        logkit.Logger.Info("sgp_test 1  acquireSocket master is ok!")
        return socket, nil
    }

    s.m.RUnlock()

    // No go.  We may have to request a new socket and change the session,
    // so try again but with an exclusive lock now.
    s.m.Lock()
    defer s.m.Unlock()

    if s.slaveSocket != nil && s.slaveOk && slaveOk && (s.masterSocket == nil || s.consistency != PrimaryPreferred && s.consistency != Monotonic) {
        s.slaveSocket.Acquire()
        logkit.Logger.Info("sgp_test 2  acquireSocket slave is ok!")
        return s.slaveSocket, nil
    }
    if s.masterSocket != nil {
        s.masterSocket.Acquire()
        logkit.Logger.Info("sgp_test 2  acquireSocket master is ok!")
        return s.masterSocket, nil
    }

    // Still not good.  We need a new socket.
    sock, err := s.cluster().AcquireSocket(s.consistency, slaveOk && s.slaveOk, s.syncTimeout, s.sockTimeout, s.queryConfig.op.serverTags, s.poolLimit)
    ......
    logkit.Logger.Info("sgp_test 3   acquireSocket cluster AcquireSocket is ok!")
    return sock, nil
}

Add some debug logging to the source, and the resulting log says it all:

Mar 25 09:46:40 dev02.com[12607]:  [info] sgp_test 1  acquireSocket master is ok!
Mar 25 09:46:40 dev02.com[12607]:  [info] sgp_test 1  acquireSocket master is ok!
Mar 25 09:46:41 dev02.com[12607]:  [info] sgp_test 1 acquireSocket slave is ok!
Mar 25 09:46:41 dev02.com[12607]:  [info] sgp_test 3   acquireSocket cluster AcquireSocket is ok!
Mar 25 09:46:41 dev02.com[12607]:  [info] sgp_test 3   acquireSocket cluster AcquireSocket is ok!
Mar 25 09:46:41 dev02.com[12607]:  [info] sgp_test 3   acquireSocket cluster AcquireSocket is ok!

It keeps creating new connections through AcquireSocket:

 $  netstat -nat|grep -i 27017|wc -l

400

If sessions are never Closed, the connection count climbs to a frightening 4096 and chokes off all other requests, so whenever you Clone or Copy a session you must defer Close it.
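One way to make the defer-Close discipline impossible to forget is to route every operation through a small wrapper, as in the sketch below. withSession and countGifts are names of our own, not part of mgo, and the "gifts" collection is made up; GlobalMgoSession and globalMgoDbName come from the earlier code.

import mgo "gopkg.in/mgo.v2"

// withSession runs fn with a cloned session and guarantees the socket is
// released afterwards, whatever fn does.
func withSession(fn func(s *mgo.Session) error) error {
    session := GlobalMgoSession.Clone()
    defer session.Close()
    return fn(session)
}

// countGifts shows the wrapper in use: the caller never touches Clone/Close.
func countGifts() (int, error) {
    var n int
    err := withSession(func(s *mgo.Session) error {
        var cerr error
        n, cerr = s.DB(globalMgoDbName).C("gifts").Count()
        return cerr
    })
    return n, err
}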

Enabling the pool limit (maxPoolSize / SetPoolLimit) caps the total number of connections; once the limit is reached, the current goroutine sleeps and waits until a connection becomes available. Under high concurrency the locking is imperfect, so a few extra connections may still be created.

 
src/gopkg.in/mgo.v2/cluster.go:

    s, abended, err := server.AcquireSocket(poolLimit, socketTimeout)
    if err == errPoolLimit {
        if !warnedLimit {
            warnedLimit = true
            logkit.Logger.Error("sgp_test WARNING: Per-server connection limit reached. " + err.Error())
            log("WARNING: Per-server connection limit reached.")
        }
        time.Sleep(100 * time.Millisecond)
        continue
    }

session.go:

// SetPoolLimit sets the maximum number of sockets in use in a single server
// before this session will block waiting for a socket to be available.
// The default limit is 4096.
//
// This limit must be set to cover more than any expected workload of the
// application. It is a bad practice and an unsupported use case to use the
// database driver to define the concurrency limit of an application. Prevent
// such concurrency "at the door" instead, by properly restricting the amount
// of used resources and number of goroutines before they are created.
func (s *Session) SetPoolLimit(limit int) {
    s.m.Lock()
    s.poolLimit = limit
    s.m.Unlock()
}

How to set the connection pool limit (a combined startup sketch follows these two options):

1. In the configuration (connection string), add:

[host]:[port]?maxPoolSize=10

2. Or in code:

dao.GlobalMgoSession.SetPoolLimit(10)
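Putting the two options together, startup code might look like the sketch below. InitMongo and mongoURL are names we made up, the 10 is just the value used in this test, and the extra SetSocketTimeout call is an optional safeguard rather than something required here.

package dao

import (
    "time"

    mgo "gopkg.in/mgo.v2"
)

var GlobalMgoSession *mgo.Session

// InitMongo dials once at startup and caps the pool both via the URL option
// and via SetPoolLimit, so the limit holds even if the URL is edited later.
func InitMongo(mongoURL string) error {
    // e.g. mongoURL = "mongodb://10.0.0.1:27017/gift?maxPoolSize=10"
    session, err := mgo.DialWithTimeout(mongoURL, 5*time.Second)
    if err != nil {
        return err
    }
    session.SetPoolLimit(10)                  // same cap as maxPoolSize above
    session.SetSocketTimeout(1 * time.Minute) // avoid goroutines holding sockets forever
    GlobalMgoSession = session
    return nil
}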

Run the load test again (one way to generate this kind of load is sketched after the numbers):

 $  netstat -nat|grep -i 27017|wc -l

15
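The original article does not say how its load was generated, so the following is only one possible way to reproduce the measurement; the concurrency parameters, the "gifts" collection and the someKnownID placeholder are all made up. Watch the connection count with the netstat command above while it runs.

import (
    "sync"

    "gopkg.in/mgo.v2/bson"
)

// someKnownID stands in for the hex ObjectId of an existing document.
const someKnownID = "0123456789abcdef01234567"

// loadTest is a rough load generator: many goroutines call the DAO in
// parallel so that sockets are acquired concurrently.
func loadTest(concurrency, requests int) {
    var wg sync.WaitGroup
    dao := &MongoBaseDao{}
    for i := 0; i < concurrency; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < requests; j++ {
                var out bson.M
                dao.Get("gifts", someKnownID, &out)
            }
        }()
    }
    wg.Wait()
}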

Conclusion:

Every time a session is cloned, calling session.Close at the end of the operation unsets the socket (sets it to nil), so socket reuse only holds within that session's lifetime. Non-global sessions therefore cannot share sockets: each incoming goroutine opens its own socket connection, up to the default maximum of 4096. Since MongoDB's server-side connection limit is typically around 10,000, a single port can really only support one such process before the connections are exhausted. Too many connections make the client inefficient and cost the server even more in memory and CPU, so you need to configure your own pool limit. And even with the pool limit enabled, beware: if poolLimit goroutines run for too long, or spin forever without releasing their sockets, you are still in trouble.

At the lowest level, mgo does not pre-establish sockets or reuse a connection pool across the whole process lifetime; you have to optimize that yourself.
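If you do want sockets established ahead of traffic, one possible workaround (untested here, and dependent on mgo keeping idle sockets around) is to warm the pool at startup by holding several copied sessions open at once before releasing them:

import mgo "gopkg.in/mgo.v2"

// warmUp forces up to n sockets to be opened by holding n copied sessions
// open at the same time, then returns them all to the pool.
func warmUp(root *mgo.Session, n int) {
    sessions := make([]*mgo.Session, 0, n)
    for i := 0; i < n; i++ {
        s := root.Copy()
        if err := s.Ping(); err != nil { // the first operation acquires a socket
            logkit.Logger.Error("warm-up ping failed: " + err.Error())
        }
        sessions = append(sessions, s)
    }
    for _, s := range sessions {
        s.Close() // the sockets go back to the server pool for reuse
    }
}

Measure with netstat afterwards before relying on this; if the driver drops the idle sockets, the warm-up buys nothing.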

 
