Overview:
Why connection pooling is required
Connection failure issues
Connection pooling in database/sql
Using connection pooling to manage thrift connections
The following uses Go as the programming language.
Why connection pooling is required
I think one of the biggest benefits of using a connection pool is that it reduces the cost of creating and closing connections, which increases the load capacity of the system.
I once ran into a problem where a large number of TCP connections in the TIME_WAIT state made a service unavailable: the database connection pool was not enabled, and with high MySQL concurrency, connections had to be created frequently, resulting in tens of thousands of TCP connections stuck in TIME_WAIT and degrading system performance.
A connection pool mainly manages a set of connections, including their creation and closing. So, building on fatih/pool, I made some modifications to make it more general: https://github.com/silenceper/pool. The added features are:
The connection object is no longer restricted to net.Conn; it is now interface{}, so the pool can store whatever type you want.
Added a maximum idle time for connections (a connection that has been idle for too long is treated as stale and discarded).
The pool manages connections through a channel, which fits Go's concurrency primitives well: call Get to take a connection out, and Put to return it to the channel when you are done.
Connection failure issues
With a connection pool, connections are long-lived rather than short-lived, which introduces some problems:
1. What if a connection is dropped after being idle for a long time?
Because the network environment is complex, a long-idle connection may be broken by firewalls or other middleboxes along the path. There are two ways to handle this:
The client adds a heartbeat, periodically sending a request to the server.
Set a maximum idle time in the connection pool; a connection that has been idle longer than this is no longer used.
In https://github.com/silenceper/pool, a maximum-idle-time parameter was added for this. Each connection carries a timestamp that is reset when the connection is created or returned to the pool; when a connection is taken out, that timestamp is compared against the limit: https://github.com/silenceper/pool/blob/master/channel.go#L85
2. What if the remote server restarts and connections become invalid?
The remote server may well restart, invalidating previously created connections. The client needs to detect these failed connections and discard them when they are used. In database/sql, a failed connection is represented by this error: var ErrBadConn = errors.New("driver: bad connection")
It is also worth mentioning that database/sql retries queries that fail with ErrBadConn; the default number of retries is two, so even if a connection has become invalid or disconnected, the request can still complete normally (this is analyzed further below).
Connection pooling in database/sql
Using the connection pool in database/sql is simple; it mainly involves the following configuration:
```go
db.SetMaxIdleConns(10)                   // maximum number of idle connections in the pool
db.SetMaxOpenConns(20)                   // maximum number of open connections
db.SetConnMaxLifetime(300 * time.Second) // maximum amount of time a connection may be reused (optional)
```
Note: if MaxIdleConns is greater than 0 and MaxOpenConns is set to a value below it, MaxIdleConns is reduced to match MaxOpenConns.
Take a look at the DB struct and the meaning of its fields:
```go
type DB struct {
	// The concrete driver implementation; for example,
	// https://github.com/go-sql-driver/mysql registers itself and implements
	// driver.Open, which among other things performs authentication.
	driver driver.Driver
	// DSN for the connection
	dsn string
	// Used with prepared statements
	numClosed uint64

	mu sync.Mutex // protects following fields
	// Idle connections available for reuse
	freeConn []*driverConn
	// Channels used to pass connection requests
	connRequests []chan connRequest
	// Number of currently open connections
	numOpen int
	// When a new connection needs to be created, a struct{} is sent on this
	// channel; Open starts a goroutine running connectionOpener that reads
	// from it.
	openerCh chan struct{}
	// Whether the database has been closed
	closed bool
	// Used to guarantee that resources are closed correctly
	dep map[finalCloser]depSet
	// stacktrace of last conn's put; debug only
	lastPut map[*driverConn]string
	// Maximum number of idle connections
	maxIdle int
	// Maximum number of open connections
	maxOpen int
	// Maximum amount of time a connection may be reused
	maxLifetime time.Duration
	// Channel used to trigger the periodic cleanup of idle connections
	cleanerCh chan struct{}
}
```
Here is an example of querying the database:
```go
rows, err := db.Query("select * from table1")
```
The db.Query method looks like this:
```go
func (db *DB) Query(query string, args ...interface{}) (*Rows, error) {
	var rows *Rows
	var err error
	// This is where failed connections are retried
	for i := 0; i < maxBadConnRetries; i++ {
		rows, err = db.query(query, args, cachedOrNewConn)
		if err != driver.ErrBadConn {
			break
		}
	}
	if err == driver.ErrBadConn {
		return db.query(query, args, alwaysNewConn)
	}
	return rows, err
}
```
To see under what circumstances ErrBadConn is returned, look at the driver's readPacket and writePacket implementations. Continuing down the call chain, connections are actually obtained here:
```go
func (db *DB) conn(strategy connReuseStrategy) (*driverConn, error)
```
This method mainly creates the TCP connection, checks the connection against maxLifetime, and enforces the limit on the number of connections: if the number of open connections has reached the configured maximum, it waits on a connRequest channel for a connection to become available (putConn writes to this channel when a connection is released).
When is a connection released?
When we call rows.Close(), the connection currently in use is either put back into freeConn or handed off via the db.connRequests channel.
```go
// putConnDBLocked:
// If there are requests waiting on db.connRequests, hand the current
// connection to the first waiter.
if c := len(db.connRequests); c > 0 {
	req := db.connRequests[0]
	// This copy is O(n) but in practice faster than a linked list.
	// TODO: consider compacting it down less often and
	// moving the base instead?
	copy(db.connRequests, db.connRequests[1:])
	db.connRequests = db.connRequests[:c-1]
	if err == nil {
		dc.inUse = true
	}
	req <- connRequest{
		conn: dc,
		err:  err,
	}
	return true
} else if err == nil && !db.closed && db.maxIdleConnsLocked() > len(db.freeConn) {
	// Nobody is waiting for this connection, so put it back into the
	// freeConn pool.
	db.freeConn = append(db.freeConn, dc)
	db.startCleanerLocked()
	return true
}
```
Using connection pooling to manage thrift connections
Here is how to build a thrift connection pool using https://github.com/silenceper/pool.
Code for creating the thrift client:
```go
type Client struct {
	*user.UserClient
}

// factory: method for creating a thrift client connection
factory := func() (interface{}, error) {
	protocolFactory := thrift.NewTBinaryProtocolFactoryDefault()
	transportFactory := thrift.NewTTransportFactory()
	var transport thrift.TTransport
	var err error
	transport, err = thrift.NewTSocket(rpcConfig.Listen)
	if err != nil {
		panic(err)
	}
	transport = transportFactory.GetTransport(transport)
	// defer transport.Close()
	if err := transport.Open(); err != nil {
		panic(err)
	}
	rpcClient := user.NewUserClientFactory(transport, protocolFactory)
	// Place the client object directly into the connection pool
	return &Client{UserClient: rpcClient}, nil
}

// close: method for closing a connection
close := func(v interface{}) error {
	v.(*Client).Transport.Close()
	return nil
}

// Create and initialize the connection pool
poolConfig := &pool.PoolConfig{
	InitialCap:  10,
	MaxCap:      20,
	Factory:     factory,
	Close:       close,
	IdleTimeout: 300 * time.Second,
}
p, err := pool.NewChannelPool(poolConfig)
if err != nil {
	panic(err)
}

// Take a connection from the pool
conn, err := p.Get()
if err != nil {
	return nil, err
}
v, ok := conn.(*Client)
// ... use the connection to invoke remote methods

// Put the connection back into the pool
p.Put(conn)
```
Pool connection pooling code address: https://github.com/silenceper ...
Original address: http://silenceper.com/blog/201611/%E8%81%8A%E8%81%8Atcp%E8%BF%9E%E6%8E%A5%E6%B1%A0/