How to use Redis connection pooling in Go with radix.v2



First, about the connection pool

A database server has only limited resources. As long as those resources are not fully utilized, you can increase throughput by adding more connections. Once all the resources are in use, however, adding more connections will not raise throughput; in fact, throughput starts to drop once the connection load gets large. You can typically improve both latency and throughput by limiting the number of database connections to match the available resources.

If we don't use connection pooling, then every data transfer means creating a connection, sending and receiving the data, and closing the connection. At low concurrency this is basically fine, but once concurrency climbs, you will generally run into the following common problems:

    • Performance plateaus and cannot be pushed any higher
    • CPU time is eaten up by the system on connection setup and teardown
    • Any network jitter generates a flood of TIME_WAIT sockets, forcing you to restart the service or the machine periodically
    • The service is unstable, with QPS swinging high and low

To solve these problems, we need a connection pool. The idea of connection pooling is simple: at initialization time, create a certain number of connections and keep them all as long-lived connections; whoever needs one takes it from the pool, and puts it back as soon as the work is done. If requests exceed the pool's capacity, you can queue and wait, degrade to a short-lived connection, or simply discard the request.
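To make the idea concrete, here is a minimal sketch of such a channel-based pool in Go. It is illustrative only: the connpool package, the ConnPool type, and its method names are mine, not radix.v2's, but this is the same principle that the radix.v2 pool (and the pool I describe below) is built on.

    package connpool

    import (
        "errors"
        "net"
    )

    // ConnPool keeps pre-dialed, long-lived connections in a buffered channel.
    type ConnPool struct {
        conns chan net.Conn
    }

    // New dials size connections up front and stores them all in the pool.
    func New(network, addr string, size int) (*ConnPool, error) {
        p := &ConnPool{conns: make(chan net.Conn, size)}
        for i := 0; i < size; i++ {
            c, err := net.Dial(network, addr)
            if err != nil {
                return nil, err
            }
            p.conns <- c
        }
        return p, nil
    }

    // Get takes a connection out of the pool. This sketch fails fast when the
    // pool is empty; a real pool might instead block, fall back to a
    // short-lived connection, or drop the request, as described above.
    func (p *ConnPool) Get() (net.Conn, error) {
        select {
        case c := <-p.conns:
            return c, nil
        default:
            return nil, errors.New("pool exhausted")
        }
    }

    // Put returns a connection to the pool as soon as the work is done.
    func (p *ConnPool) Put(c net.Conn) {
        select {
        case p.conns <- c:
        default:
            c.Close() // pool already full; discard the extra connection
        }
    }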

Second, a pit encountered when using the connection pool

Recently, for a project, I needed to implement a simple web server that exposes an HTTP interface to Redis and returns results as JSON. I decided to build it in Go.

First, I looked at the Go Redis drivers officially recommended by Redis. There are two starred projects on the official clients page: radix.v2 and Redigo. After a quick comparison, I chose the lighter and more elegant radix.v2.

The radix.v2 package is divided into sub-packages by function, each in its own subdirectory, so the structure is very clear. The sub-packages I use in this project are redis and pool.
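For orientation, here is a minimal example of the pool sub-package's basic usage. It is a sketch under assumptions: the address 127.0.0.1:6379 and the pool size of 3 are placeholders of my choosing.

    package main

    import (
        "log"

        "github.com/mediocregopher/radix.v2/pool"
    )

    func main() {
        // Create a pool of 3 long-lived connections to a local Redis.
        p, err := pool.New("tcp", "127.0.0.1:6379", 3)
        if err != nil {
            log.Fatal(err)
        }
        defer p.Empty()

        // Cmd borrows a connection, runs the command, and puts it back.
        if err := p.Cmd("SET", "hello", "world").Err; err != nil {
            log.Fatal(err)
        }
        v, err := p.Cmd("GET", "hello").Str()
        if err != nil {
            log.Fatal(err)
        }
        log.Println("hello =", v) // prints: hello = world
    }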

Rather than diving straight into the radix.v2 pool, I chose to implement a Redis pool of my own, since I wanted to keep things simple and have it do just one thing. (No code is posted here.) I later found that the pool I wrote and the radix.v2 pool implementation work on the same principle: both are built on a channel, and both ran into the same problem.

However, during testing, a strange problem appeared: requests frequently failed with EOF errors, and only intermittently; sometimes a request failed, sometimes it succeeded. Repeated testing revealed a pattern to the bug: after the program sat idle for a while, a burst of consecutive requests would produce exactly 3 failures before subsequent requests succeeded, and my connection pool size was set to 3. Analyzing further, requests failed once the program had been idle for 300 seconds, and my Redis server was configured with timeout 300. The problem was now clear: the Redis server actively disconnects connections that have been idle past the timeout, and a request sent on such a timed-out connection gets an EOF error on the client side.
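As a sanity check, you can read the server-side setting back through the same driver. This is an illustrative snippet (the address is a placeholder); CONFIG GET returns a two-element list holding the parameter name and its value.

    package main

    import (
        "fmt"
        "log"

        "github.com/mediocregopher/radix.v2/redis"
    )

    func main() {
        client, err := redis.Dial("tcp", "127.0.0.1:6379")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CONFIG GET timeout -> ["timeout", "300"]
        l, err := client.Cmd("CONFIG", "GET", "timeout").List()
        if err != nil {
            log.Fatal(err)
        }
        if len(l) == 2 {
            fmt.Println("server idle timeout:", l[1], "seconds")
        }
    }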

I then read the source of the radix.v2 pool package and found that the library itself has no mechanism for detecting a dead connection and replacing it with a fresh one. In other words, any connection you take from the pool may be a dead one. So my temporary fix was to add an automatic retry after a failure. But with that workaround, the connection pool might as well not exist. Technical debt is better paid off sooner rather than later.
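For illustration, that temporary workaround could look like the sketch below. The article does not show this code; the helper name cmdWithRetry and the retry policy are my own.

    package redisutil

    import (
        "io"

        "github.com/mediocregopher/radix.v2/pool"
        "github.com/mediocregopher/radix.v2/redis"
    )

    // cmdWithRetry re-issues a command when it fails with io.EOF, which is how
    // a connection that the server has already closed surfaces on the client.
    func cmdWithRetry(p *pool.Pool, retries int, cmd string, args ...interface{}) *redis.Resp {
        var r *redis.Resp
        for i := 0; i <= retries; i++ {
            r = p.Cmd(cmd, args...)
            if r.Err != io.EOF {
                break // success, or an error that a retry will not fix
            }
        }
        return r
    }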

Third, the correct posture for using the connection pool

Then it occurred to me that our ngx_lua projects also make heavy use of Redis connection pools, yet they never ran into this problem. The only way to find out why was to read the source.

After abstracting it, the ngx_lua code for using a Redis connection pool looks roughly like this:

    server {
        location /pool {
            content_by_lua_block {
                local redis = require "resty.redis"
                local red = redis:new()

                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then
                    ngx.say("failed to connect: ", err)
                    return
                end

                ok, err = red:set("hello", "world")
                if not ok then
                    return
                end

                red:set_keepalive(10000, 100)
            }
        }
    }

There is a set_keepalive method. Checking the official documentation, its prototype is: syntax: ok, err = red:set_keepalive(max_idle_timeout, pool_size). The max_idle_timeout parameter looked like exactly the thing we were missing, so I traced the source further to see how it keeps connections valid.

    function _M.set_keepalive(self, ...)
        local sock = self.sock
        if not sock then
            return nil, "not initialized"
        end

        if self.subscribed then
            return nil, "subscribed state"
        end

        return sock:setkeepalive(...)
    end

At this point it became clear: it relies on TCP's keepalive heartbeat mechanism.
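For comparison, this is what enabling TCP-level keepalive looks like from Go's standard library. It is a sketch only, with a placeholder address and period, and it is not what the final solution below does; note that these probes live at the TCP layer and are not Redis commands.

    package main

    import (
        "log"
        "net"
        "time"
    )

    func main() {
        conn, err := net.Dial("tcp", "127.0.0.1:6379")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Ask the kernel to send TCP keepalive probes on this connection.
        tcpConn := conn.(*net.TCPConn)
        if err := tcpConn.SetKeepAlive(true); err != nil {
            log.Fatal(err)
        }
        if err := tcpConn.SetKeepAlivePeriod(60 * time.Second); err != nil {
            log.Fatal(err)
        }
        log.Println("TCP keepalive enabled")
    }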

So, after discussing it with the author of radix.v2, I chose to solve the problem with a heartbeat at the Redis layer instead.

Fourth, the final solution

After the connection pool is created, a goroutine is started that sends a PING to the Redis server every idleTime, where idleTime is set slightly smaller than the timeout configured on the Redis server.
The connection-pool initialization code is as follows:

    p, err := pool.New("tcp", u.Host, concurrency)
    errHndlr(err)

    go func() {
        for {
            p.Cmd("PING")
            time.Sleep(idleTime * time.Second)
        }
    }()

The code that transfers data through Redis is as follows:

    func redisDo(p *pool.Pool, cmd string, args ...interface{}) (reply *redis.Resp, err error) {
        reply = p.Cmd(cmd, args...)
        if err = reply.Err; err != nil {
            if err != io.EOF {
                Fatal.Println("redis", cmd, args, "err is", err)
            }
        }
        return
    }
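Tying this back to the goal stated at the beginning, an HTTP interface to Redis that returns JSON, here is an illustrative handler built on redisDo. It assumes it lives in the same package as the redisDo helper above; the query parameter and response fields are placeholders of my own.

    import (
        "encoding/json"
        "net/http"

        "github.com/mediocregopher/radix.v2/pool"
    )

    // getHandler returns an http.HandlerFunc that reads ?key=... from the
    // request and responds with the corresponding Redis value as JSON.
    func getHandler(p *pool.Pool) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            key := r.URL.Query().Get("key")
            reply, err := redisDo(p, "GET", key)
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            val, _ := reply.Str()
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(map[string]string{"key": key, "value": val})
        }
    }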

The radix.v2 pool handles getting a connection from the pool and putting it back internally for every command; the code is as follows. Note that the pool is backed by a buffered channel, so Get hands out the connection that has been idle longest and Put appends to the back of the queue, which is why the periodic PINGs above rotate through the pooled connections.

    // Cmd automatically gets one client from the pool, executes the given
    // command (returning its result), and puts the client back in the pool
    func (p *Pool) Cmd(cmd string, args ...interface{}) *redis.Resp {
        c, err := p.Get()
        if err != nil {
            return redis.NewResp(err)
        }
        defer p.Put(c)

        return c.Cmd(cmd, args...)
    }

In this way, we have a keepalive mechanism, connections no longer time out, and every connection taken from the Redis connection pool is a usable one. It looks like trivially simple code, yet it neatly solves the problem of timed-out connections inside the pool. At the same time, even if the Redis server restarts, the connections are automatically re-established.

This article is a technical contribution from UPYUN. (Editor: Chang)

