Golang Redis Connection Pool

Source: Internet
Author: User
Tags: fpm, pconnect, sprintf, nginx, reverse proxy

Recently I have been driven half to death by logging. Writing logs straight to files is by far the most efficient approach, but it falls apart once the system is distributed. With some effort NFS can be made to work, but it is never that convenient.

In the end I decided to use Redis. I assumed it would be fine since everything runs in memory, but it turned out the TCP connection overhead was a mess: server load shot up, and netstat -an showed a pile of connections in TIME_WAIT. Even SSHing into the server became slow. As the saying goes, in martial arts only speed is unbeatable; at this speed, an eighty-year-old lady could finish a square dance before the server responded.

Since TCP connections are that expensive, the first task was obviously to fix the connection problem. Opening one connection per request is clearly unreliable; at that point you might as well write the log straight to disk, except, as noted at the start, that does not support a distributed setup.

So, can all requests share a single connection? Clearly not: many php-fpm workers fighting over one connection just creates another bottleneck. But one connection per php-fpm worker? That is workable. So PHP uses a persistent Redis connection, with php-fpm configured as follows:

pm = static
pm.max_children = 400
pm.max_requests = 10240

And in PHP, use pconnect instead of connect:

$redis = new Redis();
$redis->pconnect('192.168.0.2', 6379); // intranet Redis server
$redis->lpush('list', 'Just a test');

With this setup, 400 php-fpm workers handle the requests proxied in by Nginx, and each worker serves 10,240 requests before being recycled. In other words, 10,240 requests share a single Redis connection, which obviously costs far less than connecting once per request. The load-test results bore this out: throughput jumped from a few thousand requests per second to over ten thousand.

Here is the load-test command as well, since quite a few people do not know it:

ab -n 100000 -c 200 <url>    # 200 concurrent clients issuing 100,000 requests to the URL under test

Of course, to verify that only 400 php-fpm workers are really serving Nginx, you can use the following code:

$pid = getmypid();
touch("pids/" . $pid);

Run the load test again and check whether the pids directory contains 400 files named after the php-fpm process IDs. It will not be exactly 400, of course, because there are also one or two manager processes.

The load-test results were ideal, so is the problem solved? Not quite; PHP is never as cooperative as you hope. MySQL persistent connections, for example, time out after 8 hours by default. I did not check the figure for Redis, but suppose it is also 8 hours: a php-fpm worker will have finished its 10,240 requests long before then, yet the persistent connection is neither closed nor reusable. Over time the connection count only grows; will memory run out one day?

You might say: measure how long 10,240 requests take and set the Redis connection timeout close to that. The problem is that real networks are never that well behaved. If the network is bad, the connection times out before the 10,240 requests finish; if the network is very good, several rounds of 10,240 requests fit inside one timeout window and the connections still pile up.

Fortunately Redis is a single-process, single-threaded server using IO multiplexing, so connections that linger unclosed do not have much impact. MySQL is multithreaded; the more threads it opens, the worse the same problem becomes.

So is there a way to manage these long-lived connections, so that they are never closed and, once one worker is done with them, can serve other php-fpm workers? The answer is a connection pool.

The principle, copied from the web:

The basic idea of connection pooling is that, at system initialization, database connections are created and kept in memory as objects. When a user needs to access the database, an established idle connection is taken from the pool instead of a new one being opened; when the user is done, the connection is not closed but returned to the pool for the next request to use. The pool itself manages establishing and disconnecting connections, and its parameters control the number of initial connections, the upper and lower bounds on connections, the maximum number of uses per connection, the maximum idle time, and so on. The pool's own management mechanism can also monitor the number of connections, their usage, and more.
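The idea above can be sketched in Go with a buffered channel standing in for the pool. Everything here (the fake conn type, the dial counter, the pool size of 5) is illustrative rather than any real client library; the point is only that many requests end up reusing a handful of connections:

```go
package main

import "fmt"

// conn stands in for a real network connection.
type conn struct{ id int }

var dialCount int

// dial simulates establishing a new (expensive) connection.
func dial() *conn {
	dialCount++
	return &conn{id: dialCount}
}

// pool is a buffered channel: idle connections wait in it.
var pool = make(chan *conn, 5)

// get takes an idle connection, or dials when the pool is empty.
func get() *conn {
	select {
	case c := <-pool:
		return c
	default:
		return dial()
	}
}

// put returns a connection to the pool; if the pool is already
// full, the connection is simply discarded (closed).
func put(c *conn) {
	select {
	case pool <- c:
	default: // pool full: discard
	}
}

func main() {
	// 100 sequential requests end up sharing one connection.
	for i := 0; i < 100; i++ {
		c := get()
		put(c) // "use" the connection, then hand it back
	}
	fmt.Println("dials:", dialCount) // prints "dials: 1"
}
```

A real pool would additionally validate connections before handing them out and enforce the idle-time and usage limits described above.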

Implementing a pool along these lines is a real challenge in PHP, because php-fpm workers do not share memory. So I chose Go, for a simple reason: it is about as easy to write as PHP, while Java's environment setup is so tedious that I finally gave up on it; or rather, I am too bad at it to use it.

There are plenty of Go libraries for Redis; I settled on github.com/garyburd/redigo/redis and, following articles found online, wrote a connection pool example:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "runtime"
    "time"

    "github.com/garyburd/redigo/redis"
)

// connection pool size
var MAX_POOL_SIZE = 20

var redisPool chan redis.Conn

func putRedis(conn redis.Conn) {
    // On the principle that functions and interfaces should not trust
    // each other, make sure the pool exists before using it.
    if redisPool == nil {
        redisPool = make(chan redis.Conn, MAX_POOL_SIZE)
    }
    if len(redisPool) >= MAX_POOL_SIZE {
        conn.Close()
        return
    }
    redisPool <- conn
}

func initRedis(network, address string) redis.Conn {
    // The buffered channel acts as a queue of idle connections.
    if len(redisPool) == 0 {
        // Empty: create a channel of redis.Conn with capacity
        // MAX_POOL_SIZE and fill half of it in the background.
        redisPool = make(chan redis.Conn, MAX_POOL_SIZE)
        go func() {
            for i := 0; i < MAX_POOL_SIZE/2; i++ {
                c, err := redis.Dial(network, address)
                if err != nil {
                    panic(err)
                }
                putRedis(c)
            }
        }()
    }
    return <-redisPool
}

func redisServer(w http.ResponseWriter, r *http.Request) {
    startTime := time.Now()
    c := initRedis("tcp", "192.168.0.237:6379")
    dbKey := "netgame:info"
    if ok, err := redis.Bool(c.Do("LPUSH", dbKey, "yanetao")); !ok {
        log.Print(err)
    }
    msg := fmt.Sprintf("spent: %s", time.Now().Sub(startTime))
    io.WriteString(w, msg+"\n")
}

func main() {
    // Use all CPU cores to handle HTTP requests; by default Go served
    // HTTP on a single core. This has been load-tested, trust me.
    runtime.GOMAXPROCS(runtime.NumCPU())
    http.HandleFunc("/", redisServer)
    http.ListenAndServe(":9527", nil)
}

The code above looks like a connection pool, but the load-test results were far from ideal: not only was the request rate low, it also threw errors.

The request rate was low because the connection cost was never actually eliminated. The first time through, a goroutine dials MAX_POOL_SIZE/2 = 10 Redis connections and queues them in the channel (the channel really is just a message queue). Once those 10 connections are consumed, another 10 are dialed. The total number of TCP connections is not reduced at all; there are still as many connections as requests. The dialing has merely been shifted in time, and after every batch of 10 the next unlucky request is stuck waiting on a connection.

The errors appeared because I was testing on the intranet and requests came in too fast: before the first wave of 10 connections had finished dialing, the second wave arrived, and then yet another big wave after that, until several waves of goroutines were stepping on each other and connections started failing.

So the effect of a connection pool was still not achieved.

The idea of a connection pool is this: say 40 connections are opened by default; all incoming requests are served by those 40 connections.

When all 40 are in use, further requests either queue up, or the pool grows by a few extra connections to cope.
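That queue-or-grow policy can be sketched like this, using the 40-plus-a-few numbers from this example (the conn type and the fake dial are stand-ins for a real Redis client, purely for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

const (
	baseConns = 40 // connections opened up front
	maxConns  = 45 // hard ceiling: base plus a few overflow
)

type conn struct{ id int }

var (
	mu   sync.Mutex
	live int // connections currently in existence
)

var pool = make(chan *conn, maxConns)

// dial simulates opening a new connection.
func dial() *conn {
	mu.Lock()
	defer mu.Unlock()
	live++
	return &conn{id: live}
}

// get hands out an idle connection, grows the pool while under
// the ceiling, and otherwise blocks until one is returned.
func get() *conn {
	select {
	case c := <-pool:
		return c
	default:
	}
	mu.Lock()
	underCeiling := live < maxConns
	mu.Unlock()
	if underCeiling {
		return dial()
	}
	return <-pool // queue: wait for a connection to come back
}

// put returns the connection for the next request to reuse.
func put(c *conn) { pool <- c }

func main() {
	// Prefill the pool with the 40 base connections.
	for i := 0; i < baseConns; i++ {
		put(dial())
	}

	// Simulate 45 requests holding connections at once: the 5
	// beyond the base make the pool grow, but no further.
	held := make([]*conn, 0, maxConns)
	for i := 0; i < maxConns; i++ {
		held = append(held, get())
	}
	fmt.Println("live connections:", live) // prints "live connections: 45"
	for _, c := range held {
		put(c)
	}
}
```

The key property is that the connection count is bounded by the ceiling no matter how many requests arrive; excess requests wait in line instead of dialing.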

Those 40 connections, plus the newly created extras (say 5 more, making 45), are never closed after use; they are put back into the pool, which is really a queue, to wait for other requests, so that every connection gets reused.

In other words, these must be long-lived connections that stay open the whole time.

The code above clearly gives only short-lived connections. I tried keeping the connections in a global variable, but each connection handled six or seven requests before the program reported it as unusable. So how do you get genuinely long-lived connections in Go? I finally found, in an article by a foreign developer, that the library itself ships a pool implementation. The code is as follows:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
    "runtime"
    "time"

    "github.com/garyburd/redigo/redis"
)

// The library's own pool: it keeps idle connections alive, reaps the
// ones idle for too long, and hands a connection to each request.
var pool = &redis.Pool{
    MaxIdle:     20,
    IdleTimeout: 240 * time.Second,
    Dial: func() (redis.Conn, error) {
        return redis.Dial("tcp", "192.168.0.237:6379")
    },
}

func redisServer(w http.ResponseWriter, r *http.Request) {
    startTime := time.Now()
    c := pool.Get()
    // Close on a pooled connection returns it to the pool for reuse
    // instead of tearing down the TCP connection.
    defer c.Close()

    dbKey := "netgame:info"
    if _, err := c.Do("LPUSH", dbKey, "yanetao"); err != nil {
        log.Print(err)
    }
    msg := fmt.Sprintf("spent: %s", time.Now().Sub(startTime))
    io.WriteString(w, msg+"\n")
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())
    http.HandleFunc("/", redisServer)
    http.ListenAndServe(":9527", nil)
}

Under load testing, pushing data into Redis with this code was almost as fast as simply writing a hello-world string back to the client, and checking the netgame:info list in Redis showed that not a single entry was lost. This is exactly the effect I wanted. A Go+Redis logging system is finally born. Hahaha.

Reference:

A foreign developer asking whether it is safe to keep the connection in a global variable (without pooling, such short connections will of course time out)

The official pool implementation source

Connection pooling concepts

