This post is a bit long; feel free to jump straight to the conclusion.
Cluster functionality
The previous note only implemented single-instance forwarding of the Redis protocol; this time the goal is the complete cluster functionality, which involves the following points:
1. Dividing the code into logical modules: server, cluster topology, backend connection pool, session management
2. Pipeline implementation: wrapping each request with a sequence number (Seq) to strictly guarantee response order (partly speculative; more on this later in the article)
3. Parsing backend error replies, handling MOVED and ASK redirects specially, and updating the cluster topology asynchronously
4. Performance, the eternal topic, tuned step by step with pprof
Module division
Server layer: parses and builds the global configuration, initializes the other modules, opens the listening port, and accepts external requests. Filter, defined as an interface, inspects incoming Redis protocol data: it detects dangerous, forbidden, or unsupported commands and roughly checks the number of command arguments.
type Proxy struct {
	L       net.Listener // listener for client connections
	Filter  Filter       // Redis protocol validity filter
	PC      *ProxyConfig // global configuration
	SM      *SessMana    // session management
	Cluster *Cluster     // cluster implementation
}
type Filter interface {
	Inspect(Resp) (string, error)
}
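A minimal Filter implementation might look like the sketch below (imports strings and fmt): a blacklist of forbidden commands plus a rough arity check. The Command and Arity accessors on Resp are assumptions for illustration, not necessarily archer's actual API.

type blacklistFilter struct {
	forbidden map[string]bool // e.g. KEYS, FLUSHALL, SHUTDOWN
}

func (f *blacklistFilter) Inspect(r Resp) (string, error) {
	cmd := strings.ToUpper(r.Command()) // assumed accessor
	if f.forbidden[cmd] {
		return cmd, fmt.Errorf("command %s is forbidden", cmd)
	}
	if r.Arity() < 1 { // rough argument-count check; assumed accessor
		return cmd, fmt.Errorf("wrong number of arguments for %s", cmd)
	}
	return cmd, nil
}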
Cluster topology: a Redis node is picked at random and the logical topology is generated from the output of CLUSTER NODES. By default the topology is reloaded every 10 minutes; whenever ReloadChan receives a message, a reload is forced.
e32929a56d00a28934669d8e473f68c5de84abce 10.10.200.11:6479 myself,master - 0 0 0 connected 0-5461
type Topology struct {
	Conf       *ProxyConfig // global configuration
	RW         sync.RWMutex // read/write lock
	Slots      []*Slot      // logical topology of the cluster slots
	ReloadChan chan int     // reload message channel
}
Topology's most important function, GetNodeID, returns the backend Redis node for a given key: extract the hash tag from the key, run it through the CRC16 algorithm modulo 16384 to get the slot, and the Session then uses the returned node ID to fetch a connection from the pool.
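A minimal sketch of that lookup; crc16 below is the XModem CRC16 that Redis Cluster uses, while the NodeID field on Slot is an assumed name:

func hashTag(key []byte) []byte {
	// Only the part between the first '{' and the following '}'
	// participates in hashing, if such a non-empty tag exists.
	if s := bytes.IndexByte(key, '{'); s != -1 {
		if e := bytes.IndexByte(key[s+1:], '}'); e > 0 {
			return key[s+1 : s+1+e]
		}
	}
	return key
}

// crc16 is the XModem CRC16 (poly 0x1021) used by Redis Cluster.
func crc16(data []byte) uint16 {
	var crc uint16
	for _, b := range data {
		crc ^= uint16(b) << 8
		for i := 0; i < 8; i++ {
			if crc&0x8000 != 0 {
				crc = crc<<1 ^ 0x1021
			} else {
				crc <<= 1
			}
		}
	}
	return crc
}

func (t *Topology) GetNodeID(key []byte) string {
	slot := int(crc16(hashTag(key))) % 16384
	t.RW.RLock()
	defer t.RW.RUnlock()
	return t.Slots[slot].NodeID // NodeID is an assumed Slot field
}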
Session management: each client connection is wrapped into a Session; the server layer maintains a SessMana that manages sessions and closes timed-out connections, 30s idle by default.
type SessMana struct {
	L    sync.Mutex          // session lock
	Pool map[string]*Session // session map
	Idle time.Duration       // idle timeout
}
SessMana is simple to implement, with three methods: add, delete, and periodically check for idle connections (a sketch of the idle sweep follows the stubs below).
func (sm *SessMana) Put(remote string, s *Session) {
}

func (sm *SessMana) Del(remote string, s *Session) {
}

func (sm *SessMana) CheckIdleLoop() {
}
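One plausible shape of the idle sweep, assuming each Session records a lastUsed timestamp and has a Close method (both are assumptions here, not archer's confirmed fields):

func (sm *SessMana) CheckIdleLoop() {
	// sweep noticeably more often than the timeout itself
	ticker := time.NewTicker(sm.Idle / 2)
	defer ticker.Stop()
	for range ticker.C {
		sm.L.Lock()
		for remote, s := range sm.Pool {
			if time.Since(s.lastUsed) > sm.Idle { // lastUsed: assumed field
				s.Close() // assumed method
				delete(sm.Pool, remote)
			}
		}
		sm.L.Unlock()
	}
}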
Connection pooling: I originally wanted to write my own, but there turned out to be a surprising number of details, so I used the connection pool from the Golang Redis driver directly.
type Pool interface {
	First() Conn
	Get() (Conn, error)
	Put(Conn) error
	Remove(Conn) error
	Len() int
	FreeLen() int
	Close() error
}
Above is the connection pool interface. It looks simple (see pool.go for the concrete code), but a few details deserve careful thought:
1. To make the pool generic, callers pass in a custom Dialer, and Conn is defined as an interface for easy extension.
2. Flow control: for example, within a normal timeout window the number of newly opened connections must not exceed a limit; this is implemented with a rate limit. It reminds me of the "grabbing dog food" problem Cai once mentioned.
3. Validating pooled connections is a real problem: either use a lastUsed timeout or use PING. The intranet is assumed to be stable, so the lastUsed approach is good enough.
4. If you use lastUsed timeout detection, the pool's internal check interval must be shorter than the backend Redis idle timeout (see the sketch after this list).
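A sketch of points 3 and 4 together; popFree, dial, LastUsed, and idleTimeout are illustrative names, not the driver's real API:

func (p *pool) Get() (Conn, error) {
	for {
		c := p.popFree() // assumed helper: pop an idle conn, nil if none left
		if c == nil {
			return p.dial() // assumed helper: open a fresh conn via the custom Dialer
		}
		// this comparison only works if the check interval is shorter than
		// the backend Redis idle timeout, as point 4 demands
		if time.Since(c.LastUsed()) < p.idleTimeout {
			return c, nil
		}
		c.Close() // assumed Conn method: drop the stale conn, try the next one
	}
}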
Pipeline
First, the flow without pipelining: Client -> Proxy -> Redis -> Proxy -> Client. Pipelining can be supported at two layers. The first layer, Client -> Proxy, is simple: open a channel to receive requests, have the proxy process each request in a blocking fashion, and then return the result to the client.
The second layer, Proxy -> Redis, cannot be done so simply: in a Redis Cluster, commands are dispatched to different backend instances. Because of network conditions, Redis service hiccups, and MOVED redirects, a request sent first may get its response back last; out-of-order result sets are not the exception but the norm. The simple, intuitive solution is to wrap every request and add a 64-bit Seq, a session-level sequence number.
type WrappedResp struct {
	Seq  int64 // session-level auto-incrementing 64-bit ID
	Resp Resp  // Redis protocol result
}
For the second layer a goroutine is started: whenever the proxy receives a response, it checks whether its Seq matches the sender-side sequence number. There are three cases (a sketch follows this list):
1. Seq equals the sender-side sequence number: the ideal case; the session layer writes the response straight to the client with WriteProtocol.
2. Seq is greater than the sender-side sequence number: responses are out of order, so this result is parked. It is not cached indefinitely: if the sequence gap grows too large, or the wait takes too long, an error Resp is generated and returned to the client. Currently only the sequence gap is checked; the timeout is not implemented yet.
3. Seq is smaller than the sender-side sequence number: this response was already skipped over, so ignore it and log at debug level.
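A sketch of that goroutine covering the three cases; WriteProtocol and the channel wiring are simplified stand-ins for archer's real session internals, and the gap limit that produces the error Resp is omitted:

func (s *Session) reorderLoop(in <-chan *WrappedResp) {
	next := int64(1)                    // next Seq expected by the client
	pending := map[int64]*WrappedResp{} // early responses, parked by Seq
	for wr := range in {
		switch {
		case wr.Seq == next: // case 1: in order, write straight through
			s.WriteProtocol(wr.Resp)
		case wr.Seq > next: // case 2: arrived early, park it
			pending[wr.Seq] = wr
			continue
		default: // case 3: stale Seq, already skipped
			log.Printf("debug: dropping stale seq %d", wr.Seq)
			continue
		}
		next++
		// flush any parked responses that are now in order
		for p, ok := pending[next]; ok; p, ok = pending[next] {
			s.WriteProtocol(p.Resp)
			delete(pending, next)
			next++
		}
	}
}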
MOVED and ASK
On the backend Redis -> Proxy path, the first check is whether the reply is an error Resp; if not, the normal logic applies. Otherwise the proxy further checks whether the error prefix is MOVED or ASK, and if so executes the redirect logic to replay the request. In the MOVED case the topology is also refreshed asynchronously.
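Roughly, the redirect branch could look like this; redirect and the Topo field are assumed names used only to illustrate the flow:

func (p *Proxy) handleRedirect(errMsg string, req *WrappedResp) {
	fields := strings.Fields(errMsg) // e.g. "MOVED 3999 10.10.200.11:6479"
	if len(fields) != 3 {
		return
	}
	switch {
	case strings.HasPrefix(errMsg, "MOVED"):
		p.redirect(fields[2], req, false) // assumed helper: re-send to that node
		select {
		case p.Cluster.Topo.ReloadChan <- 1: // trigger the async topology refresh
		default: // a reload is already pending, don't block
		}
	case strings.HasPrefix(errMsg, "ASK"):
		// one-shot redirect: ASKING must be sent first, topology stays as-is
		p.redirect(fields[2], req, true)
	}
}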
Performance optimization
Turn on pprof to inspect performance; see the official documentation and YJF's blog for reference.
import _ "net/http/pprof"

go func() {
	log.Warning(http.ListenAndServe(":6061", nil))
}()
go tool pprof -pdf ./archer http://localhost:6061/debug/pprof/profile -output=/tmp/report.pdf

Or enter the interactive mode and run commands there:

go tool pprof ./archer http://localhost:6061/debug/pprof/profile
int to []byte
The pprof graph shows that the util.Itob call is inefficient. This function converts an int to []byte and is used to generate length fields during Resp encoding. The first version was implemented like this:
func Itob(i int) []byte {
	return []byte(strconv.Itoa(i))
}
The second version, Iu32tob:
func Iu32tob(i int) []byte {
	return strconv.AppendUint(nil, uint64(i), 10)
}
The third version, Iu32tob2:
func Iu32tob2(i int) []byte {
	// allocating many small objects is itself a problem;
	// this buffer should also come from an object pool
	buf := make([]byte, 10)
	idx := len(buf) - 1
	for i >= 10 {
		buf[idx] = byte('0' + i%10)
		i = i / 10
		idx--
	}
	buf[idx] = byte('0' + i)
	return buf[idx:]
}
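For reference, benchmarks of the usual shape produced the numbers below; the exact benchmark bodies in archer's util package may differ:

func Benchmark_iu32tob2(b *testing.B) { // needs the standard "testing" package
	for n := 0; n < b.N; n++ {
		Iu32tob2(1234567) // arbitrary sample value
	}
}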
Benchmark results are below; Itob was replaced with the third version, Iu32tob2:
localhost:util dzr$ go test -v -bench=".*"
testing: warning: no tests to run
PASS
Benchmark_itob-4	10000000	 ns/op
Benchmark_iu32tob-4	20000000	98.4 ns/op
Benchmark_iu32tob2-4	20000000	80.2 ns/op
ok	github.com/dongzerun/archer/util	5.101s
Turning on pprof again shows that ReadProtocol and WriteProtocol issue the most syscalls, and that Resp.Encode() generates a lot of bytes.Buffer objects, which should come from an object pool. Resp's Encode method should therefore change from

Resp.Encode() []byte

to

Resp.Encode(w *bufio.Writer) error
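One way to realize both changes, sketched under the assumption that Resp is a concrete struct; the serialization step itself is elided:

var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func (r *Resp) Encode(w *bufio.Writer) error {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bufPool.Put(buf)
	// ... serialize r into buf: type byte, length via Iu32tob2, payload, CRLF ...
	_, err := w.Write(buf.Bytes())
	return err
}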
Stress test data
Baseline: a single native standalone Redis instance:

PING_INLINE: 139664.81 requests per second
PING_BULK: 144092.22 requests per second
SET: 146412.89 requests per second
GET: 145921.48 requests per second
INCR: 142166.62 requests per second
LPUSH: 144634.08 requests per second
LPOP: 141302.81 requests per second
SADD: 139567.34 requests per second
SPOP: 142714.42 requests per second
LPUSH (needed to benchmark LRANGE): 144655.00 requests per second
LRANGE_100 (first 100 elements): 65355.21 requests per second
LRANGE_300 (first 300 elements): 26616.98 requests per second
LRANGE_500 (first 450 elements): 18669.26 requests per second
LRANGE_600 (first 600 elements): 14510.21 requests per second
MSET (10 keys): 121995.86 requests per second
A single proxy in front of a Redis Cluster with 3 master nodes, no object pool:

PING_INLINE: 100361.30 requests per second
PING_BULK: 96918.01 requests per second
SET: 92131.93 requests per second
GET: 90612.54 requests per second
INCR: 91852.66 requests per second
LPUSH: 84645.34 requests per second
LPOP: 87092.84 requests per second
SADD: 88300.22 requests per second
SPOP: 90851.27 requests per second
LPUSH (needed to benchmark LRANGE): 88448.61 requests per second
LRANGE_100 (first 100 elements): 25277.42 requests per second
LRANGE_300 (first 300 elements): 10484.71 requests per second
LRANGE_500 (first 450 elements): 7604.97 requests per second
LRANGE_600 (first 600 elements): 5883.36 requests per second
MSET (10 keys): 17710.71 requests per second
The same single proxy and 3-master Redis Cluster, with a sync.Pool of bytes.Buffer objects enabled:

PING_INLINE: 109829.77 requests per second
PING_BULK: 102743.25 requests per second
SET: 91290.85 requests per second
GET: 92790.20 requests per second
INCR: 93466.68 requests per second
LPUSH: 90604.34 requests per second
LPOP: 90277.16 requests per second
SADD: 85682.46 requests per second
SPOP: 91432.75 requests per second
LPUSH (needed to benchmark LRANGE): 89726.33 requests per second
LRANGE_100 (first 100 elements): 25667.35 requests per second
LRANGE_300 (first 300 elements): 10589.07 requests per second
LRANGE_500 (first 450 elements): 7683.91 requests per second
LRANGE_600 (first 600 elements): 5826.89 requests per second
MSET (10 keys): 17955.90 requests per second
Slightly better than without the object pool...
Conclusion
The performance numbers are not ideal; next I'll dig through pprof for more things to optimize. I still don't understand a lot of the syscall and runtime behavior, so I need to shore up my Go fundamentals. The code formatting isn't pretty either ^_^
Recently I listened to the story of Yue Yang, the 16-year-old boy. Blooming at sixteen, gone all too soon.
Fortunately his words live on as a song; my favorite is "Let Me Secretly Look at You", the one Lei set to music. Looking forward to him singing it tomorrow...