Background
During a project refactor we migrated a service from C++ to Golang, and afterwards noticed a latency glitch on one cache operation: response time occasionally spiked to 1s or even 2s, while the normal peak-period latency is under 10ms.
[Figure: online I/O latency]
The phenomenon is shown above. The problem itself turned out to be simple, but the troubleshooting process was chaotic. Still too young...
Troubleshooting with the Cache Team
We contacted the colleagues who maintain the cache service. Their graphs show all requests returning normally; there are retries at peak time and the average latency rises, but it rarely exceeds 100ms. So the question becomes: is it a network problem?
Thinking it over, network problems are usually caused by a switch, so it is unlikely that only our service would see this, and only occasionally rather than as a regular spike. The cache itself is ruled out.
Why Doesn't the Configured Timeout Work?
Checking the configuration, we found our code was wrong: the cache timeout was wired to another dependency's timeout setting, whose maximum is 2s...
After fixing it and debugging in the staging environment, responses over 1s still appeared, which means SetReadDeadline was not actually limiting anything. At this point the investigation started to go off track: we suspected a bug in the thrift Golang library.
Thrift Golang Timeout Mechanism
The project does not use context for interface-level timeout control; instead it relies on thrift's SetTimeout. Reading the code shows the logic is problematic, or rather our understanding of it was off.
func (p *TSocket) pushDeadline(read, write bool) {
	var t time.Time
	if p.timeout > 0 {
		t = time.Now().Add(time.Duration(p.timeout))
	}
	if read && write {
		p.conn.SetDeadline(t)
	} else if read {
		p.conn.SetReadDeadline(t)
	} else if write {
		p.conn.SetWriteDeadline(t)
	}
}

func (p *TSocket) Read(buf []byte) (int, error) {
	if !p.IsOpen() {
		return 0, NewTTransportException(NOT_OPEN, "Connection not open")
	}
	p.pushDeadline(true, false)
	n, err := p.conn.Read(buf)
	return n, NewTTransportExceptionFromError(err)
}

func (p *TSocket) Write(buf []byte) (int, error) {
	if !p.IsOpen() {
		return 0, NewTTransportException(NOT_OPEN, "Connection not open")
	}
	p.pushDeadline(false, true)
	return p.conn.Write(buf)
}
As the code shows, the timeout applies to a single TCP socket operation, not to the interface call as a whole. When the response packet is large, every TCP read resets the deadline, so our guess was that multiple near-timeout reads accumulate and the overall call blows past the configured timeout.
If you want to bound the whole interface call, you have to use context...
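For illustration, here is a minimal sketch of bounding an entire call with context, assuming a hypothetical fetchFromCache function standing in for the generated thrift client call; this is not the project's actual code.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// fetchFromCache is a hypothetical stand-in for the generated thrift client
// call; imagine it doing one socket write plus several socket reads.
func fetchFromCache(key string) (string, error) {
	time.Sleep(80 * time.Millisecond) // simulate a slow RPC
	return "value-for-" + key, nil
}

// getWithTimeout bounds the whole call, no matter how many socket
// operations happen underneath, unlike thrift's per-read SetTimeout.
// Note: the worker goroutine keeps running until the RPC returns.
func getWithTimeout(ctx context.Context, key string) (string, error) {
	type result struct {
		val string
		err error
	}
	ch := make(chan result, 1)
	go func() {
		v, err := fetchFromCache(key)
		ch <- result{v, err}
	}()
	select {
	case r := <-ch:
		return r.val, r.err
	case <-ctx.Done():
		return "", errors.New("cache call timed out: " + ctx.Err().Error())
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	if _, err := getWithTimeout(ctx, "user:42"); err != nil {
		fmt.Println(err) // the call is cut off after 50ms total
	}
}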
Checking the Time Spent on Each TCP Read
We printed the end-to-end latency of a single interface call and the time spent in each Read in socket.go.
The statistics show that the first TCP read is the slow one, typically around 900ms. At the thrift layer the flow is: write the request to the socket, then immediately read from the socket; the runtime parks the blocked read in epoll and waits for an event. So why isn't SetReadDeadline taking effect?
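As a rough illustration of how per-read timing can be collected, here is a sketch of a net.Conn wrapper that logs how long each Read blocks; it is an assumption of how such instrumentation might look, not the code we actually used.

package main

import (
	"log"
	"net"
	"time"
)

// timedConn wraps a net.Conn and logs the duration of every Read,
// which is the kind of measurement that exposed the ~900ms first read.
type timedConn struct {
	net.Conn
}

func (c timedConn) Read(b []byte) (int, error) {
	start := time.Now()
	n, err := c.Conn.Read(b)
	log.Printf("conn read: %d bytes in %v (err=%v)", n, time.Since(start), err)
	return n, err
}

func main() {
	conn, err := net.DialTimeout("tcp", "example.com:80", time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	tc := timedConn{conn}
	if _, err := tc.Write([]byte("HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 4096)
	for {
		if _, err := tc.Read(buf); err != nil {
			break // EOF or a read error ends the loop
		}
	}
}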
Switching to go1.7
Eventually we noticed that the staging environment was running go1.4 while production was on go1.7. After recompiling the staging build with go1.7, response times were normal and the socket timeout took effect. So the first reaction was: is this a go1.4 bug that only shows up in extreme cases???
The GC is the problem.
By lining up GC times with the points where the glitches occurred, we found that the real culprit is GC...
The business loads a large number of models and uses a lot of maps, so there is a huge number of live objects on the heap; GC pauses in go1.4 are large, and we had not optimized for them.
[Figure: GC pauses over time]
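To make the GC correlation visible, one common approach (a sketch under our own assumptions, not necessarily what we ran) is to start the process with GODEBUG=gctrace=1 and, in code, report recent stop-the-world pause times from runtime.MemStats:

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Simulate a heap with many long-lived map entries, similar in spirit to
	// the loaded models described above (the sizes here are arbitrary).
	models := make(map[int][]byte)
	for i := 0; i < 200000; i++ {
		models[i] = make([]byte, 256)
	}

	var ms runtime.MemStats
	for i := 0; i < 5; i++ {
		runtime.GC() // force a collection so there is a pause to report
		runtime.ReadMemStats(&ms)
		// PauseNs is a circular buffer; the most recent pause is at (NumGC+255)%256.
		last := ms.PauseNs[(ms.NumGC+255)%256]
		fmt.Printf("GC #%d: last pause %v, heap objects %d\n",
			ms.NumGC, time.Duration(last), ms.HeapObjects)
		time.Sleep(100 * time.Millisecond)
	}
	_ = models
}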
In hindsight, the whole troubleshooting process lacked a clear line of investigation. The Go version mismatch between production and staging, plus the configuration mistake in our code, made the process very inefficient, but along the way we did uncover the thrift Go timeout issue.
Final Solution
We do not use context for interface timeouts, and the thrift Go code is left unchanged. Because we use short connections, the default behavior, where the socket timeout bounds the first response packet, is good enough; the subsequent TCP packets arrive very quickly.
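For reference, here is a minimal sketch of setting the per-socket timeout with the Apache Thrift Go library; the constructor names and signatures match older releases and are an assumption on our part (newer thrift versions changed this API), and the address and framed transport are placeholders rather than our real configuration.

package main

import (
	"log"
	"time"

	"github.com/apache/thrift/lib/go/thrift"
)

func main() {
	// In older thrift Go releases, NewTSocketTimeout applies the given timeout
	// to every socket Read/Write via SetReadDeadline/SetWriteDeadline.
	// With short connections this effectively bounds the first response packet.
	sock, err := thrift.NewTSocketTimeout("127.0.0.1:9090", 50*time.Millisecond)
	if err != nil {
		log.Fatal(err)
	}
	transport := thrift.NewTFramedTransport(sock) // transport choice is illustrative
	if err := transport.Open(); err != nil {
		log.Fatal(err)
	}
	defer transport.Close()
	// ... build the generated client on top of transport and make calls.
}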
One question remains, though: why did the go1.4 stop-the-world pause only ever hit the first TCP read? Shouldn't it be random?