Reading valyala/fasthttp: an HTTP package faster than the standard library


valyala/fasthttp is an HTTP server library that claims to be faster than the standard library's net/http. I studied its source and found several differences in the details.

Goroutines that process net.Conn

The way goroutines are used to process a net.Conn differs greatly from the standard library. In the standard library, each time net.Listener.Accept() returns a connection, a new goroutine is started:

// Serve accepts incoming connections on the Listener l, creating a
// new service goroutine for each.  The service goroutines read requests and
// then call srv.Handler to reply to them.
func (srv *Server) Serve(l net.Listener) error {
	defer l.Close()
	var tempDelay time.Duration // how long to sleep on accept failure
	for {
		rw, e := l.Accept()
		if e != nil {
			......
		}
		......
		c, err := srv.newConn(rw)
		if err != nil {
			continue
		}
		c.setState(c.rwc, StateNew) // before Serve can return
		go c.serve() // a goroutine is created here to handle the net.Conn's actual logic
	}
}
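The goroutine-per-connection model above can be sketched in miniature. This is a hedged illustration, not net/http's actual code: `handleConn` and `serveOnce` are hypothetical names, and the handler just echoes one line instead of parsing HTTP, but the `Accept` loop spawning `go handleConn(conn)` is the same shape as `go c.serve()`.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

// handleConn stands in for the per-connection work that net/http's
// c.serve() does; here it just echoes one line back and closes.
func handleConn(c net.Conn) {
	defer c.Close()
	line, err := bufio.NewReader(c).ReadString('\n')
	if err != nil {
		return
	}
	fmt.Fprintf(c, "echo: %s", line)
}

// serveOnce accepts connections in a loop, spawning one goroutine
// per net.Conn exactly as Server.Serve does, then dials itself once
// and returns the reply.
func serveOnce() (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer ln.Close()

	go func() {
		for {
			conn, err := ln.Accept()
			if err != nil {
				return // listener closed
			}
			go handleConn(conn) // one goroutine per connection
		}
	}()

	c, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return "", err
	}
	defer c.Close()
	fmt.Fprintln(c, "hello")
	return bufio.NewReader(c).ReadString('\n')
}

func main() {
	reply, err := serveOnce()
	if err != nil {
		panic(err)
	}
	fmt.Print(reply) // echo: hello
}
```

Under heavy load the number of live goroutines here grows with the number of open connections, which is exactly the cost fasthttp's worker pool avoids.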

valyala/fasthttp, however, uses a worker pool: a bounded number of goroutines is started to process net.Conn.

server.go#l582:

func (s *Server) Serve(ln net.Listener) error {
	var lastOverflowErrorTime time.Time
	var lastPerIPErrorTime time.Time
	var c net.Conn
	var err error

	maxWorkersCount := s.getConcurrency() // get the worker concurrency limit

	// create the worker pool
	wp := &workerPool{
		WorkerFunc:      s.serveConn, // the per-net.Conn processing logic
		MaxWorkersCount: maxWorkersCount,
		Logger:          s.logger(),
	}
	// start the worker pool; it also cleans up
	// workerChans that are no longer processing requests
	wp.Start()

	for {
		// receive a net.Conn from the listener;
		// the per-IP connection limit is enforced inside acceptConn,
		// and exceeding it returns an error
		if c, err = acceptConn(s, ln, &lastPerIPErrorTime); err != nil {
			wp.Stop()
			if err == io.EOF {
				return nil
			}
			return err
		}
		// let the worker pool handle the net.Conn
		if !wp.Serve(c) {
			c.Close()
			if time.Since(lastOverflowErrorTime) > time.Minute {
				s.logger().Printf("The incoming connection cannot be served, because %d concurrent connections are served. "+
					"Try increasing Server.Concurrency", maxWorkersCount)
				lastOverflowErrorTime = time.Now()
			}
		}
		c = nil
	}
}

The next step, wp.Serve(c), leads to workerpool.go#l92:

func (wp *workerPool) Serve(c net.Conn) bool {
	ch := wp.getCh() // get a workerChan from the pool
	if ch == nil {
		// no workerChan could be obtained: return false,
		// and the caller above reports that the
		// concurrency limit has been exceeded
		return false
	}
	ch.ch <- c // throw the net.Conn into the workerChan's chan
	return true
}

Next, see how a workerChan is obtained, in workerpool.go#l101:

func (wp *workerPool) getCh() *workerChan {
	var ch *workerChan
	createWorker := false

	wp.lock.Lock()
	chans := wp.ready
	n := len(chans) - 1 // try to get an idle workerChan from wp.ready
	if n < 0 {
		// no idle workerChan; a new one needs to be created
		if wp.workersCount < wp.MaxWorkersCount {
			createWorker = true
			wp.workersCount++
		}
	} else {
		// take the last idle workerChan from wp.ready
		ch = chans[n]
		wp.ready = chans[:n]
	}
	wp.lock.Unlock()

	if ch == nil {
		if !createWorker {
			return nil
		}
		// take a workerChan from the shared pool
		vch := workerChanPool.Get()
		if vch == nil {
			// none in the shared pool; create a new one
			vch = &workerChan{
				ch: make(chan net.Conn, 1),
			}
		}
		ch = vch.(*workerChan)
		// process the workerChan in a goroutine
		go func() {
			// start reading from this workerChan
			wp.workerFunc(ch)
			// put the workerChan back into the shared pool
			workerChanPool.Put(vch)
		}()
	}
	return ch
}

As seen above, ch.ch <- c throws the net.Conn into the workerChan's chan. The chan's handling logic is in wp.workerFunc(ch), in workerpool.go#l152:

func (wp *workerPool) workerFunc(ch *workerChan) {
	var c net.Conn
	var err error
	......
	for c = range ch.ch {
		if c == nil {
			// note: nil is passed in to break out of the loop
			// and stop handling this workerChan
			break
		}
		// call WorkerFunc to process each net.Conn;
		// this WorkerFunc was set in the code above:
		// WorkerFunc: s.serveConn
		if err = wp.WorkerFunc(c); err != nil && err != errHijacked {
			errStr := err.Error()
			if !strings.Contains(errStr, "broken pipe") && !strings.Contains(errStr, "reset by peer") {
				wp.Logger.Printf("error when serving connection %q<->%q: %s", c.LocalAddr(), c.RemoteAddr(), err)
			}
		}
		if err != errHijacked {
			c.Close()
		}
		c = nil
		// remember to put the workerChan back into the
		// wp.ready slice for reuse
		if !wp.release(ch) {
			break
		}
	}
}
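The nil check at the top of that loop is worth a closer look: a nil sentinel is sent over the channel to stop the worker, instead of closing the channel, so the channel itself stays usable for reuse. A minimal sketch of that pattern (the function name `drainUntilNil` is hypothetical, not from fasthttp):

```go
package main

import (
	"fmt"
	"sync"
)

// drainUntilNil mirrors fasthttp's worker loop:
// `for c = range ch.ch { if c == nil { break } }`.
// Sending a nil sentinel (rather than closing the channel)
// stops the worker while keeping the channel open for reuse.
func drainUntilNil(items []string) []string {
	ch := make(chan *string, 1)
	var out []string
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for s := range ch {
			if s == nil {
				break // sentinel received: stop handling this channel
			}
			out = append(out, *s)
		}
	}()

	for i := range items {
		ch <- &items[i]
	}
	ch <- nil // ask the worker to exit; ch stays open for reuse
	wg.Wait()
	return out
}

func main() {
	fmt.Println(drainUntilNil([]string{"a", "b"})) // [a b]
}
```

fasthttp's cleanup goroutine uses exactly this trick to retire workerChans that have been idle too long.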

To summarize what we have seen: valyala/fasthttp assigns net.Conn handling to a bounded set of goroutines rather than one goroutine per connection. When the number of goroutines is huge, the cost of context switching begins to have a noticeable performance impact; the standard library faces this problem under high concurrency, and the valyala/fasthttp worker pool is designed to circumvent it. A goroutine is itself a lightweight process that is cheap to start, but the worker pool reuses each goroutine as much as possible, which lets it control the total number of goroutines (the default maximum number of workerChans is 256 * 1024). If an HTTP request blocks, its workerChan stays occupied until the worker pool is exhausted (the KeepAlive timeout configuration mitigates this), which limits the usage scenarios: valyala/fasthttp is only suitable for short-lived HTTP connections, not for long-lived connections or WebSocket support.
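The worker-pool idea can be sketched independently of fasthttp. This is a simplified, hypothetical illustration (fasthttp reuses idle workers through its ready list and per-worker channels, rather than one shared task channel as here), but it shows the two properties the summary describes: a fixed goroutine count, and a `Serve` that fails fast when the limit is hit.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// pool runs tasks on a fixed set of maxWorkers goroutines.
type pool struct {
	tasks chan func()
	wg    sync.WaitGroup
}

func newPool(maxWorkers, queueSize int) *pool {
	p := &pool{tasks: make(chan func(), queueSize)}
	for i := 0; i < maxWorkers; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			for task := range p.tasks {
				task() // each goroutine is reused for many tasks
			}
		}()
	}
	return p
}

// Serve enqueues a task, returning false when the queue is full,
// analogous to wp.Serve returning false past the concurrency limit.
func (p *pool) Serve(task func()) bool {
	select {
	case p.tasks <- task:
		return true
	default:
		return false
	}
}

// Stop closes the queue and waits for the workers to drain it.
func (p *pool) Stop() {
	close(p.tasks)
	p.wg.Wait()
}

// sumWithPool adds 1..n using the pool, retrying on a full queue.
func sumWithPool(n int) int64 {
	p := newPool(4, n)
	var sum int64
	for i := 1; i <= n; i++ {
		i := i
		for !p.Serve(func() { atomic.AddInt64(&sum, int64(i)) }) {
			// queue full: retry (fasthttp instead rejects the connection)
		}
	}
	p.Stop()
	return atomic.LoadInt64(&sum)
}

func main() {
	fmt.Println(sumWithPool(10)) // 55
}
```

However many tasks arrive, only four goroutines ever exist, so the scheduler's context-switching load stays flat.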

Another discovery is the pool of *RequestCtx contexts.

The *RequestCtx Pool

The standard library's analogous HTTP request context object, *http.response, is allocated anew on every request:

// Serve a new connection.
func (c *conn) serve() {
	......
	for {
		// a *http.response is returned here
		w, err := c.readRequest()
		if c.lr.N != c.server.initialLimitedReaderSize() {
			// If we read any bytes off the wire, we're active.
			c.setState(c.rwc, StateActive)
		}
		......
		// the http.Handler interface the user implements
		// is invoked here
		serverHandler{c.server}.ServeHTTP(w, w.req)
		......
	}
}

valyala/fasthttp draws the similar structure *RequestCtx from a pool instead; server.go#l743 reads:

ctx := s.acquireCtx(c)
// which is:
func (s *Server) acquireCtx(c net.Conn) *RequestCtx {
	v := s.ctxPool.Get()
	var ctx *RequestCtx
	if v == nil {
		ctx = &RequestCtx{
			s: s,
		}
		ctx.v = ctx
		v = ctx
	} else {
		ctx = v.(*RequestCtx)
	}
	ctx.initID()
	ctx.c = c
	return ctx
}
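The acquire/release pattern above can be reproduced with sync.Pool in a few lines. This is a hedged sketch, not fasthttp's code: `requestCtx`, `acquireCtx`, and `releaseCtx` here are stand-in names, and the real *RequestCtx carries far more state.

```go
package main

import (
	"fmt"
	"sync"
)

// requestCtx stands in for fasthttp's *RequestCtx; the pool hands
// back a previously released context instead of allocating a new one.
type requestCtx struct {
	id   uint64
	path string
}

var ctxPool sync.Pool

var nextID uint64

// acquireCtx mirrors Server.acquireCtx: reuse from the pool if
// possible, otherwise allocate, then (re)initialize the fields.
func acquireCtx(path string) *requestCtx {
	v := ctxPool.Get()
	var ctx *requestCtx
	if v == nil {
		ctx = &requestCtx{}
	} else {
		ctx = v.(*requestCtx)
	}
	nextID++
	ctx.id = nextID
	ctx.path = path
	return ctx
}

// releaseCtx resets the context and returns it to the pool.
func releaseCtx(ctx *requestCtx) {
	*ctx = requestCtx{} // reset before returning to the pool
	ctxPool.Put(ctx)
}

func main() {
	a := acquireCtx("/a")
	fmt.Println(a.id, a.path)
	releaseCtx(a)

	b := acquireCtx("/b") // may reuse a's memory
	fmt.Println(b.id, b.path)
}
```

The reset in releaseCtx is the important part: without it, a recycled context could leak one request's data into the next.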

*RequestCtx.Request and *RequestCtx.Response support being reset, so they can be reused safely; see for example server.go#l776:

err = ctx.Request.Read(br)
// which is:
func (req *Request) Read(r *bufio.Reader) error {
	req.clearSkipHeader()
	err := req.Header.Read(r)
	if err != nil {
		return err
	}
	if req.Header.IsPost() {
		req.body, err = readBody(r, req.Header.ContentLength(), req.body)
		if err != nil {
			req.Reset()
			// a reset is needed on error, and likewise when done;
			// at L1030, the releaseReader method simply calls
			// Reset on this *bufio.Reader and puts it back into
			// the shared pool, so a *bufio.Reader is ready
			// for the next use
			return err
		}
		req.Header.SetContentLength(len(req.body))
	}
	return nil
}
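The Reset-then-Put pattern that releaseReader uses can be sketched as follows. This is an illustrative assumption of the technique, not fasthttp's code: `readerPool` and `readLine` are hypothetical names, but `bufio.Reader.Reset` is the real standard-library method that re-arms a pooled reader for a new source.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
	"sync"
)

// readerPool reuses *bufio.Reader values the way fasthttp's
// releaseReader does: Reset detaches the old source and re-arms
// the same buffer for the next one.
var readerPool = sync.Pool{
	New: func() interface{} { return bufio.NewReader(nil) },
}

// readLine reads one line from src through a pooled reader.
func readLine(src string) string {
	br := readerPool.Get().(*bufio.Reader)
	br.Reset(strings.NewReader(src)) // point the pooled reader at the new source
	line, _ := br.ReadString('\n')
	br.Reset(nil) // drop the reference to the source before pooling
	readerPool.Put(br)
	return line
}

func main() {
	fmt.Print(readLine("first\n"))
	fmt.Print(readLine("second\n"))
}
```

Because the reader's internal buffer survives the Reset, repeated calls avoid re-allocating the 4 KB default buffer each time.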

In general, using pools to reduce object allocations is one of the most common ways to improve performance. Both the standard library and valyala/fasthttp pool *bufio.Reader and *bufio.Writer. However, for heavily loaded services the benefit of pooling is limited, and sync.Pool has no capacity control, so its size can sometimes become uncontrollable; this needs attention.
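When capacity control matters, a buffered channel can serve as a bounded pool: it keeps at most a fixed number of idle objects and drops the rest for the garbage collector. This is a common workaround sketch, not something from fasthttp or the standard library, and `bufPool` is a hypothetical name.

```go
package main

import (
	"bytes"
	"fmt"
)

// bufPool keeps at most cap(free) idle buffers; extras are dropped
// for the GC, unlike sync.Pool, which has no capacity limit.
type bufPool struct {
	free chan *bytes.Buffer
}

func newBufPool(capacity int) *bufPool {
	return &bufPool{free: make(chan *bytes.Buffer, capacity)}
}

// Get returns an idle buffer if one is pooled, else allocates.
func (p *bufPool) Get() *bytes.Buffer {
	select {
	case b := <-p.free:
		return b
	default:
		return &bytes.Buffer{}
	}
}

// Put resets the buffer and retains it only if there is room,
// reporting whether it was kept.
func (p *bufPool) Put(b *bytes.Buffer) bool {
	b.Reset()
	select {
	case p.free <- b:
		return true
	default:
		return false // pool full: let the GC reclaim it
	}
}

func main() {
	p := newBufPool(1)
	a, b := p.Get(), p.Get()
	fmt.Println(p.Put(a)) // true: retained
	fmt.Println(p.Put(b)) // false: capacity reached, dropped
}
```

The trade-off is that, unlike sync.Pool, this pool's contents are never freed by the garbage collector on their own, so the capacity must be chosen deliberately.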

Thanks

The above is the result of in-depth reading and discussion with members of the Go Practice Group (386056972). Thanks to the group members for their support.
