golang: making the thrift client goroutine-safe


Preface

Golang is the main language for our server-side development and backs many of our basic services, such as oauth2, the account system, payments, and customer support. In the early days we also tried using golang for page rendering, but every language has the domain it is best at, and making golang do front-end work was genuinely painful, so we eventually settled on php + golang as the overall service architecture.

So here comes the question: how do php and golang, this pair of good buddies, play together nicely? Our conclusion: thrift is a fine bar of soap!

Throwing out the brick

There are plenty of soaps on the market, the best known being Safeguard, so why on earth did we skip Safeguard and pick thrift... because it packs enough of a kick!

But you only enjoy that kick if your teeth are up to it. As everyone knows, thrift comes in several models (serialization protocols), such as the household-grade TDebugProtocol, the long-lasting TBinaryProtocol, and the explosive TCompactProtocol.

When we started out we naturally went for the explosive TCompactProtocol, the model that cranks the kick up another 10 percent. But php's teeth are not that good: when it bit into a 64-bit int produced by golang, 1234567890 came back as 1234567891 (the numbers are only an illustration; the real issue is that php produces wrong results when decoding an int64 returned by golang over TCompactProtocol). So for the php + golang pair, thrift's long-lasting model beats the explosive one: use TBinaryProtocol. (Word is that the next thrift release will fix this bug; stay tuned.)
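
To make the takeaway concrete: the protocol is chosen in exactly one place on each side, via the protocol factory, so moving off TCompactProtocol is a one-line change per side (the php end just constructs the matching protocol class). A minimal sketch of the Go side, reusing the processor, serverTransport and transport factory that appear in the server setup further down:

    // Pick the wire protocol once; both ends must agree on it.
    protocolFactory := thrift.NewTBinaryProtocolFactoryDefault() // what we settled on
    // protocolFactory := thrift.NewTCompactProtocolFactory()   // the variant that tripped php over int64

    transportFactory := thrift.NewTFramedTransportFactory(thrift.NewTTransportFactory())
    server := thrift.NewTSimpleServer4(processor, serverTransport, transportFactory, protocolFactory)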

Attracting the jade

Rambling aside, the classic reference puts it plainly: the server side that Thrift generates is thread safe, but the client side is not. So when multiple threads need to talk to the server, each thread has to init its own client instance.
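
Taken literally in Go, that means one client per goroutine. A minimal sketch of that approach, assuming the thrift Go library and the generated rpc package used later in this article, with the 15-second timeout we originally ran with:

    // One thrift client per goroutine: nothing is shared, so nothing needs locking,
    // but every call pays for a fresh connection.
    func callOnce(addr string) error {
        sock, err := thrift.NewTSocketTimeout(addr, 15*time.Second)
        if err != nil {
            return err
        }
        transportFactory := thrift.NewTFramedTransportFactory(thrift.NewTTransportFactory())
        protocolFactory := thrift.NewTBinaryProtocolFactoryDefault()
        client := rpc.NewRpcServiceClientFactory(transportFactory.GetTransport(sock), protocolFactory)
        if err := client.Transport.Open(); err != nil {
            return err
        }
        defer client.Transport.Close()
        // ... call the methods generated for RpcService here ...
        return nil
    }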

So the question becomes: how do we make the thrift client goroutine-safe in golang?

Practice

First, the golang server side of thrift leans on golang's formidable goroutines and implements only a single service model, similar to TThreadedServer, so 毛老師 (our colleague Mao, whom you will meet again below) never has to worry about how I use it.

func (p *TSimpleServer) AcceptLoop() error {
    for {
        select {
        case <-p.quit:
            return nil
        default:
        }
        client, err := p.serverTransport.Accept()
        if err != nil {
            log.Println("Accept err: ", err)
        }
        if client != nil {
            go func() { // handle each accepted connection in its own goroutine
                if err := p.processRequests(client); err != nil {
                    log.Println("error processing request:", err)
                }
            }()
        }
    }
}

Second, thrift's clients are all thread-unsafe, so here comes the question: is it better to re-implement the Transport, or to put a pool on top of the existing Transport?

While I was still pondering how to rework the Transport implementation, 毛老師 had already finished a pool, so the conclusion writes itself: a pool on top of the existing Transport it is... Even a re-implementation would boil down to adding a pool anyway, and it would also mean changing thrift's client code: time-consuming, laborious, and thankless. The Transport that thrift ships with has the basic read/write functionality, and it swims around in a pool just fine.

Below is the pool 毛老師 implemented, with basic idle-timeout checking plus limits on the maximum active and idle connection counts.

import (
    "container/list"
    "errors"
    "sync"
    "time"
)

// Declarations the pool code below relies on.
var (
    nowFunc          = time.Now // a variable so tests can stub out the clock
    ErrPoolClosed    = errors.New("pool: get on closed pool")
    ErrPoolExhausted = errors.New("pool: connection pool exhausted")
)

type Pool struct {
    // Dial is an application supplied function for creating new connections.
    Dial func() (interface{}, error)

    // Close is an application supplied function for closing connections.
    Close func(c interface{}) error

    // TestOnBorrow is an optional application supplied function for checking
    // the health of an idle connection before the connection is used again by
    // the application. Argument t is the time that the connection was returned
    // to the pool. If the function returns an error, then the connection is
    // closed.
    TestOnBorrow func(c interface{}, t time.Time) error

    // Maximum number of idle connections in the pool.
    MaxIdle int

    // Maximum number of connections allocated by the pool at a given time.
    // When zero, there is no limit on the number of connections in the pool.
    MaxActive int

    // Close connections after remaining idle for this duration. If the value
    // is zero, then idle connections are not closed. Applications should set
    // the timeout to a value less than the server's timeout.
    IdleTimeout time.Duration

    // mu protects fields defined below.
    mu     sync.Mutex
    closed bool
    active int

    // Stack of idleConn with most recently used at the front.
    idle list.List
}

type idleConn struct {
    c interface{}
    t time.Time
}

// New creates a new pool. This function is deprecated. Applications should
// initialize the Pool fields directly as shown in the example.
func New(dialFn func() (interface{}, error), closeFn func(c interface{}) error, maxIdle int) *Pool {
    return &Pool{Dial: dialFn, Close: closeFn, MaxIdle: maxIdle}
}

// Get gets a connection. The application must close the returned connection.
// This method always returns a valid connection so that applications can defer
// error handling to the first use of the connection.
func (p *Pool) Get() (interface{}, error) {
    p.mu.Lock()
    // if closed
    if p.closed {
        p.mu.Unlock()
        return nil, ErrPoolClosed
    }
    // Prune stale connections.
    if timeout := p.IdleTimeout; timeout > 0 {
        for i, n := 0, p.idle.Len(); i < n; i++ {
            e := p.idle.Back()
            if e == nil {
                break
            }
            ic := e.Value.(idleConn)
            if ic.t.Add(timeout).After(nowFunc()) {
                break
            }
            p.idle.Remove(e)
            p.active -= 1
            p.mu.Unlock()
            p.Close(ic.c)
            p.mu.Lock()
        }
    }
    // Get idle connection.
    for i, n := 0, p.idle.Len(); i < n; i++ {
        e := p.idle.Front()
        if e == nil {
            break
        }
        ic := e.Value.(idleConn)
        p.idle.Remove(e)
        test := p.TestOnBorrow
        p.mu.Unlock()
        if test == nil || test(ic.c, ic.t) == nil {
            return ic.c, nil
        }
        p.Close(ic.c)
        p.mu.Lock()
        p.active -= 1
    }
    // Check for pool overflow.
    if p.MaxActive > 0 && p.active >= p.MaxActive {
        p.mu.Unlock()
        return nil, ErrPoolExhausted
    }
    // No idle connection, create new.
    dial := p.Dial
    p.active += 1
    p.mu.Unlock()
    c, err := dial()
    if err != nil {
        p.mu.Lock()
        p.active -= 1
        p.mu.Unlock()
        c = nil
    }
    return c, err
}

// Put adds conn back to the pool; use forceClose to close the connection forcibly.
func (p *Pool) Put(c interface{}, forceClose bool) error {
    if !forceClose {
        p.mu.Lock()
        if !p.closed {
            p.idle.PushFront(idleConn{t: nowFunc(), c: c})
            if p.idle.Len() > p.MaxIdle {
                // pop the least recently used connection; it is closed below
                c = p.idle.Remove(p.idle.Back()).(idleConn).c
            } else {
                c = nil
            }
        }
        p.mu.Unlock()
    }
    // close the connection that did not fit back into the pool
    if c != nil {
        p.mu.Lock()
        p.active -= 1
        p.mu.Unlock()
        return p.Close(c)
    }
    return nil
}

// ActiveCount returns the number of active connections in the pool.
func (p *Pool) ActiveCount() int {
    p.mu.Lock()
    active := p.active
    p.mu.Unlock()
    return active
}

// Release releases the resources used by the pool.
func (p *Pool) Release() error {
    p.mu.Lock()
    idle := p.idle
    p.idle.Init()
    p.closed = true
    p.active -= idle.Len()
    p.mu.Unlock()
    for e := idle.Front(); e != nil; e = e.Next() {
        p.Close(e.Value.(idleConn).c)
    }
    return nil
}
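
For completeness, here is roughly what the calling side looks like with this pool; a sketch that assumes the pool is stored in the package-level thriftPool configured below and that the generated client exposes some RPC method, here a hypothetical GetUser returning a *rpc.User:

    // Borrow a client from the pool, use it, and hand it back.
    // A connection whose call just failed is force-closed rather than recycled.
    func getUser(uid int64) (*rpc.User, error) {
        c, err := thriftPool.Get()
        if err != nil {
            return nil, err
        }
        client := c.(*rpc.RpcServiceClient)
        user, err := client.GetUser(uid) // hypothetical generated method
        if perr := thriftPool.Put(c, err != nil); perr != nil {
            log.Error("thriftPool.Put() error(%v)", perr)
        }
        return user, err
    }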

Finally, in actual use the only thrift-related setting seems to be the timeout, so here comes the question: with a pool in place, what should the thrift timeouts be?

Before the pool, when each routine created its own client, the timeouts were set very short: 15 seconds on both the server and the client. After switching to the pool we left them unchanged, in effect letting thrift manage the timeouts by itself, and we started seeing frequent EOF I/O errors. Tracing it down, we found that under light traffic 15 seconds is simply too short: the pool easily ends up holding connections that have sat idle for more than 15 seconds, and when we get one out and use it, the timeout has already fired and we hit an EOF.

In practice the server-side timeout has to be long enough; we set it to 8h. The client-side timeout is left for the pool to manage, otherwise the pool can still end up holding connections that have already timed out.

// server
transportFactory := thrift.NewTFramedTransportFactory(thrift.NewTTransportFactory())
protocolFactory := thrift.NewTBinaryProtocolFactoryDefault()
serverTransport, err := thrift.NewTServerSocketTimeout(bind, thriftCallTimeOut)
if err != nil {
    log.Exitf("start thrift rpc error(%v)", err)
}
// thrift rpc service
handler := NewThriftRPC()
processor := thriftRpc.NewRpcServiceProcessor(handler)
server := thrift.NewTSimpleServer4(processor, serverTransport, transportFactory, protocolFactory)
thriftServer = append(thriftServer, server)
log.Info("start thrift rpc listen addr: %s", bind)
go server.Serve()

// client
thriftPool = &pool.Pool{
    Dial: func() (interface{}, error) {
        addr := conf.MyConf.ThriftOAuth2Addr[rand.Intn(len(conf.MyConf.ThriftOAuth2Addr))]
        sock, err := thrift.NewTSocket(addr) // no timeout is set on the client side
        if err != nil {
            log.Error("thrift.NewTSocket(%s) error(%v)", addr, err)
            return nil, err
        }
        tF := thrift.NewTFramedTransportFactory(thrift.NewTTransportFactory())
        pF := thrift.NewTBinaryProtocolFactoryDefault()
        client := rpc.NewRpcServiceClientFactory(tF.GetTransport(sock), pF)
        if err = client.Transport.Open(); err != nil {
            log.Error("client.Transport.Open() error(%v)", err)
            return nil, err
        }
        return client, nil
    },
    Close: func(v interface{}) error {
        v.(*rpc.RpcServiceClient).Transport.Close()
        return nil
    },
    MaxActive:   conf.MyConf.ThriftMaxActive,
    MaxIdle:     conf.MyConf.ThriftMaxIdle,
    IdleTimeout: conf.MyConf.ThriftIdleTimeout,
}
// pool IdleTimeout: 7h, the pool's maximum idle time. Keep it smaller than the server-side
// timeout; if both are set to 8h, timed-out connections can still show up in the pool.
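
As an extra safety net on top of IdleTimeout, the pool's TestOnBorrow hook can re-check a connection right before handing it out, so a stale connection costs one re-dial instead of an EOF inside business code. A sketch under the assumption that the IDL exposes some cheap no-op call, here a hypothetical Ping; without one, the IdleTimeout pruning above is all you get:

    thriftPool.TestOnBorrow = func(c interface{}, t time.Time) error {
        // Trust connections that have been idle only briefly.
        if time.Since(t) < time.Minute {
            return nil
        }
        // Otherwise poke the server; any error makes the pool drop this connection
        // and fall back to another idle connection or a fresh Dial.
        return c.(*rpc.RpcServiceClient).Ping() // hypothetical lightweight RPC
    }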