Implementing a function cache, from "The Go Programming Language" (2015)


Contents: a serial implementation; parallel execution with goroutines; adding a mutex; method 1: entry pointers with a ready channel; method 2: a client/server model.

This is based on §9.7 of the book, "Example: Concurrent Non-Blocking Cache".
The example implements a wrapper that caches a function's results, so the function body runs only once for each distinct argument. The final versions are concurrency-safe and avoid the contention caused by locking the entire cache.

Let's look at the serial implementation first.

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

func httpGetBody(url string) (interface{}, error) {
    resp, err := http.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return ioutil.ReadAll(resp.Body)
}

type result struct {
    value interface{}
    err   error
}

type Func func(key string) (interface{}, error)

type Memo struct {
    f     Func
    cache map[string]result
}

func New(f Func) *Memo {
    return &Memo{f: f, cache: make(map[string]result)}
}

func (memo *Memo) Get(key string) (interface{}, error) {
    res, ok := memo.cache[key]
    if !ok {
        res.value, res.err = memo.f(key)
        memo.cache[key] = res
    }
    return res.value, res.err
}

func testCache() {
    incomingURLs := []string{
        "http://cn.bing.com/", "http://www.baidu.com", "http://cn.bing.com/", "http://www.baidu.com",
        "http://www.baidu.com", "http://cn.bing.com/", "http://www.baidu.com", "http://www.baidu.com",
        "http://cn.bing.com/", "http://www.baidu.com", "http://www.baidu.com", "http://cn.bing.com/",
        "http://www.baidu.com", "http://www.baidu.com", "http://cn.bing.com/", "http://www.baidu.com",
    }
    m := New(httpGetBody)
    allStart := time.Now()
    for _, url := range incomingURLs {
        start := time.Now()
        value, err := m.Get(url)
        if err != nil {
            fmt.Println(err)
        }
        fmt.Printf("%s, %s, %d bytes\n", url, time.Since(start), len(value.([]byte)))
    }
    fmt.Printf("all %s\n", time.Since(allStart))
}

Execution results

http://cn.bing.com/, 180.576553ms, 120050 bytes
http://www.baidu.com, 25.863523ms, 99882 bytes
http://cn.bing.com/, 397ns, 120050 bytes
http://www.baidu.com, 245ns, 99882 bytes
http://www.baidu.com, 154ns, 99882 bytes
http://cn.bing.com/, 123ns, 120050 bytes
http://www.baidu.com, 136ns, 99882 bytes
http://www.baidu.com, 123ns, 99882 bytes
http://cn.bing.com/, 127ns, 120050 bytes
http://www.baidu.com, 188ns, 99882 bytes
http://www.baidu.com, 116ns, 99882 bytes
http://cn.bing.com/, 123ns, 120050 bytes
http://www.baidu.com, 118ns, 99882 bytes
http://www.baidu.com, 180ns, 99882 bytes
http://cn.bing.com/, 140ns, 120050 bytes
http://www.baidu.com, 124ns, 99882 bytes
all 206.583298ms
Parallel execution with goroutines

We use sync.WaitGroup to wait for all the URL fetches to complete.

func testCache() {
    incomingURLs := []string{
        "http://cn.bing.com/", "http://www.baidu.com", "http://cn.bing.com/", "http://www.baidu.com",
        "http://www.baidu.com", "http://cn.bing.com/", "http://www.baidu.com", "http://www.baidu.com",
        "http://cn.bing.com/", "http://www.baidu.com", "http://www.baidu.com", "http://cn.bing.com/",
        "http://www.baidu.com", "http://www.baidu.com", "http://cn.bing.com/", "http://www.baidu.com",
    }
    m := New(httpGetBody)
    allStart := time.Now()
    var n sync.WaitGroup
    for _, url := range incomingURLs {
        start := time.Now()
        n.Add(1)
        go func(url string) {
            defer n.Done()
            value, err := m.Get(url)
            if err != nil {
                fmt.Println(err)
            }
            fmt.Printf("%s, %s, %d bytes\n", url, time.Since(start), len(value.([]byte)))
        }(url)
    }
    n.Wait()
    fmt.Printf("all %s\n", time.Since(allStart))
}

The total time is now much shorter, but the code has a data race.

    if !ok {
        res.value, res.err = memo.f(key)
        memo.cache[key] = res
    }

One goroutine may find !ok while another goroutine also finds !ok, so f may be executed several times and the map is written concurrently. To fix this, add a mutex:

type Memo struct {
    f     Func
    mu    sync.Mutex
    cache map[string]result
}

func (memo *Memo) Get(key string) (interface{}, error) {
    memo.mu.Lock()
    defer memo.mu.Unlock()
    res, ok := memo.cache[key]
    if !ok {
        res.value, res.err = memo.f(key)
        memo.cache[key] = res
    }
    return res.value, res.err
}

But this creates a new problem: access to the Memo is serialized again.

Method 1: use entry pointers and a ready channel

The author's idea: one goroutine calls the function and does the slow work; other goroutines asking for the same key wait, and receive the result as soon as the function completes.
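Go's idiom for this one-to-many "it's done" notification is closing a channel: a receive from a closed channel returns immediately, so a single close releases every waiting goroutine at once. A minimal standalone illustration (the worker goroutines and the sum are invented for this demo, independent of the memo code):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	ready := make(chan struct{})
	results := make(chan int, 3)
	var wg sync.WaitGroup
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			<-ready // blocks until ready is closed
			results <- id
		}(i)
	}
	close(ready) // one close wakes all three waiters
	wg.Wait()
	close(results)
	sum := 0
	for id := range results {
		sum += id
	}
	fmt.Println(sum) // 6
}
```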

type result struct {
    value interface{}
    err   error
}

type entry struct {
    res   result
    ready chan struct{}
}

type Func func(key string) (interface{}, error)

type Memo struct {
    f     Func
    mu    sync.Mutex
    cache map[string]*entry
}

func New(f Func) *Memo {
    return &Memo{f: f, cache: make(map[string]*entry)}
}

func (memo *Memo) Get(key string) (interface{}, error) {
    memo.mu.Lock()
    e := memo.cache[key]
    if e == nil {
        // This is the first request for this key.
        e = &entry{ready: make(chan struct{})}
        memo.cache[key] = e
        memo.mu.Unlock()

        e.res.value, e.res.err = memo.f(key)
        close(e.ready) // broadcast the ready condition
    } else {
        // This is a repeat request for this key.
        memo.mu.Unlock()
        <-e.ready // wait for the result to be ready
    }
    return e.res.value, e.res.err
}

Memo's cache member changes from map[string]result to map[string]*entry.
The entry struct is:

type entry struct {
    res   result
    ready chan struct{}
}

The ready channel is closed to tell the other goroutines that the result can now be read.
The core of the code is this part of the Get function:

    memo.mu.Lock()
    e := memo.cache[key]
    if e == nil {
        e = &entry{ready: make(chan struct{})}
        memo.cache[key] = e
        memo.mu.Unlock()

The call to f happens outside the locked region, while memo.cache[key] = e is installed under the lock, so only one goroutine ever performs the computation for a given key.

Method 2: use a client/server model

A dedicated server goroutine owns the cache; the other goroutines send requests to it and receive the function's result in return.

    // Func is the type of the function to memoize.
    type Func func(key string) (interface{}, error)

    // A result is the result of calling a Func.
    type result struct {
        value interface{}
        err   error
    }

    type entry struct {
        res   result
        ready chan struct{} // closed when res is ready
    }

Here is the key part of the code

type request struct {
    key      string
    response chan<- result
}

type Memo struct {
    requests chan request
}

Memo's only member is requests, a channel of request values, used to send function requests to the server. Each request carries its own response channel of result, which the server uses to deliver the function's result back to the requesting goroutine.
The New function creates the requests channel and starts the server goroutine:

func New(f Func) *Memo {
    memo := &Memo{requests: make(chan request)}
    go memo.server(f)
    return memo
}

The Get function creates a result channel response, constructs the request, sends it to the server over the requests channel, and then receives the function's result over response.

func (memo *Memo) Get(key string) (interface{}, error) {
    response := make(chan result)
    memo.requests <- request{key, response}
    res := <-response
    return res.value, res.err
}

func (memo *Memo) Close() {
    close(memo.requests)
}

Next is the server goroutine:

func (memo *Memo) server(f Func) {
    cache := make(map[string]*entry)
    for req := range memo.requests {
        e := cache[req.key]
        if e == nil {
            // This is the first request for this key.
            e = &entry{ready: make(chan struct{})}
            cache[req.key] = e
            go e.call(f, req.key) // call f(key)
        }
        go e.deliver(req.response)
    }
}

func (e *entry) call(f Func, key string) {
    // Evaluate the function.
    e.res.value, e.res.err = f(key)
    // Broadcast the ready condition.
    close(e.ready)
}

func (e *entry) deliver(response chan<- result) {
    // Wait for the function to finish.
    <-e.ready
    // Send the result to the client.
    response <- e.res
}
