Translator's note: this is a translation and loses some fluency; for a better reading experience, see the original (in Chinese):
https://www.cnblogs.com/cloudgeek/p/9497801.html
Let's cover three packages at once today to catch up on progress, so we can start talking about Kubernetes sooner!
Looking at the groupcache project's directory structure, the three packages we will study today are groupcachepb, lru, and singleflight:
First, Protobuf
There are two files in this directory, one with a .go suffix and one with a .proto suffix. The .proto file is related to Protocol Buffers, so let's first see what Protocol Buffers is.
The project can be found on GitHub: https://github.com/google/protobuf
It's from Google; doesn't that make it instantly interesting?
The official introduction: "Protocol buffers (a.k.a. protobuf) are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data." Simply put, it is a cross-language, cross-platform, extensible framework for serializing structured data. The translation is a bit awkward; the English is actually easier to understand... OK, now you know it is used for data serialization. Do you remember gob, Go's own tool for encoding/decoding data structures? We covered it before in "Golang - gob and RPC".
If you have read that gob article, you already know the basic problem protobuf needs to solve. Below, we look at the protobuf concepts in combination with the source code.
$GOPATH/src/github.com/golang/groupcache/groupcachepb/groupcache.proto contains the following:
```proto
syntax = "proto2";

package groupcachepb;

message GetRequest {
  required string group = 1;
  required string key = 2; // not actually required/guaranteed to be UTF-8
}

message GetResponse {
  optional bytes value = 1;
  optional double minute_qps = 2;
}

service GroupCache {
  rpc Get(GetRequest) returns (GetResponse) {
  };
}
```
You can see that this is a data definition in some specific syntax. Let's start by introducing the concepts involved:
The main data types in PROTOBUF are:
Standard data types: integer, float, string, etc.
Composite data types: enumerations and message types
Look at the message section:
```proto
message GetResponse {
  optional bytes value = 1;
  optional double minute_qps = 2;
}
```
Each field is assigned a tag (the number after the `=`), and tags must be unique within a message, such as 1 and 2 here;
Each field has a type, such as bytes and double here;
The label at the beginning of each field (optional here) means:
- required: the field must be assigned a value and cannot be empty
- optional: the field may or may not be assigned a value
- repeated: the field may repeat any number of times, including zero
Now we can read this message: its name is GetResponse, and it has two optional fields, value and minute_qps, of types bytes and double respectively.
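For contrast, a hypothetical proto2 message (not part of groupcache, just an illustration) that uses all three labels might look like this:

```proto
message SearchResponse {
  required int32 total = 1;    // must always be set
  optional string cursor = 2;  // may be omitted
  repeated string results = 3; // zero or more values
}
```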
protobuf also lets you define a package, which must appear at the top of the file, so the line `package groupcachepb;` is easy to understand. The first line, `syntax = "proto2";`, is obviously a version declaration; besides proto2 there is also a proto3 version, much like py2 and py3.
The last few remaining lines are a little more puzzling:
```proto
service GroupCache {
  rpc Get(GetRequest) returns (GetResponse) {
  };
}
```
Here you can see a definition that starts with service; inside it is something like an RPC function, whose parameter and return types are the messages defined above: GetRequest and GetResponse. Clearly this is related to RPC. We won't go into the details here; later, when we reach the calling code, we will combine it with the business logic to understand this part.
Second, LRU
Looking up Baidu Baike, you get the following explanation of LRU:
LRU is a page replacement algorithm used in memory management. Data blocks (memory pages) that have gone unused in memory for the longest time are called "least recently used"; when the operating system needs to free up space to load new data, it identifies which data is least recently used and moves it out of memory.
So what is the LRU algorithm? LRU is short for Least Recently Used, i.e. the data used least recently, and it is commonly used in page replacement algorithms for virtual page storage management.
The lru package here implements the LRU algorithm. My detailed explanation is in the comments of $GOPATH/src/github.com/golang/groupcache/lru/lru.go:
```go
// Package lru implements an LRU cache.
package lru

import "container/list"

// Cache is an LRU cache. It is not safe for concurrent access.
type Cache struct {
	// MaxEntries is the maximum number of cache entries before
	// an item is evicted; exceeding it triggers eviction.
	// Zero means no limit.
	MaxEntries int

	// OnEvicted optionally specifies a callback function to be
	// executed when an entry is purged from the cache.
	OnEvicted func(key Key, value interface{})

	// ll is the doubly linked list holding the entries.
	ll *list.List
	// cache maps a key (any comparable type) to a pointer to a list node.
	cache map[interface{}]*list.Element
}

// A Key may be any value that is comparable.
// See http://golang.org/ref/spec#Comparison_operators
type Key interface{}

// entry is what the list nodes store: a key/value pair.
type entry struct {
	key   Key
	value interface{}
}

// New creates a new Cache.
// If maxEntries is zero, the cache has no limit and it's assumed
// that eviction is done by the caller.
func New(maxEntries int) *Cache {
	return &Cache{
		MaxEntries: maxEntries,
		ll:         list.New(),
		cache:      make(map[interface{}]*list.Element),
	}
}

// Add adds a value to the cache.
func (c *Cache) Add(key Key, value interface{}) {
	// If the cache has not been initialized, initialize the map and the list first.
	if c.cache == nil {
		c.cache = make(map[interface{}]*list.Element)
		c.ll = list.New()
	}
	// If key already exists, move its node to the front of the list and update the value.
	if ee, ok := c.cache[key]; ok {
		c.ll.MoveToFront(ee)
		ee.Value.(*entry).value = value
		return
	}
	// key does not exist: create an entry and push it onto the front of the list.
	// ele is a *list.Element whose Value field holds the *entry.
	ele := c.ll.PushFront(&entry{key, value})
	// In the cache map, the key is of type Key and the value is the *list.Element.
	c.cache[key] = ele
	// If the list length now exceeds the maximum number of entries, trigger a cleanup.
	if c.MaxEntries != 0 && c.ll.Len() > c.MaxEntries {
		c.RemoveOldest()
	}
}

// Get looks up a key's value from the cache.
func (c *Cache) Get(key Key) (value interface{}, ok bool) {
	if c.cache == nil {
		return
	}
	// If present, move the element to the front of the list
	// and return the entry's value.
	if ele, hit := c.cache[key]; hit {
		c.ll.MoveToFront(ele)
		return ele.Value.(*entry).value, true
	}
	return
}

// Remove removes the provided key from the cache.
// If key exists, removeElement deletes it from both the list and the map.
func (c *Cache) Remove(key Key) {
	if c.cache == nil {
		return
	}
	if ele, hit := c.cache[key]; hit {
		c.removeElement(ele)
	}
}

// RemoveOldest removes the oldest item from the cache.
func (c *Cache) RemoveOldest() {
	if c.cache == nil {
		return
	}
	// ele is a *list.Element pointing at the tail node of the list.
	ele := c.ll.Back()
	if ele != nil {
		c.removeElement(ele)
	}
}

func (c *Cache) removeElement(e *list.Element) {
	// Remove the element from the list.
	c.ll.Remove(e)
	// e.Value is really a *entry (an entry holds the key and the value);
	// Value itself is an interface{}, so a type assertion converts it back to *entry.
	kv := e.Value.(*entry)
	// Also delete the element whose key is kv.key from the cache map,
	// so that the map stays in sync with the list.
	delete(c.cache, kv.key)
	if c.OnEvicted != nil {
		c.OnEvicted(kv.key, kv.value)
	}
}

// Len returns the number of items in the cache,
// obtained via the list's Len() method.
func (c *Cache) Len() int {
	if c.cache == nil {
		return 0
	}
	return c.ll.Len()
}

// Clear purges all stored items from the cache. If an OnEvicted
// callback is set, it is first called for every entry, and then
// the list and the map are set to nil.
func (c *Cache) Clear() {
	if c.OnEvicted != nil {
		for _, e := range c.cache {
			kv := e.Value.(*entry)
			c.OnEvicted(kv.key, kv.value)
		}
	}
	c.ll = nil
	c.cache = nil
}
```
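To watch the move-to-front / evict-from-back mechanics in action without importing the package, here is a minimal standalone sketch that uses container/list the same way (tinyLRU and all its names are my own; the real package adds the OnEvicted callback, a generic Key type, and lazy initialization):

```go
package main

import (
	"container/list"
	"fmt"
)

// tinyLRU is a stripped-down LRU: front of the list = most recently
// used, back of the list = least recently used.
type tinyLRU struct {
	max   int
	ll    *list.List
	cache map[string]*list.Element
}

// kv is the key/value pair stored in each list node.
type kv struct {
	key string
	val int
}

func newTiny(max int) *tinyLRU {
	return &tinyLRU{max: max, ll: list.New(), cache: make(map[string]*list.Element)}
}

// Add inserts or updates a key and returns the key it evicted, if any.
func (c *tinyLRU) Add(key string, val int) (evicted string) {
	if e, ok := c.cache[key]; ok {
		c.ll.MoveToFront(e)
		e.Value.(*kv).val = val
		return ""
	}
	c.cache[key] = c.ll.PushFront(&kv{key, val})
	if c.ll.Len() > c.max {
		// Capacity exceeded: drop the tail, i.e. the least recently used key.
		back := c.ll.Back()
		c.ll.Remove(back)
		old := back.Value.(*kv).key
		delete(c.cache, old)
		return old
	}
	return ""
}

func (c *tinyLRU) Get(key string) (int, bool) {
	if e, ok := c.cache[key]; ok {
		c.ll.MoveToFront(e) // touching a key makes it "recently used"
		return e.Value.(*kv).val, true
	}
	return 0, false
}

func main() {
	c := newTiny(2)
	c.Add("a", 1)
	c.Add("b", 2)
	c.Get("a")          // "a" is now the most recently used
	ev := c.Add("c", 3) // capacity exceeded: the least recently used is "b"
	fmt.Println("evicted:", ev)
	_, ok := c.Get("b")
	fmt.Println("b present:", ok)
}
```

Note how the Get of "a" saves it from eviction: without that access, "a" would have been at the tail and evicted instead of "b".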
Third, Singleflight
This package implements suppression of duplicate executions of the same function call. For example, suppose an ordinary function A() needs 10s to return a result; if 10 clients call A() at the same time, it could take 100s of computation to serve them all, yet the computations are identical and so are the results. So we can detect that the same computation is already in progress, skip the duplicate executions, wait for the in-flight one to finish, and hand its result back to every caller. Let's look at how groupcache implements this:
```go
// Package singleflight provides a duplicate function call suppression
// mechanism.
package singleflight

import "sync"

// call is an in-flight or completed Do call.
type call struct {
	wg  sync.WaitGroup
	val interface{}
	err error
}

// Group represents a class of work and forms a namespace in which
// units of work can be executed with duplicate suppression.
type Group struct {
	mu sync.Mutex // protects m
	// m is lazily initialized; its values are *call, the struct above.
	m map[string]*call
}

// Do executes and returns the results of the given function, making
// sure that only one execution is in-flight for a given key at a
// time. If a duplicate comes in, the duplicate caller waits for the
// original to complete and receives the same results.
func (g *Group) Do(key string, fn func() (interface{}, error)) (interface{}, error) {
	g.mu.Lock()
	if g.m == nil {
		g.m = make(map[string]*call)
	}
	// If a call with the same key is already in flight, wait for the
	// original call to complete, then return its val and err.
	if c, ok := g.m[key]; ok {
		g.mu.Unlock()
		c.wg.Wait()
		// Once the original goroutine has finished, its results are
		// stored in the call struct, so we just return them here.
		return c.val, c.err
	}
	// Get a pointer to a new call struct.
	c := new(call)
	// One goroutine begins, so Add(1); this path runs at most once per key
	// at a time, i.e. there are no concurrent calls to the fn() below.
	c.wg.Add(1)
	// Record c as the in-flight call for this key,
	// similar to giving the call procedure a name.
	g.m[key] = c
	g.mu.Unlock()

	// The actual function call.
	c.val, c.err = fn()
	// Done() here pairs with the Wait() in the if-branch above.
	c.wg.Done()

	// Execution is complete: delete this key.
	g.mu.Lock()
	delete(g.m, key)
	g.mu.Unlock()

	return c.val, c.err
}
```
Today's installment may be a bit long, and it touches on linked lists and the like; I hope you will use the Internet to fill in the small points I didn't cover and thoroughly understand the source code of these packages.
Looking back at the project structure: apart from the testpb package, we have now covered every package. testpb is the test program corresponding to groupcachepb. Next, we can analyze all the code outside these packages, including the calling logic of the protobuf part.
That's all for today; only the final installment of this groupcache source walkthrough remains!