Objective
Channel is one of the iconic concepts in Go: simple, yet very powerful.
A channel, as the name implies, is a conduit for passing data in a concurrent environment. Combined with goroutines, another core concept in Go, it makes concurrent programming in Go clear and concise while remaining efficient and powerful. A minimal example of the two working together follows below.
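As a quick warm-up, here is my own minimal example (not from the runtime): a goroutine sends a value through a channel and the main goroutine receives it.

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered channel

	// A goroutine does some work and sends the result through the channel.
	go func() {
		ch <- "hello from a goroutine"
	}()

	// The main goroutine blocks here until the value arrives.
	fmt.Println(<-ch)
}
```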
Today I will try to read the Go source code that implements channels: time to pick up my rusty scalpel and do a little dissection of the lab mouse.
Channel's underlying data structure
```go
type hchan struct {
	qcount   uint           // total data in the queue
	dataqsiz uint           // size of the circular queue
	buf      unsafe.Pointer // points to an array of dataqsiz elements
	elemsize uint16
	closed   uint32
	elemtype *_type // element type
	sendx    uint   // send index
	recvx    uint   // receive index
	recvq    waitq  // list of recv waiters
	sendq    waitq  // list of send waiters

	// lock protects all fields in hchan, as well as several
	// fields in sudogs blocked on this channel.
	//
	// Do not change another G's status while holding this lock
	// (in particular, do not ready a G), as this can deadlock
	// with stack shrinking.
	lock mutex
}
```
hchan is the underlying data structure of a channel. Looking at the source definition, it is quite clear:
- qcount: number of elements currently in the channel's buffer queue
- dataqsiz: size of the buffer queue (the buffer size specified when the channel was created; the buffer is a ring queue). len() and cap() on a channel surface qcount and dataqsiz, as shown in the small example after this list.
- buf: pointer to the channel's buffer queue
- elemsize: size of a single element passed through the channel
- closed: flag marking whether the channel has been closed
- elemtype: type of the elements passed through the channel
- sendx: index into the buffer queue where the next element will be sent (written)
- recvx: index into the buffer queue where the next element will be received (read)
- recvq: list of goroutines waiting to receive an element from the channel
- sendq: list of goroutines waiting to send an element to the channel
- lock: the channel's lock
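A tiny illustration of the first two fields (my own example, not from the runtime): len() and cap() on a channel report qcount and dataqsiz respectively.

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3) // dataqsiz = 3
	ch <- 1
	ch <- 2                       // qcount is now 2
	fmt.Println(len(ch), cap(ch)) // prints: 2 3
}
```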
recvq and sendq are both of type waitq. Let's take a quick look at waitq and the sudog structure it links together.
```go
type waitq struct {
	first *sudog
	last  *sudog
}

type sudog struct {
	g          *g
	selectdone *uint32 // CAS to 1 to win select race (may point to stack)
	next       *sudog
	prev       *sudog
	elem       unsafe.Pointer // data element (may point to stack)
	...
	c *hchan // channel
}
```
As you can see, waitq is a doubly linked list whose nodes are sudogs. From the definition of sudog you can roughly tell that a sudog is a wrapper around a g, i.e. it associates extra data with a goroutine. It records information such as which goroutine g is waiting on the channel, the element elem it is waiting to send or receive, and so on.
Channel initialization
```go
func makechan(t *chantype, size int64) *hchan {
	elem := t.elem

	// compiler checks this but be safe.
	if elem.size >= 1<<16 {
		throw("makechan: invalid channel element type")
	}
	if hchanSize%maxAlign != 0 || elem.align > maxAlign {
		throw("makechan: bad alignment")
	}
	if size < 0 || int64(uintptr(size)) != size || (elem.size > 0 && uintptr(size) > (_MaxMem-hchanSize)/elem.size) {
		panic(plainError("makechan: size out of range"))
	}

	var c *hchan
	if elem.kind&kindNoPointers != 0 || size == 0 {
		// Allocate memory in one call.
		// Hchan does not contain pointers interesting for GC in this case:
		// buf points into the same allocation, elemtype is persistent.
		// SudoG's are referenced from their owning thread so they can't be collected.
		// TODO(dvyukov,rlh): Rethink when collector can move allocated objects.
		c = (*hchan)(mallocgc(hchanSize+uintptr(size)*elem.size, nil, true))
		if size > 0 && elem.size != 0 {
			c.buf = add(unsafe.Pointer(c), hchanSize)
		} else {
			// race detector uses this location for synchronization
			// Also prevents us from pointing beyond the allocation (see issue 9401).
			c.buf = unsafe.Pointer(c)
		}
	} else {
		c = new(hchan)
		c.buf = newarray(elem, int(size))
	}
	c.elemsize = uint16(elem.size)
	c.elemtype = elem
	c.dataqsiz = uint(size)

	if debugChan {
		print("makechan: chan=", c, "; elemsize=", elem.size, "; elemalg=", elem.alg, "; dataqsiz=", size, "\n")
	}
	return c
}
```
The first part consists of three if checks that validate the initialization parameters.
- if elem.size >= 1<<16:
Checks the size of the channel's element type; it must be less than 2^16 bytes (64 KB).
- if hchanSize%maxAlign != 0 || elem.align > maxAlign
I haven't fully worked this one out yet (something to do with memory alignment?).
- if size < 0 || int64(uintptr(size)) != size || (elem.size > 0 && uintptr(size) > (_MaxMem-hchanSize)/elem.size)
- The first condition checks that the buffer size is greater than or equal to 0.
- int64(uintptr(size)) != size checks that size still equals itself after being converted to uintptr and back. Since uintptr is unsigned and may be narrower than int64 (32 bits on 32-bit platforms), a negative or oversized size turns into a completely different number after the round trip, while a valid size survives unchanged (see the small demo after this list).
- The last condition checks that the channel's buffer is no larger than the amount of memory that can be allocated on the heap.
_MaxMem is the maximum amount of memory that can be allocated on the heap.
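A quick demo of that round-trip check (my own sketch, not runtime code; it simulates a 32-bit uintptr with uint32 so the effect shows up regardless of the platform you run it on):

```go
package main

import "fmt"

// roundTrips32 reports whether size survives conversion to a 32-bit
// unsigned integer and back, mimicking int64(uintptr(size)) != size
// on a platform where uintptr is 32 bits wide.
func roundTrips32(size int64) bool {
	return int64(uint32(size)) == size
}

func main() {
	for _, size := range []int64{8, -1, 1 << 40} {
		fmt.Printf("size=%d survives the round trip: %v\n", size, roundTrips32(size))
	}
	// Output:
	// size=8 survives the round trip: true
	// size=-1 survives the round trip: false
	// size=1099511627776 survives the round trip: false
}
```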
The second part does the actual memory allocation.
- When the element type contains no pointers (kindNoPointers), a single contiguous block of hchanSize+uintptr(size)*elem.size bytes is allocated, and c.buf points to the first element slot, right after the hchan header (a toy sketch of this layout follows after this list).
- If the channel's buffer size is 0, no buffer space is actually allocated for c.buf.
- If the element type does contain pointers (not kindNoPointers), the hchan and the buf array are allocated separately. Judging from the comment in the source, this appears to be GC-related: a buffer that may hold pointers is allocated as a typed array so the collector can scan it, while the pointer-free case can live in one untyped block.
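To make the single-allocation layout concrete, here is my own toy sketch (not the runtime code, and it glosses over alignment and GC details): a header and its element buffer share one contiguous block, and buf is simply "address of header + size of header".

```go
package main

import (
	"fmt"
	"unsafe"
)

// toyChan is a cut-down stand-in for hchan, used only to illustrate
// where buf points in the single-allocation case.
type toyChan struct {
	qcount   uint
	dataqsiz uint
	buf      unsafe.Pointer
}

func main() {
	const n = 4
	elemSize := unsafe.Sizeof(int64(0))

	// One contiguous allocation: the header followed by n element slots.
	block := make([]byte, unsafe.Sizeof(toyChan{})+uintptr(n)*elemSize)
	c := (*toyChan)(unsafe.Pointer(&block[0]))
	c.dataqsiz = n

	// buf points just past the header, inside the same block.
	c.buf = unsafe.Pointer(uintptr(unsafe.Pointer(c)) + unsafe.Sizeof(toyChan{}))

	fmt.Printf("header at %p, buffer starts at %p\n", c, c.buf)
}
```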
Channel Send
```go
// entry point for c <- x from compiled code
//go:nosplit
func chansend1(c *hchan, elem unsafe.Pointer) {
	chansend(c, elem, true, getcallerpc(unsafe.Pointer(&c)))
}
```
A channel send is the act of sending data into the channel; in Go code this is the operation c <- x.
In the implementation, chansend1() calls chansend() with the block parameter set to true.
```go
func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool {
	if c == nil {
		if !block {
			return false
		}
		gopark(nil, nil, "chan send (nil chan)", traceEvGoStop, 2)
		throw("unreachable")
	}
	...
}
```
chansend() first checks whether c is nil. if c == nil means the channel has not been initialized; in that case gopark is called directly to put the current goroutine into the waiting state, and the unlockf parameter used for waking it up is nil, so nothing will ever wake it and the runtime reports a deadlock. In other words, a channel must be initialized before it can be used, otherwise the send deadlocks.
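A minimal demonstration of this case (my own example): sending on a nil channel blocks forever and the runtime detects the deadlock.

```go
package main

func main() {
	var ch chan int // declared but never initialized with make: ch is nil
	ch <- 1         // blocks forever
	// fatal error: all goroutines are asleep - deadlock!
}
```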
Then the actual send process begins; everything that follows is done while holding the lock.
```go
lock(&c.lock)
```
```go
if c.closed != 0 {
	unlock(&c.lock)
	panic(plainError("send on closed channel"))
}
```
If the channel is already closed, the lock is released and the code panics directly. In other words, we cannot send data to a closed channel, as the snippet below shows.
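A minimal example (mine) of the resulting panic:

```go
package main

func main() {
	ch := make(chan int, 1)
	close(ch)
	ch <- 1 // panic: send on closed channel
}
```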
- Sending data directly to a waiting receiver
```go
if sg := c.recvq.dequeue(); sg != nil {
	// Found a waiting receiver. We pass the value we want to send
	// directly to the receiver, bypassing the channel buffer (if any).
	send(c, sg, ep, func() { unlock(&c.lock) }, 3)
	return true
}
```
The code first tries to dequeue a goroutine from the receive-wait queue; if there is one, the data is handed to it directly. In other words, data sent to a channel goes to the receive-wait queue first: if a goroutine is already waiting for a value, it gets the data directly, the lock is released, and the operation is complete.
Here the send() function copies the data to the waiting sudog sg and then wakes the waiting goroutine sg.g with goready().
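A user-level illustration of this path (my own example): with an unbuffered channel, a receiver that is already parked on recvq gets the value handed over directly by the send.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int) // unbuffered: there is no buffer to go through
	done := make(chan struct{})

	go func() {
		v := <-ch // this goroutine parks on recvq first
		fmt.Println("received", v)
		close(done)
	}()

	time.Sleep(10 * time.Millisecond) // crude way to let the receiver park first (demo only)
	ch <- 42                          // the send finds the waiting sudog and hands 42 over directly
	<-done
}
```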
- Putting data into the buffer
```go
if c.qcount < c.dataqsiz {
	// Space is available in the channel buffer. Enqueue the element to send.
	qp := chanbuf(c, c.sendx)
	if raceenabled {
		raceacquire(qp)
		racerelease(qp)
	}
	typedmemmove(c.elemtype, qp, ep)
	c.sendx++
	if c.sendx == c.dataqsiz {
		c.sendx = 0
	}
	c.qcount++
	unlock(&c.lock)
	return true
}
```
If no receiving goroutine is waiting, the code checks whether the channel's buffer queue still has a free slot. If it does, the data is placed into the buffer.
The c.sendx cursor locates the free slot in the queue and the data is copied there; then the cursor is advanced, the counters are updated, the lock is released, and the operation is complete.
```go
if c.sendx == c.dataqsiz {
	c.sendx = 0
}
```
This handling of the cursor shows that the buffer queue is a ring: when the send index reaches the end, it wraps back to 0. A small stand-alone sketch of the same idea follows.
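A minimal ring-buffer sketch (my own code, not the runtime's) showing the same wrap-around trick for the send and receive cursors:

```go
package main

import "fmt"

// ring is a tiny fixed-size ring queue of ints, mimicking the way hchan
// uses sendx/recvx cursors over a circular buffer.
type ring struct {
	buf          []int
	sendx, recvx int
	qcount       int
}

func (r *ring) send(v int) bool {
	if r.qcount == len(r.buf) {
		return false // buffer full: the real runtime would park the sender here
	}
	r.buf[r.sendx] = v
	r.sendx++
	if r.sendx == len(r.buf) {
		r.sendx = 0 // wrap around, just like c.sendx = 0
	}
	r.qcount++
	return true
}

func (r *ring) recv() (int, bool) {
	if r.qcount == 0 {
		return 0, false // buffer empty: the real runtime would park the receiver here
	}
	v := r.buf[r.recvx]
	r.recvx++
	if r.recvx == len(r.buf) {
		r.recvx = 0
	}
	r.qcount--
	return v, true
}

func main() {
	r := &ring{buf: make([]int, 3)}
	for i := 1; i <= 3; i++ {
		r.send(i)
	}
	for {
		v, ok := r.recv()
		if !ok {
			break
		}
		fmt.Println(v) // prints 1, 2, 3 in order
	}
}
```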
```go
gp := getg()
mysg := acquireSudog()
mysg.releasetime = 0
if t0 != 0 {
	mysg.releasetime = -1
}
// No stack splits between assigning elem and enqueuing mysg
// on gp.waiting where copystack can find it.
mysg.elem = ep
mysg.waitlink = nil
mysg.g = gp
mysg.selectdone = nil
mysg.c = c
gp.waiting = mysg
gp.param = nil
c.sendq.enqueue(mysg)
goparkunlock(&c.lock, "chan send", traceEvGoBlockSend, 3)
```
If the buffer is also full, the only option is to block the sending goroutine and wait for a suitable opportunity to send the data.
getg() obtains a pointer to the current goroutine's g, acquireSudog() allocates a sudog, and the current goroutine together with the data to send is packed into it and linked onto the sendq list. goparkunlock() then puts the goroutine into the waiting state and releases the lock. The operation is complete.
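A small user-level example (mine) of this blocking path: the second send below parks the sender until the receiver frees a slot.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 1)
	ch <- 1 // fills the buffer

	go func() {
		time.Sleep(10 * time.Millisecond)
		<-ch // frees a slot and wakes the blocked sender
	}()

	ch <- 2 // buffer is full here: main parks on sendq until the receive above runs
	fmt.Println("second send completed, buffered value:", <-ch) // prints 2
}
```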
Channel Receive
```go
// entry points for <- c from compiled code
//go:nosplit
func chanrecv1(c *hchan, elem unsafe.Pointer) {
	chanrecv(c, elem, true)
}
```
A channel receive is the act of a goroutine receiving data from the channel; the corresponding Go code is <- c.
In the implementation, chanrecv1() calls chanrecv() with the block parameter set to true.
```go
func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool) {
	...
	if c == nil {
		if !block {
			return
		}
		gopark(nil, nil, "chan receive (nil chan)", traceEvGoStop, 2)
		throw("unreachable")
	}
	...
}
```
As with sending, a receive first checks whether c is nil; if it is, gopark() puts the current goroutine to sleep, again resulting in a deadlock.
The receive operation also takes the lock first, and then the actual work begins.
```go
if c.closed != 0 && c.qcount == 0 {
	if raceenabled {
		raceacquire(unsafe.Pointer(c))
	}
	unlock(&c.lock)
	if ep != nil {
		typedmemclr(c.elemtype, ep)
	}
	return true, false
}
```
Here receive differs slightly from send: when the channel is closed and its buffer queue is empty, the receive simply returns (with the zero value) instead of panicking.
In other words, it is allowed to keep receiving data from a closed channel; see the example below.
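A small example (mine) of draining a closed channel: buffered values are still delivered, and once the buffer is empty the receive returns the zero value with ok == false.

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch)

	fmt.Println(<-ch) // 1: buffered data is still delivered after close
	fmt.Println(<-ch) // 2
	v, ok := <-ch
	fmt.Println(v, ok) // 0 false: channel closed and drained
}
```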
- Receiving from a waiting sender
```go
if sg := c.sendq.dequeue(); sg != nil {
	// Found a waiting sender. If buffer is size 0, receive value
	// directly from sender. Otherwise, receive from head of queue
	// and add sender's value to the tail of the queue (both map to
	// the same buffer slot because the queue is full).
	recv(c, sg, ep, func() { unlock(&c.lock) }, 3)
	return true, true
}
```
The code tries to dequeue a waiting goroutine from the send-wait queue; if one exists, recv() is called to receive the data.
The recv() function is a little more involved than send(), so let's look at it briefly.
```go
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int) {
	if c.dataqsiz == 0 {
		...
		if ep != nil {
			// copy data from sender
			recvDirect(c.elemtype, sg, ep)
		}
	} else {
		qp := chanbuf(c, c.recvx)
		...
		// copy data from queue to receiver
		if ep != nil {
			typedmemmove(c.elemtype, ep, qp)
		}
		// copy data from sender to queue
		typedmemmove(c.elemtype, qp, sg.elem)
		c.recvx++
		if c.recvx == c.dataqsiz {
			c.recvx = 0
		}
		c.sendx = c.recvx // c.sendx = (c.sendx+1) % c.dataqsiz
	}
	sg.elem = nil
	gp := sg.g
	unlockf()
	gp.param = unsafe.Pointer(sg)
	if sg.releasetime != 0 {
		sg.releasetime = cputicks()
	}
	goready(gp, skip+1)
}
```
recv() handles two situations:
- c.dataqsiz == 0: the channel is unbuffered, so the data is copied from the waiting sender directly to the receiver.
- c.dataqsiz != 0: the channel is buffered (and, since a sender is blocked, the buffer must be full), so:
The element at the receive cursor is taken from the head of the buffer and copied to the receiver.
The waiting sender's data is copied into the slot that was just freed and the cursors move forward, so newly sent data still ends up at the tail of the queue (a user-level example follows after this list).
The channel lock is then released,
and the waiting sender is woken up with goready().
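Here is a user-level illustration (my own example) of the buffered case: the buffer is full and a sender is parked on sendq; a single receive takes the oldest value from the head, and the parked sender's value slides into the freed slot at the tail.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2 // buffer is now full

	go func() {
		ch <- 3 // blocks: this goroutine parks on sendq
	}()
	time.Sleep(10 * time.Millisecond) // crude way to make sure the sender is parked (demo only)

	fmt.Println(<-ch) // 1: taken from the head of the buffer; 3 moves into the freed slot
	fmt.Println(<-ch) // 2
	fmt.Println(<-ch) // 3
}
```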
- Blocking the receiving goroutine
```go
gp := getg()
mysg := acquireSudog()
mysg.releasetime = 0
if t0 != 0 {
	mysg.releasetime = -1
}
// No stack splits between assigning elem and enqueuing mysg
// on gp.waiting where copystack can find it.
mysg.elem = ep
mysg.waitlink = nil
gp.waiting = mysg
mysg.g = gp
mysg.selectdone = nil
mysg.c = c
gp.param = nil
c.recvq.enqueue(mysg)
goparkunlock(&c.lock, "chan receive", traceEvGoBlockRecv, 3)
```
If no sender is waiting and there is no data in the buffer, the receiving goroutine is blocked until there is data to receive.
As in the send path, the current goroutine is wrapped in a sudog, linked onto the recvq list, and then put to sleep.
Summary
- A channel must be initialized (with make) before it can be used.
- After a channel is closed, sending is no longer allowed, but the remaining buffered data can still be received. So prefer closing the channel from the sending side (see the example after this list).
- With an unbuffered channel, take care that operations within a single goroutine do not cause a deadlock.
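A small producer/consumer example (my own) of the "close from the sending side" guideline: the producer owns the channel's lifetime and closes it when done, and the consumer drains it safely with range.

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)

	// The sender closes the channel when it has nothing more to send.
	go func() {
		for i := 0; i < 5; i++ {
			ch <- i
		}
		close(ch)
	}()

	// The receiver simply drains until the channel is closed and empty.
	for v := range ch {
		fmt.Println(v)
	}
}
```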
Open questions
- How hchanSize is calculated
- The role of the maxAlign parameter
- Memory allocation details
- Sorting out the overall design ideas
Note 1: Source code based on go1.9.2
Note 2: In the source code quoted in this article, ... indicates omitted code.