About Goroutines and Channels


About the principles of goroutines

There is a lot of material to cover here, such as what actually happens when a goroutine starts; I will fill this section in bit by bit.

Fundamentals of Channel

Channel is a special mechanism in the Go language that can synchronize two concurrently executing functions and let them communicate by passing values of a specified type to each other. These are, in fact, the two main functions of a channel.

Channels are divided into buffered and unbuffered channels according to whether a buffer size is given at initialization. A channel must be created with make: for example, make(chan int, 10) declares a channel with buffer space for 10 int values, while make(chan int) declares an unbuffered channel.

The built-in function close(strChan) closes a channel; closing should only be done when it is known to be safe. Basic principles: the built-in function len(strChan) reports the number of elements currently in the channel, and cap(strChan) reports its total capacity, which never changes after initialization.

    • The receiver should never close the channel, because it cannot know whether the sender still has data to send. Channels have a useful property here: after the sender closes a channel, the receiver can still receive all of the data already in it. Whoever sends on the channel is responsible for finally closing it; that is the principle.
    • Note the element, ok := <-ch syntax: once the channel is closed and drained, ok becomes false and element becomes the zero value of the channel's element type. This ok idiom is commonly used to decide when to exit a loop.

The following code illustrates this, and also shows a typical usage pattern of goroutines:

package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 1)
	sign := make(chan byte, 2)
	go func() {
		for i := 0; i < 5; i++ {
			ch <- i
			time.Sleep(1 * time.Second)
		}
		close(ch)
		fmt.Println("The channel is closed.")
		sign <- 0
	}()
	go func() {
		// This loop keeps trying to read from ch. Even after ch has been
		// closed by the sender, the remaining values can still be read out.
		// When ok is false, no more data could be read from ch, so we jump
		// out of the loop. Note this way of judging.
		for {
			fmt.Printf("before extract channel len: %v, ", len(ch))
			e, ok := <-ch
			fmt.Printf("channel value: %d if extract ok: (%v) after extraction channel len: %v channel cap: %v\n",
				e, ok, len(ch), cap(ch))
			if !ok {
				break
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("done.")
		sign <- 1
	}()
	// Without receiving these two values, the main goroutine would end right
	// here. Receiving them implements a synchronization: wait for both go
	// funcs to finish before ending the main process. Note this technique.
	<-sign
	<-sign
}

/* output:
before extract channel len: 1, channel value: 0 if extract ok: (true) after extraction channel len: 0 channel cap: 1
before extract channel len: 1, channel value: 1 if extract ok: (true) after extraction channel len: 0 channel cap: 1
before extract channel len: 1, channel value: 2 if extract ok: (true) after extraction channel len: 0 channel cap: 1
before extract channel len: 1, channel value: 3 if extract ok: (true) after extraction channel len: 0 channel cap: 1
The channel is closed.
before extract channel len: 1, channel value: 4 if extract ok: (true) after extraction channel len: 0 channel cap: 1
before extract channel len: 0, channel value: 0 if extract ok: (false) after extraction channel len: 0 channel cap: 1
done.
*/

Basic principles of channels

    • When a channel's buffer is full, sending more data to it blocks the goroutine. If the channel has not been initialized (its value is nil), sending to it blocks forever.
    • Closing a channel should be done by the sending side; after the channel is closed, the receiver can still receive any data remaining in it.
    • Values sent into a channel are passed by value: the receiver gets a copy.

Channel Usage Scenario Analysis

Usage Scenarios (1)

Note a pitfall around line 346 of App.go: the return value of time.After was placed inside the for loop, so every iteration created a brand-new channel. Also note the way the code breaks out of multiple loop layers (a labeled break).
The main reference here is the related content in the book "Go Concurrent Programming in Action".

The code is as follows:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	names := []string{"E", "H", "R", "J", "M"}
	for _, name := range names {
		go func() {
			fmt.Printf("Hello , %s \n", name)
		}()
	}
	// Without the runtime call there would be no output at all, because the
	// for loop finishes so quickly that main exits right away. After it, the
	// five go funcs created in the loop are scheduled.
	runtime.Gosched()
}

/* output:
Hello , M
Hello , M
Hello , M
Hello , M
Hello , M
*/

As the code shows, the scheduler does not run one go func per loop iteration, and you should not make any assumptions about when each go func executes. (All five closures capture the same loop variable, which already holds "M" by the time they run; Go versions before 1.22 share one loop variable across iterations.)

Optimization scenarios

One idea is to place runtime.Gosched() at the end of each for iteration, so the scheduler is invoked once per loop. This may produce the expected result, but not reliably every time: if a go func takes some time to run, several loop iterations may have gone by during that run.

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	names := []string{"E", "H", "R", "J", "M", "N", "O", "P"}
	for _, name := range names {
		go func() {
			time.Sleep(1000 * time.Nanosecond)
			fmt.Printf("Hello , %s \n", name)
		}()
		runtime.Gosched()
	}
}

/* output:
Hello , E
Hello , J
Hello , J
Hello , P
Hello , P
Hello , P
*/

Another idea is to pass the value as a parameter: give each goroutine its own argument. Although the goroutine runs outside the main function's control, it carries the token that main handed it, a kind of decoupling, so the loop variable no longer affects the printed results, as in the following code:

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	names := []string{"E", "H", "R", "J", "M", "N", "O", "P"}
	for _, name := range names {
		go func(who string) {
			time.Sleep(1000 * time.Nanosecond)
			fmt.Printf("Hello , %s \n", who)
		}(name)
	}
	runtime.Gosched()
}

/* output:
Hello , E
Hello , H
Hello , R
Hello , J
Hello , M
*/

However, this approach is still quite problematic. It only avoids duplicated output when the function executes very quickly; if it runs longer, the main function will likely terminate prematurely. And are the goroutines, created in order, also scheduled and executed in order? That is still uncertain. Some goroutines may never be scheduled and executed at all, as the output of the code above shows, and the output differs from run to run.

Usage Scenarios (2)

I ran into this scenario in code: after a service is created successfully, we must wait for an IP to be assigned; only once the IP is allocated is the service formally deployed, and all of the information is then returned to the front end. So the plan was: after the service is created, poll in a loop for the IP; if allocation succeeds, return success; if a time limit passes first, return failure. The relevant part of the code looks like this:

// A channel used for time control.
// Note: declare this outside the loop; otherwise each iteration would get a
// brand-new channel back from time.After.
t := time.After(time.Second * 10)
// Note this way of breaking out of multiple loop layers: a plain break would
// only exit the select statement.
A:
for {
	select {
	// The timeout fired, so return an error message.
	case <-t:
		log.Println("time out to allocate ip")
		// delete the service which failed to deploy
		a.Ctx.ResponseWriter.Header().Set("Content-Type", "application/json")
		http.Error(a.Ctx.ResponseWriter, `{"errorMessage":"`+"deploy error : time out"+`"}`, 406)
		break A
	// The timeout has not fired yet, i.e. t has sent nothing back, so the
	// select falls through to the default block, which checks whether the
	// IP has been allocated.
	default:
		// log.Println("logout:", <-timeout)
		sename := service.ObjectMeta.Labels["name"]
		podslist, err := a.Podip(sename)
		if err != nil {
			log.Println(err.Error())
			a.Ctx.ResponseWriter.Header().Set("Content-Type", "application/json")
			http.Error(a.Ctx.ResponseWriter, `{"errorMessage":"`+err.Error()+`"}`, 406)
			break A
		}
		if len(podslist) == 0 {
			continue
		} else {
			log.Println("allocation ok ......")
			a.Data["json"] = detail
			a.ServeJson()
			break A
		}
	}
}

Usage Scenarios (3)

A common scenario: take items from an old resource pool, process them, and put them into a new resource pool. Done in the traditional, fully serial way, this is inefficient; the granularity is too coarse, and it can be refined down to individual resource units.
For example, in the case on p. 339 of the book, a resource pool stores people's information; each person is taken out, processed, and then saved to a new resource pool. The old and new pools are modeled with oldPersonArray and newPersonArray:

The specific code is as follows:

package main

// reference: Go Concurrent Programming in Action, p. 337
import (
	"log"
	"strconv"
	"time"
)

type person struct {
	name string
	age  int
	addr string
}

var oldPersonArray = [5]person{}
var newPersonArray = [5]person{}

type PersonHandler interface {
	Batch(origs <-chan person) <-chan person
	Handle(orig *person)
}

// PersonHandlerImpl implements the PersonHandler interface.
type PersonHandlerImpl struct{}

// Batch receives persons from origs, processes each one, and sends it on to
// a new channel, which is returned.
func (handler PersonHandlerImpl) Batch(origs <-chan person) <-chan person {
	dests := make(chan person, 100)
	go func() {
		for {
			p, ok := <-origs
			if !ok {
				close(dests)
				break
			}
			handler.Handle(&p)
			log.Printf("old value: %v\n", p)
			// time.Sleep(time.Second)
			dests <- p
		}
	}()
	return dests
}

// Handle takes a pointer so the modification is visible to the caller.
func (handler PersonHandlerImpl) Handle(orig *person) {
	orig.addr = "new address"
}

func getPersonHandler() PersonHandler {
	return &PersonHandlerImpl{}
}

// fetchPerson sends each element of oldPersonArray into the channel.
func fetchPerson(origs chan<- person) {
	for _, v := range oldPersonArray {
		time.Sleep(time.Second)
		origs <- v
	}
	close(origs)
}

// savePerson fetches values from the channel, stores them into
// newPersonArray, and signals on the returned channel when done.
func savePerson(dest <-chan person) <-chan int {
	intChann := make(chan int)
	go func() {
		index := 0
		for {
			p, ok := <-dest
			if !ok {
				break
			}
			// time.Sleep(time.Second)
			log.Printf("new value transfer %v\n", p)
			newPersonArray[index] = p
			index++
		}
		intChann <- 1
	}()
	return intChann
}

func init() {
	// range passes values here, so assign into oldPersonArray by index.
	tmpLen := len(oldPersonArray)
	for i := 0; i < tmpLen; i++ {
		oldPersonArray[i].addr = "old address"
		oldPersonArray[i].age = i
		oldPersonArray[i].name = strconv.Itoa(i)
	}
	log.Printf("first print init value: %v\n", oldPersonArray)
}

func main() {
	handler := getPersonHandler()
	origs := make(chan person, 100)
	dests := handler.Batch(origs)
	// go func() { fetchPerson(origs) }()
	// Without go func, the next statement waits for this one to finish, so
	// all of origs is filled and closed before receiving from dests starts,
	// and the old values are not printed dynamically once per second. With
	// go func (the commented line above) the output appears dynamically.
	fetchPerson(origs)
	sign := savePerson(dests)
	<-sign
	log.Printf("last print new value: %v\n", newPersonArray)
}

The overall structure: fetchPerson feeds the origs channel, Batch transforms origs into dests, and savePerson drains dests into newPersonArray.

Basic Code Analysis:

    • A PersonHandler interface is declared first, then a struct PersonHandlerImpl that implements the interface's two methods; the init function initializes oldPersonArray. Note that, to reduce errors, the channel parameters are declared as one-way (send-only or receive-only) channels.
    • fetchPerson reads data from oldPersonArray and sends it into the origs channel. Note that after all data has been sent, the sender closes the channel, otherwise a deadlock may result. Also note that in main, if the fetch operation is not run in its own goroutine, everything stays serial: all the data is put into the channel before the other end starts fetching, losing the advantage of concurrency.
    • (3, 4) The Batch function takes person values out of origs, processes them, sends them into dests, and finally returns dests. Note that it does not wait for everything to be passed in before returning: a new goroutine is started to perform the sends while dests is returned immediately, and that goroutine actively closes the channel when done.
    • (5) savePerson receives person values from its receive-only channel, writes each value into the new resource pool, and after all writes finish sends a signal on a channel back to the main goroutine and ends.
    • In summary, goroutines are often used together with channels when information must be produced dynamically. The most common pattern: one goroutine is responsible for writing data into a channel and returning that channel, while other goroutines extract the information from it. For example, some WebSocket code I wrote earlier receives information from the front end, processes it in the background, and dynamically streams the results back, which follows much the same model. In short, be clear about the specific asynchronous flow: which channel exists for which message.