Golang Concurrent Programming: Goroutine + Channel (I)


The Go language is designed to reduce complexity without sacrificing performance, and also to maximize a program's concurrency and code readability in today's large-scale Internet computing. Concurrency in Go starts with the keyword go:

go dosomething() // go on, bro, let's get something done

Case ONE: Concurrent programming

func say(s string) {
    fmt.Printf("%s say\n", s)
}

func main() {
    go say("lisi")
    say("zhangsan")
}

Execution results

zhangsan say

The case above calls the say method twice, but only zhangsan's call prints. The reason is that lisi's call runs in a newly started goroutine, and the main function exits before that goroutine has finished.
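To make that concrete, here is a minimal sketch (not from the original article) of the crudest fix: keeping main alive with time.Sleep so the lisi goroutine gets a chance to finish. The article replaces this approach with a channel in Case Two.

package main

import (
    "fmt"
    "time"
)

func say(s string) {
    fmt.Printf("%s say\n", s)
}

func main() {
    go say("lisi")
    say("zhangsan")
    time.Sleep(100 * time.Millisecond) // crude: give the lisi goroutine time to finish before main exits
}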

Case TWO: Concurrent programming

Lisi is presumably a bit shy and speaks slowly, so we have to wait for lisi. Instead of falling back to serial execution or a sleep, we use a channel as a notification mechanism. Here both zhangsan and lisi say their lines:

func say(s string, c chan int) {
    fmt.Printf("%s say\n", s)
    c <- 1 // send 1 on the channel to signal "I have spoken"
}

func main() {
    c := make(chan int)
    go say("lisi", c)
    go say("zhangsan", c)
    v1, v2 := <-c, <-c
    fmt.Printf("lisi:%d , zhangsan:%d\n", v1, v2)
}

The result is shown below; of course "lisi say" may just as well appear before "zhangsan say". A value of 1 means that person has spoken.

zhangsan say
lisi say
lisi:1 , zhangsan:1

Process decomposition:
1. Create an unbuffered channel.
2. Asynchronously execute go say("lisi", c).
3. Asynchronously execute go say("zhangsan", c).
4. Suppose zhangsan runs first. On an unbuffered channel a send can only complete once a receiver is ready, so zhangsan's c <- 1 waits for main; if lisi happens to be running at the same time, its c <- 1 is blocked as well (see the short demo after this list).
5. v1 := <-c executes, and zhangsan's 1 is taken from the channel.
6. lisi's c <- 1 can now complete.
7. v2 := <-c executes, and lisi's 1 is taken from the channel as well.
8. fmt.Printf executes.
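The short demo below is an assumed illustration (not from the original) of step 4: a send on an unbuffered channel only completes once a receiver is ready, so the sender stays blocked while main is busy.

package main

import (
    "fmt"
    "time"
)

func main() {
    c := make(chan int) // unbuffered: capacity 0
    go func() {
        fmt.Println("sender: before send")
        c <- 1 // blocks here until main executes <-c
        fmt.Println("sender: after send")
    }()
    time.Sleep(1 * time.Second) // main is "busy"; the sender stays blocked the whole time
    fmt.Println("main: received", <-c)
    time.Sleep(100 * time.Millisecond) // give "sender: after send" a chance to print
}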

That is how concurrent programming with channels is used, but notice the problem: step 4 of the decomposition blocks, and steps 5 and 7 block as well (waiting for a value to arrive in the channel). An appropriate revision follows:

func say(s string, c chan int) {
    fmt.Printf("%s say\n", s)
    c <- 1 // send 1 on the channel to signal "I have spoken"
}

func main() {
    c := make(chan int, 2) // change: the channel buffer size is set to 2
    go say("lisi", c)
    go say("zhangsan", c)
    v1, v2 := <-c, <-c
    fmt.Printf("lisi:%d , zhangsan:%d\n", v1, v2)
}

The process decomposition is now:
1. Create a channel with a buffer of 2.
2. Asynchronously execute go say("lisi", c).
3. Asynchronously execute go say("zhangsan", c).
4. Suppose zhangsan runs first and puts its 1 into channel c; if lisi happens to be running at the same time, lisi's 1 also goes into channel c, because the buffer has room for both.
5. v1 := <-c executes, and zhangsan's 1 is taken from the channel.
6. v2 := <-c executes, and lisi's 1 is taken from the channel as well.
7. fmt.Printf executes.

In theory this only removes one step, but in practice it can do better, because step 4 no longer blocks (zhangsan's and lisi's values can be placed into the buffer at the same time).

Steps 5 and 6 still block (blocking here is comparable in meaning to await in C#), but as soon as a value is available in channel c it is taken out immediately; once v1 and v2 have their values, fmt.Printf executes.
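As a quick illustration (an assumed sketch, not the article's code): with a buffer of 2, both sends complete even before any receive happens; only a third send would block.

package main

import "fmt"

func main() {
    c := make(chan int, 2)
    c <- 1 // does not block: the buffer has room
    c <- 2 // does not block: the buffer is now full
    // c <- 3 // would block here until a value is received
    fmt.Println(<-c, <-c) // prints: 1 2
}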

But there are still problems!

    1. What if the say method has a return value? The following code illustrates the problem:

      func say(s string) int {
          fmt.Printf("%s say\n", s)
          return 1
      }

      func main() {
          msg := go say("lisi") // PS: this fails to compile: syntax error: unexpected go, expecting expression
      }

    2. What if the value I want to pass is not an int or a string but a struct (entity)? (A sketch addressing this appears after the example below.)
    3. What if say was written by someone else, its parameters include no channel, and I still want to call it concurrently?

Just look at the code.

package main

import (
    "fmt"
)

// student struct (entity)
type Stu struct {
    Name string
    Age  int
}

func say(name string) Stu {
    fmt.Printf("%s say\n", name)
    stu := Stu{Name: name, Age: 18}
    return stu
}

func main() {
    c := make(chan int)
    go func() {
        stu := say("lisi") // returns a student entity
        fmt.Printf("My name is %s, age %d\n", stu.Name, stu.Age)
        c <- 1 // signal that the call has finished
    }()
    fmt.Println("go func")
    <-c
    fmt.Println("end")
}

Execution result:
go func
lisi say
My name is lisi, age 18
end
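Regarding question 2 above: a channel's element type is not limited to int or string; it can be any type, including a struct. The following is an assumed variant of the example (not the author's code) that sends say's result back to main through a chan Stu:

package main

import "fmt"

type Stu struct {
    Name string
    Age  int
}

func say(name string) Stu {
    fmt.Printf("%s say\n", name)
    return Stu{Name: name, Age: 18}
}

func main() {
    result := make(chan Stu) // a channel whose element type is a struct
    go func() {
        result <- say("lisi") // the "return value" travels through the channel
    }()
    stu := <-result
    fmt.Printf("My name is %s, age %d\n", stu.Name, stu.Age)
}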

Error Demonstration: Deadlock

func say(s string, c chan int) {
    fmt.Printf("%s say\n", s)
    //c <- 1 // a value should have been sent on c here, but it was forgotten
}

func main() {
    c := make(chan int)
    go say("lisi", c)
    v1 := <-c // this blocks forever, causing a deadlock
    fmt.Printf("lisi:%d\n", v1) // never reached because of the deadlock above
}

The execution error is:

fatal error: all goroutines are asleep - deadlock!
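One way to guard against a forgotten send, shown here as an assumed sketch rather than the article's approach, is to wrap the receive in a select with a timeout from the standard library's time.After:

package main

import (
    "fmt"
    "time"
)

func say(s string, c chan int) {
    fmt.Printf("%s say\n", s)
    // the send on c is "forgotten" here, as in the deadlock example above
}

func main() {
    c := make(chan int)
    go say("lisi", c)
    select {
    case v1 := <-c:
        fmt.Printf("lisi:%d\n", v1)
    case <-time.After(2 * time.Second):
        fmt.Println("timed out waiting for lisi") // reached, because nothing is ever sent
    }
}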

Brief analysis of goroutines:

A goroutine, also called a coroutine, is a lightweight user-space thread. It is not scheduled preemptively by the operating system, so the user has to coordinate it (usually with locks and channels). A coroutine can do the same kinds of work a process or a thread can do. Process and thread switching is driven mainly by time slices, while coroutine switching depends on the coroutines themselves. The benefit is that meaningless context switches are avoided, which improves performance; the cost is that the programmer takes on the responsibility of scheduling.
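As an illustration of that "self-scheduling" with locks rather than channels, here is a minimal assumed sketch (not from the original) using sync.WaitGroup to wait for goroutines and sync.Mutex to protect shared state:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    var mu sync.Mutex
    sum := 0
    for i := 1; i <= 100; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            mu.Lock() // the programmer, not the OS, serializes access to sum
            sum += n
            mu.Unlock()
        }(i)
    }
    wg.Wait()        // explicitly wait for all goroutines to finish
    fmt.Println(sum) // prints: 5050
}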

What is a coroutine (from Wikipedia):
Like subroutines, coroutines are program components. Coroutines are more general and flexible than subroutines, but in practice they are not used nearly as widely. Coroutines originated in the Simula and Modula-2 languages, and other languages support them as well.

PS: a subroutine is a piece of code that forms part of a main program.

A goroutine can be regarded as the Go language's implementation of coroutines. It is supported natively by the language; compared with the usual library-based implementations, goroutines are more powerful, and their scheduling is managed to some extent by the Go runtime. One benefit is that when a goroutine blocks (for example on a synchronous I/O operation), the CPU is automatically handed to other goroutines.

To read more later about the relationship between processes, threads, and coroutines, see the following articles:

    1. Processes, threads, lightweight processes, coroutines, goroutines, and Go
    2. How is Golang's goroutine implemented?

A brief analysis of channels

A channel is the mechanism the Go language provides, at the language level, for communication between goroutines. We can use a channel to pass messages between two or more goroutines. A channel is an in-process communication mechanism, so passing an object through a channel behaves much like passing an argument in a function call; for example, pointers can be passed as well. For communication across processes, a distributed approach such as a socket or the HTTP protocol is recommended; Go also has very good support for networking. Channels are typed: a channel can only carry values of one type, which must be specified when the channel is declared. If you know something about UNIX pipes, channels are not hard to understand; think of them as type-safe pipes.
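A small assumed example (not from the original) of both points, the type check and pointer passing; the Msg type here is made up purely for illustration:

package main

import "fmt"

// Msg is a made-up type for this illustration only.
type Msg struct {
    Text string
}

func main() {
    c := make(chan *Msg, 1)
    m := &Msg{Text: "hello"}
    c <- m // sending a pointer through the channel, just like passing a pointer argument
    // c <- "hello" // would not compile: cannot use "hello" (type string) as type *Msg
    got := <-c
    got.Text = "changed"          // both variables refer to the same underlying value
    fmt.Println(m.Text, got == m) // prints: changed true
}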

A detailed understanding of channels is worthwhile; see:
Channel use of Golang

The key point: goroutines are claimed to make it easy to start tens of thousands of concurrent tasks.

package main

import (
    "fmt"
    "time"
)

var sum int = 0

func todo(i int, c chan int) {
    //c <- 1 // send a 1 for each execution
    c <- i // send the value of i into the channel
}

func getsum(count int, c chan int, ce chan int) {
    for i := 0; i <= count; i++ {
        sum += <-c
        //k, isOpen := <-c
        //if !isOpen {
        //    fmt.Printf("channel is closed")
        //    break
        //} else {
        //    fmt.Printf("sum:%d,k:%d\n", sum, k)
        //    sum += k
        //}
    }
    ce <- 1 // signal that getsum has finished
}

func main() {
    count := 100000            // 100k goroutines
    c := make(chan int, count) // buffered channel
    ce := make(chan int)       // semaphore for getsum completion
    // start timing
    begin := time.Now()
    fmt.Println("Start time:", begin)
    for i := 0; i <= count; i++ {
        go todo(i, c)
    }
    // start one goroutine to sum the values in the channel
    go getsum(count, c, ce)
    <-ce // wait here for getsum's end-of-run semaphore
    end := time.Now()
    fmt.Println("End:", end, time.Since(begin))
    fmt.Println(sum)
}

Hardware information
Environment: ThinkPad L460, Win7 x64, 8 GB memory, i5-6200U 2.3 GHz dual core, 4 threads
Language: LiteIDE X33, Golang 1.9.2

Results over multiple runs: between 38.5 ms and 51 ms.

Revise it again:

Change c := make(chan int, count) to c := make(chan int), making the channel unbuffered:

c := make(chan int)    // key change: this is now unbuffered

Results over multiple runs: between 304 ms and 325 ms.

Conclusion: without a buffer the run clearly takes more than 300 ms, and that extra time is essentially time spent blocked on channel reads and writes. So under heavy concurrency the channel buffer size directly affects program performance, which is one of the reasons, mentioned above, why you must do your own scheduling! A rough benchmark sketch follows.
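For readers who want to measure this on their own machine, here is a rough assumed benchmark sketch (not the article's test code) using Go's testing package; produceConsume is a hypothetical helper, and the numbers will vary by hardware.

// Save as chan_bench_test.go and run: go test -bench=.
package main

import "testing"

// produceConsume is a hypothetical helper: one goroutine sends n values
// through a channel with the given buffer size, another receives them all.
func produceConsume(buf, n int) {
    c := make(chan int, buf)
    done := make(chan struct{})
    go func() {
        for i := 0; i < n; i++ {
            c <- i
        }
        close(c)
    }()
    go func() {
        for range c {
        }
        close(done)
    }()
    <-done
}

func BenchmarkUnbuffered(b *testing.B) {
    for i := 0; i < b.N; i++ {
        produceConsume(0, 1000)
    }
}

func BenchmarkBuffered(b *testing.B) {
    for i := 0; i < b.N; i++ {
        produceConsume(1000, 1000)
    }
}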

By the way, here is the equivalent concurrency experiment in .NET Core, run on the same machine and environment as the goroutine test above.

class Program
{
    private static readonly object obj = new object();

    static void Main(string[] args)
    {
        DateTime begin = DateTime.Now;
        long sum = 0;
        Parallel.For(1, 100001, (i) =>
        {
            lock (obj)
            {
                sum += i;
            }
        });
        TimeSpan ts = DateTime.Now - begin;
        Console.WriteLine($"{sum}, elapsed: {ts.TotalMilliseconds}ms");
        Console.ReadLine();
    }
}

Result: around 90-120 ms. Although that is roughly twice the goroutine figure, the difference is not actually very large, and the two are not directly comparable: threads and coroutines are not at the same level, and the goroutine version uses channels while the .NET Core version uses a lock. So treat this as a reference only; overall, .NET Core's performance is quite high.

PS: the C#/Windows world actually also has a coroutine-like construct, the "fiber", but there is relatively little information about it online.
