Go Language Fundamentals: Concurrency

Concurrency

Many people are drawn in by the hype around Go's high concurrency. In fact, looked at from the source, a goroutine is just a super "thread pool" implemented by the official runtime. That said, the small stack footprint of each instance (around 4~5 KB) and the drastically reduced creation and destruction overhead that this implementation allows are the real reasons Go can claim high concurrency. On top of that, the ease of use of goroutines gives developers enormous convenience at the language level.

One thing about high concurrency must be noted: concurrency is not parallelism.

Concurrency is mainly achieved by switching time slices so that tasks appear to run "simultaneously", while parallelism uses multiple cores directly to run multiple threads at the same time. Go, however, lets you set the number of cores to use, so it can take full advantage of a multi-core machine's processing power.
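As a minimal sketch of that last point (using only the standard runtime package; the printed labels are just for illustration), the number of cores and the parallelism limit can be queried and set like this:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // report how many logical CPU cores the machine has
    fmt.Println("CPU cores:", runtime.NumCPU())
    // allow goroutines to run in parallel on all of them
    runtime.GOMAXPROCS(runtime.NumCPU())
    // calling GOMAXPROCS with 0 only queries the current setting
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}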

Goroutines follow the philosophy of "share memory by communicating" rather than "communicate by sharing memory". Go achieves this mainly through channels: a channel is a pipe, and Go shares in-memory data by sending it through the channel.

For beginners, it is fine to simply think of a goroutine as a thread: when you call go on a function and start a goroutine, it behaves like a thread executing that function.

In fact, a goroutine is not equivalent to a thread; goroutines were introduced to replace the thread as the smallest unit of scheduling. When a goroutine starts running, the runtime first looks for an existing thread to run it on; if that thread is blocked, the goroutine is assigned to an idle thread, and if there is no idle thread, a new one is created. Note that when a goroutine finishes, its thread is not recycled, but instead becomes an idle thread.

Let's first look at the simplest goroutine example:

package main

import (
    "fmt"
    "time"
)

func main() {
    // start a goroutine
    go GoRun()
    // sleep here because otherwise main would exit before
    // the child goroutine gets a chance to run
    time.Sleep(2 * time.Second)
}

func GoRun() {
    fmt.Println("Go Go Go!!!")
}

Output:

Go Go Go!!!

Channel

1. A channel is the bridge for communication between goroutines; by default it is blocking, i.e. synchronous

2. It is created with make and closed with close

3. A channel is a reference type

4. You can iterate over a channel with for range and keep receiving from it

5. A channel can be declared as one-way or two-way

6. You can give it a buffer size so that sends do not block until the buffer is full, i.e. it becomes asynchronous

So for the code above, instead of sleeping we can use a channel to achieve the effect we want:

In plain language, the channel here means: the main goroutine says, "you may go off in a goroutine and do your work, but I have opened a pipe; when you are done, put something into the pipe to tell me you have finished."

package main

import (
    "fmt"
)

func main() {
    // declare and create a channel whose element type is bool
    c := make(chan bool)
    // start a goroutine using an anonymous function
    go func() {
        fmt.Println("Go Go Go!!!")
        c <- true // send a value into the channel
    }()
    // receive the value that was just sent into the channel
    <-c
    /*
       After main starts the anonymous child goroutine, it reaches <-c and blocks
       there. It only unblocks once the child has put a value into the channel;
       that value is, in effect, the "I am done" message.
    */
}

The code above can be modified to receive the message with a for range:

package main

import (
    "fmt"
)

func main() {
    // declare and create a channel of type bool; the channel here is two-way,
    // so it can be both sent to and received from
    c := make(chan bool)
    // start a goroutine using an anonymous function
    go func() {
        fmt.Println("Go Go Go!!!")
        c <- true // send a value into the channel
        close(c)  // remember: when you receive with for range, the channel must be closed somewhere, or a deadlock occurs
    }()
    // loop over the channel to receive the value that was just sent
    for v := range c {
        fmt.Println(v)
    }
}

As the code above shows, channels are generally used as two-way channels: they can be both received from and sent to. So in which scenarios are one-way channels commonly used?

One-way channels come in two kinds: receive-only and send-only. They are generally used for parameter and return types. For example, if a function returns a channel that callers are only supposed to receive from, it can be declared receive-only; if some code carelessly tries to send on it, the program is rejected at compile time instead of misbehaving at runtime. The send-only case is analogous. This is really about the safety and robustness of the program, preventing accidental misuse.
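Here is a minimal sketch of that idea (the produce and consume names are made up for illustration): the producer sees the channel as send-only, the consumer sees it as receive-only, and the compiler rejects any use in the wrong direction.

package main

import "fmt"

// produce sees the channel as send-only (chan<- int), so it can only send
func produce(out chan<- int) {
    for i := 0; i < 3; i++ {
        out <- i
    }
    close(out)
    // <-out here would not even compile: cannot receive from a send-only channel
}

// consume sees the channel as receive-only (<-chan int), so it can only receive
func consume(in <-chan int) {
    for v := range in {
        fmt.Println("got:", v)
    }
    // in <- 99 would not compile: cannot send to a receive-only channel
}

func main() {
    c := make(chan int) // an ordinary two-way channel...
    go produce(c)       // ...is converted to send-only when passed here
    consume(c)          // ...and to receive-only when passed here
}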

Here is another point worth knowing: what is the difference between a buffered channel and an unbuffered channel?

make(chan bool, 1) creates a buffered channel with a buffer size of 1.

make(chan bool) or make(chan bool, 0) creates an unbuffered channel.

An unbuffered channel blocks, i.e. it is synchronous, while a buffered channel is asynchronous (up to its capacity). What does that mean in practice? For example:

c1 := make(chan int)    // unbuffered

c2 := make(chan int, 1) // buffered, capacity 1

c1 <- 1 // send the value 1 into the unbuffered channel

For the unbuffered c1, sending 1 is not enough by itself: some other goroutine must execute <-c1 to take the value; only then does c1 <- 1 return, otherwise it blocks forever.

c2 <- 1, on the other hand, does not block, because the buffer size is 1; blocking only happens if a second value is sent before the first one has been taken out.

Here is a metaphor.

Unbuffered is a courier delivering a letter to your door: if you are not at home, he does not leave; only once you take the letter from his hand will he go.

Unbuffered guarantees the letter reaches your hands.

Buffered is a courier who drops the letter into your mailbox and walks away; only if the mailbox is full does he have to wait for space to free up.

Buffered only guarantees the letter makes it into your mailbox.
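Here is a minimal runnable sketch of that difference (the one-second sleep and the variable names are only illustrative): the send into the buffered channel returns immediately, while the send into the unbuffered channel waits until a receiver shows up.

package main

import (
    "fmt"
    "time"
)

func main() {
    // buffered channel: the "mailbox" holds one letter
    c2 := make(chan int, 1)
    c2 <- 1 // returns immediately, the value just sits in the buffer
    fmt.Println("buffered send returned at once, value:", <-c2)

    // unbuffered channel: the "courier" must hand the letter over in person
    c1 := make(chan int)
    go func() {
        time.Sleep(1 * time.Second) // the receiver only shows up a second later
        <-c1                        // take the letter
    }()
    start := time.Now()
    c1 <- 1 // blocks until the goroutine above receives, roughly one second
    fmt.Println("unbuffered send took:", time.Since(start))
}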

In a multithreaded environment, concurrent preemption means the output will not come out in order. So how do we make sure the main goroutine only exits after all the child goroutines have finished? There are two main ways:

The first way: use a blocking channel

package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Println("Current system core count:", runtime.NumCPU())
    runtime.GOMAXPROCS(runtime.NumCPU()) // set how many cores the program may use concurrently
    // define a blocking (unbuffered) channel
    c := make(chan bool)
    // start 10 goroutines here
    for i := 0; i < 10; i++ {
        go GoRun(c, i)
    }
    // we know there were 10 iterations, so receive 10 times here; main can only
    // finish after every GoRun goroutine has sent its value, otherwise it blocks
    for i := 0; i < 10; i++ {
        <-c
    }
}

func GoRun(c chan bool, index int) {
    a := 1
    // loop 10 million times, accumulating the final result
    for i := 0; i < 10000000; i++ {
        a += i
    }
    fmt.Println("Thread number:", index, a)
    // send into the blocking channel
    c <- true
}

Output:

Current system core count: 4
Thread number: 9 49999995000001
Thread number: 5 49999995000001
Thread number: 2 49999995000001
Thread number: 0 49999995000001
Thread number: 6 49999995000001
Thread number: 1 49999995000001
Thread number: 3 49999995000001
Thread number: 7 49999995000001
Thread number: 8 49999995000001
Thread number: 4 49999995000001

As the output shows, in a multithreaded environment the order of the printed lines has nothing to do with the order in which the goroutines were started; it is decided by CPU scheduling. Run the program a few more times and the order will be different each time.

The second way: use the synchronization mechanism (sync.WaitGroup)

package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    fmt.Println("Current system core count:", runtime.NumCPU())
    runtime.GOMAXPROCS(runtime.NumCPU()) // set how many cores the program may use concurrently
    /*
       A WaitGroup is a task group: Add registers tasks that still need to be done,
       and each finished task calls Done, which decreases the outstanding count by 1.
       The main goroutine then waits until no unfinished tasks remain before it may
       exit, giving the same synchronization effect as the blocking channel.
       Create an empty WaitGroup (task group):
    */
    wg := sync.WaitGroup{}
    wg.Add(10) // add 10 tasks to the task group
    // start 10 goroutines here
    for i := 0; i < 10; i++ {
        go GoRun(&wg, i)
    }
    wg.Wait()
}

/*
   The WaitGroup must be passed as a pointer, not a value copy, because the child
   goroutine needs to call Done on it. Just like decrementing an int field through
   a copied struct, modifying a copy would not affect the original, the count would
   never reach zero, and the program would deadlock with:
   fatal error: all goroutines are asleep - deadlock!
*/
func GoRun(wg *sync.WaitGroup, index int) {
    a := 1
    // loop 10 million times, accumulating the final result
    for i := 0; i < 10000000; i++ {
        a += i
    }
    fmt.Println("Thread number:", index, a)
    wg.Done()
}

Output:

Current system core count: 4
Thread number: 1 49999995000001
Thread number: 5 49999995000001
Thread number: 0 49999995000001
Thread number: 9 49999995000001
Thread number: 4 49999995000001
Thread number: 3 49999995000001
Thread number: 2 49999995000001
Thread number: 6 49999995000001
Thread number: 8 49999995000001
Thread number: 7 49999995000001

Everything above was based on a single channel. What should we do when we have multiple channels?

The Go language provides a construct called select. It looks very similar to switch, but while switch branches on ordinary values, select branches on operations over multiple channels.

Select

1. It can handle sends and receives on one or more channels

2. When multiple channels are ready, one of them is processed in random order

3. An empty select can be used to block the main function (see the sketch after this list)

4. It can also be used to set a timeout
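Point 3 deserves a quick illustration. This is a hedged sketch (the ticking loop is made up for the example): select{} has no case that can ever proceed, so it blocks forever, which keeps main alive while background goroutines continue to run.

package main

import (
    "fmt"
    "time"
)

func main() {
    // a background goroutine that keeps working forever
    go func() {
        for i := 0; ; i++ {
            fmt.Println("tick", i)
            time.Sleep(1 * time.Second)
        }
    }()
    // an empty select has no case that can ever proceed,
    // so main blocks here forever while the goroutine keeps running
    select {}
}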

Case one: use multiple channels to receive data:

package main

import (
    "fmt"
)

/* receiving and processing data */
func main() {
    // initialize the channels in one statement
    c1, c2 := make(chan int), make(chan string)
    // start a goroutine with an anonymous function
    go func() {
        /*
           An infinite loop around the select: this is the usual pattern for
           continuously receiving and processing incoming messages.
        */
        for {
            select {
            case v, ok := <-c1:
                if !ok {
                    break
                }
                fmt.Println("c1:", v)
            case v, ok := <-c2:
                if !ok {
                    break
                }
                fmt.Println("c2:", v)
            }
        }
    }()
    c1 <- 1
    c2 <- "liang"
    c1 <- 2
    c2 <- "xuli"
    // close the channels
    close(c1)
    close(c2)
}

Output:

c1: 1
c2: liang
c1: 2
c2: xuli

Case two: use select to send data:

package main

import (
    "fmt"
)

/* data sending: randomly send the digits 0 and 1 on the channel and print them on the receiving side */
func main() {
    c := make(chan int)
    num := 0
    // start a goroutine with an anonymous function to receive and print
    go func() {
        for v := range c {
            num++
            if num%16 == 0 { // start a new line every 16 numbers
                fmt.Println()
            }
            fmt.Print(v, " ")
        }
    }()
    for {
        select {
        case c <- 0:
        case c <- 1:
        }
    }
}

Output (only part of it is pasted here):

1 1 0 1 1 0 0 0 0 1 0 1 0 0 1 0
0 1 0 1 1 0 1 1 0 0 1 1 1 0 0 1
1 1 1 1 0 0 1 1 0 0 0 0 0 1 0 1
0 1 1 0 0 0 1 1 1 0 0 0 1 1 0 0
1 1 1 0 0 0 0 0 1 0 1 1 1 1 1 1
0 0 1 0 0 0 0 1 0 1 1 0 1 1 1 0
1 1 1 0 0 0 1 1 1 0 0 0 1 0 0 1
1 0 1 1 1 1 0 0 1 0 0 1 1 1 1 1
1 1 0 0 0 0 0 1 1 1 0 1 1 0 1 1
1 1 0 0 0 0 1 0 0 1 0 1 0 0 1 1
0 0 0 1 1 1 1 1 0 0 0 1 0 0 0 1
0 1 1 0 1 0 1 0 1 0 0 1 1 0 0 0
0 1 0 0 0 1 0 0 0 1 1 0 0 0 1 1
1 1 0 1 1 1 1 0 0 0 1 0 0 0 1 1
0 1 1 0 0 1 1 0 1 0 1 0 0 0 0 1
0 1 1 0 0 0 1 1 0 1 0 1 0 0 0 0
0 0 0 1 0 0 0 1 1 1 1 1 1 1 1 0

Case three: use select to set a timeout:

package main

import (
    "fmt"
    "time"
)

/* select timeout example */
func main() {
    c := make(chan bool)
    select {
    case v := <-c:
        fmt.Println(v)
    case <-time.After(3 * time.Second):
        fmt.Println("TimeOut!!!")
    }
}

Output:

TimeOut!!!
