Go Language Development (IX): Go Language Concurrent Programming


I. Goroutine Introduction

1. Concurrency and Parallelism

Parallelism: multiple instructions execute simultaneously on multiple processors at the same instant.
Concurrency: only one instruction executes at any given instant, but instructions from multiple tasks are interleaved so rapidly that, at the macro level, the tasks appear to run simultaneously; at the micro level they execute alternately in small time slices.
Parallelism requires a multiprocessor system, while concurrency can exist on both single-processor and multiprocessor systems. On a single processor, concurrency is the illusion of parallelism: parallelism truly performs multiple operations at the same time, whereas concurrency only appears to, by performing one operation per small time slice and switching rapidly between operations.

2. Coroutine Introduction

A coroutine is a lightweight user-space thread with the following characteristics:
A. Lightweight compared to an operating system thread
B. Non-preemptive multitasking: a coroutine yields control voluntarily
C. Implemented at the compiler/interpreter/virtual machine level, not by the operating system
D. Multiple coroutines may run on one or more threads
E. A subroutine is a special case of a coroutine
Coroutine support differs across languages:
A. C++ supports coroutines through Boost.Coroutine
B. Java does not support coroutines
C. Python implements coroutines with the yield keyword; Python 3.5 added native coroutine support with async def

3. Goroutine Introduction

In Go, adding the keyword go before a function call creates a concurrent unit of work; the new task is placed in a queue to await scheduling.
When a process starts, a main thread is created; when the main thread ends, the process terminates, so every process has at least one thread. The main function must therefore wait, or the process will exit before its goroutines finish.
Concurrency in Go is the ability to run a function independently of other functions. A goroutine is an independent unit of work. The Go runtime schedules goroutines onto logical processors, each of which is bound to a single operating system thread. A goroutine is therefore not a thread; it is a coroutine.
Process: a running program with its own separate address space
Thread: an execution path within a process; one process can have multiple threads
Logical processor: executes created goroutines; bound to one thread
Scheduler: part of the Go runtime; assigns goroutines to logical processors
Global run queue: holds all newly created goroutines
Local run queue: the goroutine queue of a logical processor
When a goroutine is created, it is stored in the global run queue, waiting for the Go scheduler to assign it to a logical processor. It is then placed in that processor's local run queue, where it waits to be executed.
Go concurrency is the mechanism of managing, scheduling, and executing goroutines.
By default, Go assigns one logical processor per available physical processor.
You can call runtime.GOMAXPROCS(n) at the beginning of the program to set the number of logical processors.
If you need to set the number of logical processors explicitly, the typical code is:
runtime.GOMAXPROCS(runtime.NumCPU())
For concurrency, the Go runtime handles scheduling itself; parallelism depends on the number of physical cores: with multiple cores goroutines can run in parallel as well as concurrently, while with a single core they can only run concurrently.

4. Goroutine Usage Example

As noted earlier, prefixing a function call with the go keyword creates a concurrent task. The following example starts two goroutines and waits for both:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		for i := 0; i < 10000; i++ {
			fmt.Printf("Hello,Go.This is %d\n", i)
		}
	}()
	go func() {
		defer wg.Done()
		for i := 0; i < 10000; i++ {
			fmt.Printf("Hello,World.This is %d\n", i)
		}
	}()
	wg.Wait()
}

sync.WaitGroup is a counting semaphore that makes the main goroutine wait until both goroutines finish executing; without it, main could end while the two goroutines were still running.
sync.WaitGroup is simple to use: Add(2) sets the counter to 2, each goroutine calls Done() when it finishes to decrement the counter by 1, and Wait() blocks while the counter is greater than 0, so main waits for both goroutines to complete before ending.

5. The Essence of Goroutines

A goroutine is a lightweight thread that consumes very few resources (Go sets each goroutine's initial stack size to 2 KB by default). Thread switching is controlled by the operating system, while goroutine switching is controlled by the Go runtime in user space.
A goroutine is essentially a coroutine.
Goroutines can run in parallel: multiple goroutines can run simultaneously on multiple processors, whereas classic coroutines run on only one processor at a time.
Goroutines communicate through channels, whereas coroutines communicate through yield and resume operations.

II. Goroutine Scheduling Mechanism

1. Thread Scheduling Models

High-level language implementations built on kernel threads generally use one of three scheduling models:
A. N:1 model: N user-space threads run on 1 kernel thread. Context switches are very fast, but a multicore system cannot be exploited.
B. 1:1 model: each user-space thread runs on its own kernel thread. This exploits multiple cores, but context switches are slow because every switch crosses between user mode and kernel mode.
C. M:N model: M user-space threads are multiplexed over N kernel threads, so any kernel thread can run any goroutine, combining the advantages of both models; the disadvantage is the complexity of the scheduler.

2. Go Scheduler Introduction

Go's smallest scheduling unit is the goroutine, but the operating system's smallest scheduling unit is still the thread, so the job of the Go scheduler is to schedule a large number of goroutines efficiently and fairly onto a limited number of threads.
Operating system scheduling is already efficient and fair (for example, the CFS scheduling algorithm). The core reason Go introduces goroutines is that they are lightweight: just as moving from processes to threads made the scheduling unit lighter, moving from threads to goroutines makes it lighter still, so tens or hundreds of thousands of goroutines can be created without worrying about exhausting memory. At the language level, Go uses goroutines to simplify concurrent programming as much as possible while achieving high performance (exploiting multiple cores, using epoll to handle network I/O efficiently, and implementing garbage collection).

3. Go Scheduler Implementation

Since Go 1.1, the Go scheduler has implemented the M:N G-P-M scheduling model: any number of user-space goroutines can run on any number of kernel threads, which makes context switching lightweight while still exploiting multiple cores.

To implement M:N scheduling, Go introduces three structures:
M: machine, an operating system kernel thread
G: a goroutine object; the G struct contains the stack, the instruction pointer (IP), and other information needed to schedule the goroutine. A G object is created each time go is called.
P: processor, the scheduling context and the key to the M:N model. An M must acquire a P in order to schedule Gs, so the number of Ps limits the maximum number of goroutines that Go runs simultaneously. Every running M must be bound to a P.
The number of Ps is GOMAXPROCS (at most 256), fixed at startup and generally not modified. The numbers of Ms and Ps are not necessarily equal (there may be dormant or surplus Ms). Each P holds a local queue of runnable Gs and can also draw from the global G queue.

Gs are exchanged between the global queue and each local queue according to certain policies.
Ps are stored in a global array (of size 256), and a global list of idle Ps is maintained.
Each call to go does the following:
A. Creates a G object and adds it to a local queue or the global queue
B. If there is an idle P, creates an M
C. The M starts an underlying thread that loops, executing whatever G tasks it can find
D. Gs are taken first from the local queue; if it is empty, from the global queue (transferring global-G-count/P-count at a time); failing that, half of another P's queue is stolen
E. Gs in a queue are executed in order (that is, the order in which go was called)
Creating an M works as follows:
A. Find an idle P; if there is none, return immediately.
B. Call the system API to create a thread (the call differs by operating system).
C. The created thread loops, executing G tasks.
If a system call or a long-running G occupies the kernel thread, the other Gs in that P's local queue are blocked, because a local queue executes its Gs in order. Therefore, when a Go program starts, it creates a monitoring thread called sysmon, which runs the following loop:
A. Record each P's G task count, schedtick; schedtick is incremented after each G finishes.
B. If schedtick has not incremented, the P has been running the same G; if this exceeds a threshold (10 ms), a preemption mark is set in that G's stack information.
C. When the running G next makes a non-inlined function call, it checks the mark, interrupts itself, adds itself to the tail of the queue, and the next G executes.
D. If the G makes no non-inlined function calls (small functions are sometimes optimized into inline functions), it keeps running until the goroutine ends on its own; if the goroutine is an infinite loop and GOMAXPROCS=1, the program blocks.

4. Preemptive scheduling

Go has no concept of time slices. If a G makes no system calls, performs no I/O operations, and does not block on a channel operation, the runtime relies on preemptive scheduling to pause the long-running G and dispatch the next one.
Except for extreme cases such as infinite loops, the Go runtime has a chance to preempt a G whenever the G calls a function. When a Go program starts, the runtime launches an M called sysmon (commonly called the monitoring thread), created to monitor and manage the program; sysmon can run without being bound to a P.
sysmon wakes every 20 µs to 10 ms and mainly performs the following tasks:
A. Releases span physical memory that has been idle for more than 5 minutes
B. Forces a garbage collection if none has run for more than 2 minutes
C. Adds long-unhandled netpoll results to the task queue
D. Issues preemption requests to long-running Gs
E. Recovers Ps that have been blocked in syscalls for a long time
If a G has been running for more than 10 ms, sysmon considers that it has run too long and issues a preemption request. Once the G's preemption flag is set to true, the runtime can preempt it at its next function or method call, moving it out of the running state and into the P's local run queue to wait for its next turn.

III. The runtime Package

1. Gosched

runtime.Gosched() yields the CPU time slice: it gives up the current goroutine's right to execute, lets the scheduler run other waiting tasks, and resumes execution from the same point at a later time.

2. Goexit

Calling runtime.Goexit() immediately terminates the current goroutine; the scheduler ensures that all registered defer calls are executed first.

3. Gomaxprocs

runtime.GOMAXPROCS(n) sets the maximum number of CPUs that can execute simultaneously and returns the previous setting.

IV. Channels

1. Channel Introduction

A channel is the conduit goroutines use to communicate: goroutines send and receive messages through it. A channel is a reference type and can be used as a parameter or a return value.

2. Creating a Channel

A channel is declared with the chan keyword; creating a channel requires specifying the type of data it sends and receives.
Use make to create a channel:

var channel chan int = make(chan int)
// or:
channel := make(chan int)

make accepts a second argument that specifies the size of the channel's buffer.

3. Channel Operations

// Send data: write to the channel
channel <- data
// Receive data: read from the channel
data := <-channel

Closing a channel: the sender closes the channel to notify receivers that no more data will be sent.
After the channel is closed, goroutines receiving from it obtain the zero value and false.
A receive loop can end on that condition:

for {
	v, ok := <-channel
	if ok == false {
		// The channel has been closed.
		break
	}
}

// Loop reading from the channel until it is closed.
for v := range channel {
	// Use the data read from the channel.
}

A channel usage example follows:

package main

import (
	"fmt"
	"time"
)

type Person struct {
	name    string
	age     uint8
	address Address
}

type Address struct {
	city     string
	district string
}

func SendMessage(person *Person, channel chan Person) {
	go func(person *Person, channel chan Person) {
		fmt.Printf("%s send a message.\n", person.name)
		channel <- *person
		for i := 0; i < 5; i++ {
			channel <- *person
		}
		close(channel)
		fmt.Println("channel is closed.")
	}(person, channel)
}

func main() {
	channel := make(chan Person, 1)
	harry := Person{
		"Harry",
		30,
		Address{"London", "Oxford"},
	}
	go SendMessage(&harry, channel)
	data := <-channel
	fmt.Printf("main goroutine receive a message from %s.\n", data.name)
	for {
		i, ok := <-channel
		time.Sleep(time.Second)
		if !ok {
			fmt.Println("channel is empty.")
			break
		} else {
			fmt.Printf("receive %s\n", i.name)
		}
	}
}

The results are as follows:

Harry send a message.
main goroutine receive a message from Harry.
receive Harry
receive Harry
receive Harry
channel is closed.
receive Harry
receive Harry
channel is empty.

When a channel is closed, the Go runtime does not immediately make false the second result of receive operations; it waits until the receiver has taken all remaining element values from the channel, which makes it safe to close a channel on the sending side.
A closed channel forbids data inflow: it becomes read-only. Data can still be received from a closed channel, but nothing more can be written to it.
Sending to a nil channel blocks forever; receiving from a nil channel blocks forever. Sending to an already closed channel causes a panic.
Receiving from a closed channel returns any values remaining in its buffer; once the buffer is empty, it returns the zero value.

4. Unbuffered Channels

When make creates a channel without a second argument, the channel's size is 0; this is called an unbuffered channel.
An unbuffered channel has no capacity to hold a value before it is received, so the sending goroutine and the receiving goroutine must synchronize: if both are not ready at the same time, whichever operates first blocks until the corresponding operation on the other side is ready. Unbuffered channels are therefore also known as synchronous channels.
An unbuffered channel never stores data; it only passes it along. Receiving from an unbuffered channel blocks the current goroutine until data flows in; sending on an unbuffered channel blocks the current goroutine until another goroutine takes the data away.

package main

import (
	"fmt"
)

func main() {
	ch := make(chan int)
	go func() {
		var sum int = 0
		for i := 0; i < 10; i++ {
			sum += i
		}
		// Send the result to the channel.
		ch <- sum
	}()
	// Receive the result from the channel.
	fmt.Println(<-ch)
}

Until the goroutine computing sum finishes and sends the value to the ch channel, fmt.Println(<-ch) blocks and the main goroutine does not terminate; only when the computation completes and the send on ch is ready does <-ch in main receive the computed value and print it.
Sending and receiving on an unbuffered channel must not both happen in the same goroutine, or a deadlock occurs. Typically, a goroutine is created first to operate on the channel (it blocks), and then the main goroutine performs the opposite operation on the channel, which unblocks it: each side's operation is what releases the other.

5. Buffered Channels

When make creates a channel with a specified size, the result is called a buffered channel.
With a buffered channel, sends succeed as long as the buffer is not full, and receives succeed as long as the buffer is not empty; a send blocks only when the buffer is full, and a receive blocks only when the buffer becomes empty.
Unlike unbuffered channels, buffered channels are less prone to deadlock and can safely be both sent to and received from within a single goroutine.
A buffered channel not only passes data along but also caches it; it blocks only when the buffer is full and can carry no more data.
A buffered channel is FIFO: values are received in the order they were sent.

6. Unidirectional Channels

Some special scenarios require restricting a channel to receive only (no sends), or to send only (no receives). A channel that can only send or only receive is called a unidirectional channel.
Defining a unidirectional channel only requires placing <- in the declaration:

var send chan<- int    // send-only
var receive <-chan int // receive-only

When the <- operator follows the chan keyword (chan<-), the channel is send-only, corresponding to send operations; when it precedes it (<-chan), the channel is receive-only, corresponding to receive operations.
Unidirectional channels are typically used as function or method parameters.

V. Channel Applications

1. Implementing a Broadcast

When a channel is closed, every goroutine blocked reading from that channel is released.

package main

import (
	"fmt"
	"time"
)

func notify(id int, channel chan int) {
	<-channel // unblocks when data arrives or the channel is closed
	fmt.Printf("%d receive a message.\n", id)
}

func broadcast(channel chan int) {
	fmt.Printf("Broadcast:\n")
	close(channel) // close the channel
}

func main() {
	channel := make(chan int, 1)
	for i := 0; i < 10; i++ {
		go notify(i, channel)
	}
	go broadcast(channel)
	time.Sleep(time.Second)
}
2. Using select

select listens on multiple channels at once for sends and receives; when any case is ready, it executes. If no case is ready and a default clause is present, default executes; if there is no default, the program blocks. The syntax of select is as follows:

select {
case communication clause:
	statement(s)
case communication clause:
	statement(s)
	/* any number of cases may be defined */
default: /* optional */
	statement(s)
}

In select multiplexing:
A. Every case must be a communication (a channel send or receive)
B. All channel expressions are evaluated
C. All expressions to be sent are evaluated
D. If any one communication can proceed, it executes and the others are ignored
E. If multiple cases can run, select chooses one uniformly at random to execute; the others do not run
F. Otherwise, if there is a default clause, it executes; if there is no default, select blocks until some communication can proceed, and Go does not re-evaluate the channels or values

package main

import (
	"fmt"
	"time"
)

func doWork(channels *[10]chan int) {
	for {
		select {
		case x1 := <-channels[0]:
			fmt.Println("receive x1: ", x1)
		case x2 := <-channels[1]:
			fmt.Println("receive x2: ", x2)
		case x3 := <-channels[2]:
			fmt.Println("receive x3: ", x3)
		case x4 := <-channels[3]:
			fmt.Println("receive x4: ", x4)
		case x5 := <-channels[4]:
			fmt.Println("receive x5: ", x5)
		case x6 := <-channels[5]:
			fmt.Println("receive x6: ", x6)
		case x7 := <-channels[6]:
			fmt.Println("receive x7: ", x7)
		case x8 := <-channels[7]:
			fmt.Println("receive x8: ", x8)
		case x9 := <-channels[8]:
			fmt.Println("receive x9: ", x9)
		case x10 := <-channels[9]:
			fmt.Println("receive x10: ", x10)
		}
	}
}

func main() {
	var channels [10]chan int
	go doWork(&channels)
	for i := 0; i < 10; i++ {
		channels[i] = make(chan int, 1)
		channels[i] <- i
	}
	time.Sleep(time.Second * 5)
}

The results are as follows:

receive x4:  3
receive x10:  9
receive x9:  8
receive x5:  4
receive x2:  1
receive x7:  6
receive x8:  7
receive x1:  0
receive x3:  2
receive x6:  5
VI. Deadlock

A deadlock in a Go program means that every goroutine is waiting for a resource to be released.
In general, the error message for deadlocks is as follows:
fatal error: all goroutines are asleep - deadlock!
Goroutine deadlocks occur for the following reasons:
A. Operating on an unbuffered channel within a single goroutine always deadlocks
B. An unbuffered channel with inflow but no outflow, or outflow but no inflow, deadlocks
Accordingly, the ways to resolve such deadlocks are:
A. Receive the unbuffered channel's data, or send data to it, from another goroutine
B. Use a buffered channel

