Go: "Do not communicate by sharing memory; instead, share memory by communicating." This popular saying in the Go community refers to the channel, which acts as a type-safe conduit between goroutines in Go concurrent programming.

1. Concurrent synchronization with goroutines and sync.Mutex

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

var count int = 0

func counter(lock *sync.Mutex) {
	lock.Lock()
	count++
	fmt.Println(count)
	lock.Unlock()
}

func main() {
	lock := &sync.Mutex{}
	for i := 0; i < 10; i++ {
		// Pass a pointer so the lock inside counter and the lock
		// used here are the same mutex.
		go counter(lock)
	}
	for {
		lock.Lock()
		c := count
		lock.Unlock()
		// Give the time slice to the other goroutines so they can run.
		runtime.Gosched()
		if c >= 10 {
			fmt.Println("Goroutine End")
			break
		}
	}
}
```
2. Goroutine communication through channels

A channel is typed and can be understood as a type-safe pipeline. A simple channel example:

```go
package main

import "fmt"

func Count(ch chan int) {
	ch <- 1
	fmt.Println("Counting")
}

func main() {
	chs := make([]chan int, 10)
	for i := 0; i < 10; i++ {
		chs[i] = make(chan int)
		go Count(chs[i])
		fmt.Println("Count", i)
	}
	for i, ch := range chs {
		<-ch
		fmt.Println("Counting", i)
	}
}
```

3. Go's select is a language-level, built-in non-blocking construct

```go
select {
case <-chan1:
	// If data was successfully read from chan1, run this branch.
case chan2 <- 1:
	// If 1 was successfully written to chan2, run this branch.
default:
	// If neither succeeded, fall through to the default branch.
}
```

As you can see, select does not look like switch: it is not followed by a condition expression but goes straight to the case statements, and each case must be an operation on a channel. In the example above, the first case tries to read one value from chan1 (discarding it directly), the second tries to write the integer 1 to chan2, and if neither succeeds, control reaches the default branch.
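To make the select skeleton above concrete, here is a minimal runnable sketch; the `trySelect` helper and its message strings are illustrative assumptions, not part of the original text:

```go
package main

import "fmt"

// trySelect reports which channel operation succeeded, without blocking:
// exactly the read / write / default pattern shown in the skeleton above.
func trySelect(ch1 <-chan int, ch2 chan<- int) string {
	select {
	case v := <-ch1:
		return fmt.Sprintf("read %d from ch1", v)
	case ch2 <- 1:
		return "wrote 1 to ch2"
	default:
		return "no channel ready"
	}
}

func main() {
	ch1 := make(chan int, 1)
	ch2 := make(chan int) // unbuffered with no receiver: the send case is never ready
	fmt.Println(trySelect(ch1, ch2)) // no channel ready
	ch1 <- 42
	fmt.Println(trySelect(ch1, ch2)) // read 42 from ch1
}
```

Because ch2 has no receiver, only the receive case can fire, which makes the output deterministic; with several ready cases, select would pick one at random.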
4. Reading and writing a buffered channel

The channels we created previously were unbuffered, which is fine for passing a single value but not appropriate when large amounts of data must be transmitted continuously. Next we show how to use a buffered channel to get the effect of a message queue. Creating a buffered channel is also very easy:

```go
c := make(chan int, 1024)
```

Pass the buffer size as the second argument to make(). The example above creates an int channel of size 1024: even with no reader, the writer can keep writing to the channel without blocking until the buffer fills up. Reading from a buffered channel works exactly the same as reading from a regular unbuffered one, but we can also use the range keyword for a more convenient read loop:

```go
for i := range c {
	fmt.Println("Received:", i)
}
```

5. Simulating a producer and consumer with goroutines

```go
package main

import (
	"fmt"
	"time"
)

func Producer(queue chan<- int) {
	for i := 0; i < 10; i++ {
		queue <- i
	}
}

func Consumer(queue <-chan int) {
	for i := 0; i < 10; i++ {
		v := <-queue
		fmt.Println("Receive:", v)
	}
}

func main() {
	queue := make(chan int, 1)
	go Producer(queue)
	go Consumer(queue)
	time.Sleep(1e9) // let Producer and Consumer complete
}
```
6. Creating a channel with make

make(chan int) creates a synchronous (unbuffered) channel: a write completes only when the matching read completes. make(chan int, 10) creates a buffered channel, which can be written 10 times before the writer blocks.
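A small runnable sketch of the difference just described; the `fill` helper is an illustrative assumption:

```go
package main

import "fmt"

// fill sends n values into c; with a buffered channel whose capacity is at
// least n, none of these sends blocks even though no reader exists yet.
func fill(c chan int, n int) {
	for i := 0; i < n; i++ {
		c <- i
	}
}

func main() {
	sc := make(chan int) // synchronous: a send blocks until a receive happens
	go func() { sc <- 1 }()
	fmt.Println("from synchronous channel:", <-sc)

	bc := make(chan int, 10) // buffered: holds up to 10 pending values
	fill(bc, 10)
	fmt.Println("buffered:", len(bc), "of", cap(bc)) // buffered: 10 of 10
}
```

Note that the synchronous send has to run in its own goroutine; done in main with no receiver ready, it would deadlock.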
7. Randomly writing 0 or 1 to a channel

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 1)
	for {
		// Keep writing 0 or 1 to the channel; when several cases are
		// ready, select picks one at random.
		select {
		case ch <- 0:
		case ch <- 1:
		}
		// Take the value back out of the channel.
		i := <-ch
		fmt.Println("Value received:", i)
		time.Sleep(1e8)
	}
}
```

8. Channels with buffers: test code

Buffered channels were already explained in section 4 above; the following is a complete test program:

```go
package main

import (
	"fmt"
	"time"
)

func A(c chan int) {
	for i := 0; i < 10; i++ {
		c <- i
	}
}

func B(c chan int) {
	for val := range c {
		fmt.Println("Value:", val)
	}
}

func main() {
	chs := make(chan int, 10)
	// The channel operations here must be placed in goroutines; otherwise
	// they would block the main goroutine and the program would exit.
	// Whether the channel is synchronous or buffered, wrap the reader and
	// writer functions with go.
	go A(chs)
	go B(chs)
	time.Sleep(1e9)
}
```
9. How many threads does Go create for multiple goroutines?

```go
package main

import "os"

func main() {
	for i := 0; i < 20; i++ {
		go func() {
			for {
				b := make([]byte, 10)
				os.Stdin.Read(b) // blocks
			}
		}()
	}
	select {}
}
```

This program produces 21 threads: the runtime scheduler (src/pkg/runtime/proc.c) maintains a thread pool, and when a goroutine blocks, the scheduler creates a new thread to serve the other ready goroutines. GOMAXPROCS controls how many threads all the non-blocked goroutines are multiplexed onto.
10. Channels can themselves be passed through channels; in Go a channel is a native type, just like map and slice

It is important to note that in Go the channel itself is a native type, just like the map type, so once defined, a channel can itself be passed through another channel. We can use this feature to implement the piping (pipe) facility that is very common on *nix systems. Pipelines are also widely used as a design pattern: for example, when processing data we can design a pipeline so that additional processing stages can be plugged in easily. Below we use the fact that channels can be passed around to implement our pipeline. To simplify the presentation, we assume the data passed through the pipeline is just an integer; in a real-world scenario it would usually be a block of data. First, define the basic data structure:

```go
type PipeData struct {
	value   int
	handler func(int) int
	next    chan int
}
```

Then we write a generic handler function. As long as we define a series of PipeData values and hand them together to this function, we achieve a streaming process:

```go
func handle(queue chan *PipeData) {
	for data := range queue {
		data.next <- data.handler(data.value)
	}
}
```
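The handle stage above can be driven like this; the PipeData and handle definitions are repeated so the sketch compiles on its own, while the `runPipe` driver and the doubling handler are illustrative assumptions:

```go
package main

import "fmt"

type PipeData struct {
	value   int
	handler func(int) int
	next    chan int
}

// handle is the generic stage from the text: it applies each item's handler
// and forwards the result to that item's next channel.
func handle(queue chan *PipeData) {
	for data := range queue {
		data.next <- data.handler(data.value)
	}
}

// runPipe pushes values through one handle stage and collects the results.
func runPipe(values []int, h func(int) int) []int {
	queue := make(chan *PipeData, len(values)) // buffered: sends below never block
	out := make(chan int)
	go handle(queue)
	for _, v := range values {
		queue <- &PipeData{value: v, handler: h, next: out}
	}
	close(queue) // lets handle's range loop terminate
	results := make([]int, 0, len(values))
	for range values {
		results = append(results, <-out)
	}
	return results
}

func main() {
	double := func(n int) int { return n * 2 }
	fmt.Println(runPipe([]int{1, 2, 3}, double)) // [2 4 6]
}
```

Because the queue is drained in FIFO order by a single handle goroutine, the results come back in input order.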
11. The channels we create are bidirectional by default; a one-way channel on its own is meaningless, but we can convert a bidirectional channel into a one-way one through a type conversion.

```go
var ch1 chan int       // ch1 is a normal (bidirectional) channel
var ch2 chan<- float64 // ch2 is one-way, only used to write float64 data
var ch3 <-chan int     // ch3 is one-way, only used to read int data
```

Since the channel is a native type, it supports not only being passed around but also type conversion. Only after introducing the concept of the one-way channel can the reader understand the point of channel type conversion: converting between one-way and bidirectional channels. For example:

```go
ch4 := make(chan int)
ch5 := (<-chan int)(ch4) // ch5 is a one-way, read-only channel
ch6 := (chan<- int)(ch4) // ch6 is a one-way, write-only channel
```

Based on ch4, we initialized two one-way channels by type conversion: the read-only ch5 and the write-only ch6. From a design standpoint, all code should follow the principle of least privilege. This avoids granting unnecessarily broad powers, which in turn can lead to a runaway program. Readers who have written C++ will surely be reminded of const pointers: a non-const pointer has all the capabilities of a const pointer, and marking a pointer const explicitly tells the function implementation not to try to modify what it points to. A one-way channel serves the same contractual role. Let's look at a use of a one-way channel:

```go
func Parse(ch <-chan int) {
	for value := range ch {
		fmt.Println("Parsing value", value)
	}
}
```

Unless the implementation of this function shamelessly resorts to a type conversion, it has no way to write to ch for any reason, which keeps unexpected data out of ch; the principle of least privilege is thus well practiced.
12. A read-only one-way channel example following the principle of least privilege

```go
package main

import (
	"fmt"
	"time"
)

// sCh accepts a receive-only channel: unless it resorts to a direct type
// conversion, it can only read data from the channel.
func sCh(ch <-chan int) {
	for val := range ch {
		fmt.Println(val)
	}
}

func main() {
	// Create a channel with a buffer of 100, so we can write directly
	// without blocking the main goroutine.
	dch := make(chan int, 100)
	for i := 0; i < 100; i++ {
		dch <- i
	}
	// Pass it in as a read-only channel.
	go sCh(dch)
	time.Sleep(1e9)
}
```
13. Closing a channel, and checking whether a channel is closed

Closing a channel is very simple: use Go's built-in close() function directly: close(ch). Having described how to close a channel, one question remains: how do we tell whether a channel has been closed? We can use the two-value form of the receive: x, ok := <-ch. This usage is similar to looking up a key in a map: just check the second, boolean return value. If it is false, ch has been closed (and its buffer drained).
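A runnable sketch of the two-value receive just described; the `drain` helper is an illustrative assumption:

```go
package main

import "fmt"

// drain reads until the channel reports closed, returning the values read.
func drain(ch chan int) []int {
	var got []int
	for {
		x, ok := <-ch
		if !ok { // ok is false once ch is closed and its buffer is empty
			return got
		}
		got = append(got, x)
	}
}

func main() {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	close(ch)
	fmt.Println(drain(ch)) // [1 2]: buffered values survive the close
	x, ok := <-ch
	fmt.Println(x, ok) // 0 false: the zero value, with ok reporting the close
}
```

Note that buffered values written before close() are still delivered; ok only turns false after the buffer is exhausted.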
14. Multi-core parallel programming in Go: a setting high-performance concurrent programs must make

GOMAXPROCS should be set to the number of cores, which can be obtained from runtime.NumCPU(). When performing expensive computational tasks, we want to make the most of the multi-core capability common in modern servers and parallelize the work as much as possible, reducing the overall computation time. To do that we need to know the number of CPU cores and explicitly decompose the computation into multiple goroutines that run in parallel. Let's simulate a completely parallel task: computing the sum of N integers. We can split the integers into M slices, where M is the number of CPUs, let each CPU compute the part assigned to it, and then add the per-CPU results together to obtain the sum of all N integers:

```go
type Vector []float64

// DoSome is the compute task assigned to each CPU.
func (v Vector) DoSome(i, n int, u Vector, c chan int) {
	for ; i < n; i++ {
		v[i] += u.Op(v[i])
	}
	c <- 1 // signal the coordinator: my part of the computation is done
}

const NCPU = 16 // assume there are 16 cores in total

func (v Vector) DoAll(u Vector) {
	c := make(chan int, NCPU) // receives the completion signal from each CPU
	for i := 0; i < NCPU; i++ {
		go v.DoSome(i*len(v)/NCPU, (i+1)*len(v)/NCPU, u, c)
	}
	// Wait for all the CPU tasks to complete.
	for i := 0; i < NCPU; i++ {
		<-c // each value received means one CPU slice has finished
	}
	// At this point all calculations have ended.
}
```

The two functions look perfectly reasonable: DoAll() splits the task by the number of CPU cores and then launches multiple goroutines to perform these computations in parallel. Can the total computation time be cut to 1/N of the original? Not necessarily. If you time it with a stopwatch (properly, you should use the benchmark method described in section 7.8), you will find the total execution time is not significantly shortened.
Observing the CPUs while it runs, you will find that although we have 16 CPU cores, only one core is busy during the computation, a problem that confuses many Go beginners. The official answer is that the Go compiler of that version was not smart enough to discover and exploit the benefits of multiple cores. Although we do create multiple goroutines, and judging from their state these goroutines are running "in parallel", they all actually run on the same CPU core: when one goroutine gets a time slice to execute, the other goroutines wait. It follows that although goroutines simplify the writing of parallel code, they do not by themselves make the program run faster than a single-threaded one. Until the Go language is upgraded to a version that supports multiple CPUs by default, we can control how many CPU cores are used through GOMAXPROCS: either by setting the GOMAXPROCS environment variable directly, or by invoking the following statement in code before starting the goroutines to use 16 CPU cores: runtime.GOMAXPROCS(16). How many cores should it be set to? The runtime package provides another function, runtime.NumCPU(), to obtain the number of cores. As you can see, Go already senses all of this environment information; in a later release it can use it to schedule goroutines across all CPU cores, maximizing the server's multi-core computing power. Abandoning GOMAXPROCS is only a matter of time.
15. Actively yielding the time slice to other goroutines, to resume the current goroutine at some future point

Within each goroutine we can control when to actively hand the time slice to other goroutines, which is implemented with the Gosched() function in the runtime package. In fact, if you want finer-grained control over goroutine behavior, you must gain a deeper understanding of the specific functionality provided by the runtime package in the Go development kit.
16. sync in Go

Go advocates using communication to share data, rather than sharing data to communicate. However, given that even when channels are successfully used as the means of communication, the problem of sharing data among multiple goroutines cannot always be avoided, the Go designers, while placing high hopes on the channel, also provide a proper resource-locking scheme.
17. Locks in sync

The sync package provides two lock types, sync.Mutex and sync.RWMutex. For both, every Lock() or RLock() call must be matched by an Unlock() or RUnlock() call; failing to do so may starve all the goroutines waiting for the lock, or even deadlock them. The typical usage pattern of a lock is as follows:

```go
var l sync.Mutex

func foo() {
	l.Lock()
	// The deferred call runs when the function exits, releasing the
	// locked resource.
	defer l.Unlock()
	// ...
}
```

Here we once again witness the elegance of Go's defer keyword.
18. Globally unique operations: sync.Once.Do() and the sync/atomic sub-package

For code that should run only once from a global perspective, such as a global initialization operation, Go provides the Once type to guarantee a globally unique operation, as follows:

```go
var a string
var once sync.Once

func setup() {
	a = "hello, world"
}

func doprint() {
	once.Do(setup)
	print(a)
}

func twoprint() {
	go doprint()
	go doprint()
}
```

If this code did not use Once, setup() could be called by every goroutine, which is at the very least superfluous for this example. In practice we face this situation often, and the Go standard library introduced the Once type to solve it. Once's Do() method guarantees that the specified function (here setup()) is called only once at the global scope; all other goroutines that reach the once.Do(setup) statement block until the globally unique Do() call has finished, and only then continue. This mechanism rather lightly solves a problem that in other languages forces developers to design and implement the "once" effect themselves, and it shows how much thought Go gives to concurrent programming. Without once.Do(), we would most likely add a global bool variable, set it to true on the last line of setup(), and before every call to setup() check whether the bool is already true: if it is still false, call setup() once; otherwise skip it.
The hand-written implementation would look like this:

```go
var done bool = false

func setup() {
	a = "hello, world"
	done = true
}

func doprint() {
	if !done {
		setup()
	}
	print(a)
}
```

This code looks reasonable at first, but a closer look reveals a problem: checking and setting done is not an atomic operation, so setup() may be called multiple times, failing the goal of executing exactly once globally. The subtlety of this problem is exactly what makes the Once type valuable. To better control atomic operations under parallelism, the sync package also contains an atomic sub-package that provides atomic manipulation functions for some basic data types, such as the following:

```go
func CompareAndSwapUint64(addr *uint64, old, new uint64) (swapped bool)
```

which compares and exchanges two uint64 values in one atomic step, so developers no longer have to add a lock specifically for such operations.
Summary of Go language concurrency programming