If it weren't for my quest for truly parallel threads, I wouldn't have realized how fascinating Go is.
Go supports concurrency at the language level, unlike the languages where we had to create new threads with a thread library and share data through thread-safe queue libraries.
Here are my introductory notes from learning it.
First, parallelism != concurrency; the two are different. For reference, see: http://concur.rspace.googlecode.com/hg/talk/concur.html
Goroutines, channels, and deadlocks in the Go language
Goroutine
Go has a concept called the goroutine, which is similar to the threads we know, but much lighter.
In the following program, we run two loop functions serially:

    func loop() {
        for i := 0; i < 10; i++ {
            fmt.Printf("%d ", i)
        }
    }

    func main() {
        loop()
        loop()
    }
There is no doubt that the output will be like this:
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
Let's run one of the loops in a goroutine. We can use the keyword go to define and start a goroutine:

    func main() {
        go loop() // start a goroutine
        loop()
    }
This time the output has become:
0 1 2 3 4 5 6 7 8 9
But why did it print only one pass? Our main goroutine ran one pass, and we also started a goroutine to run another.
It turns out that the main function exited before the goroutine had time to run its loop.
The main function exits too fast; we have to stop it from exiting prematurely. One way is to make main wait for a moment:

    func main() {
        go loop()
        loop()
        time.Sleep(time.Second) // pause for one second
    }
This time both passes were printed, so the goal is achieved.
But waiting like this is not a good approach. It would be better if the goroutine, when it finishes, could tell the main goroutine "Hey, I'm done!"; in other words, block the main goroutine until then. Recall how we wait for all threads to finish in Python:

    for thread in threads:
        thread.join()

Yes, we need something similar to join to block the main goroutine. That is the channel.
Channel
What is a channel? Simply put, it is the thing goroutines use to communicate with each other. Similar to pipes on Unix (which pass messages between processes), channels send and receive messages between goroutines; in effect, they are how goroutines share memory.
Use make to create a channel:

    var channel chan int = make(chan int)
    // or
    channel := make(chan int)
So how do we send messages to a channel and receive messages from it? An example:

    func main() {
        var messages chan string = make(chan string)
        go func(message string) {
            messages <- message // send the message
        }("ping!")
        fmt.Println(<-messages) // receive the message
    }
By default, sending to and receiving from a channel both block (this is called an unbuffered channel; we will get to the concept of buffering later, let's deal with blocking first).
In other words, an unbuffered channel suspends the current goroutine on both send and receive, unless the other end is ready.
For example, consider the following main function and foo function:

    var ch chan int = make(chan int)

    func foo() {
        ch <- 0 // send to ch; if no other goroutine receives it, foo hangs until main takes the 0
    }

    func main() {
        go foo()
        <-ch // receive from ch; if there is no data in ch yet, main hangs until foo sends some
    }
Since a channel can block the current goroutine, let's return to the problem from the goroutine section, "how does a goroutine tell the main goroutine that it has finished?": use a channel to tell it:

    var complete chan int = make(chan int)

    func loop() {
        for i := 0; i < 10; i++ {
            fmt.Printf("%d ", i)
        }
        complete <- 0 // finished; send a message
    }

    func main() {
        go loop()
        <-complete // block here until the goroutine finishes and sends the message
    }

If we didn't block the main goroutine with the channel, it would exit prematurely and the loop would never get to finish.
In fact, an unbuffered channel never stores data; it is only responsible for the flow of data. Why?

- Receiving from an unbuffered channel blocks the current goroutine until data flows in from the other end.
- Sending to an unbuffered channel blocks the current goroutine until another goroutine takes the data.

You can test this yourself: the size of an unbuffered channel, len(channel), is always 0.
But what if data is supposed to flow through the channel and we never send any, or we keep trying to receive from an empty channel into which nothing flows? We get a deadlock.
Deadlock
An example of a deadlock:
    func main() {
        ch := make(chan int)
        <-ch // block the main goroutine; channel ch never gets any data
    }

Run this program and Go will report an error like:
fatal error: all goroutines are asleep - deadlock!
What is a deadlock? In operating-system terms: all threads or processes are waiting for resources to be released. In this program there is only one goroutine, so when it sends to or receives from the channel, it blocks itself on that dead channel. All goroutines (in fact, just the main one) are waiting for the channel to open up; since no one will ever take the data, the channel never opens, and that is a deadlock.
I find deadlock a very interesting topic. Here are a few examples:
Deadlock example 1: operating on an unbuffered channel from only a single goroutine always deadlocks. For example, if you only touch the channel inside the main function:

    func main() {
        ch := make(chan int)
        ch <- 1 // send 1 into the channel, blocking the current goroutine; no one will ever take the data
        fmt.Println("this line will never run") // Go reports a deadlock before this line executes
    }
The following is also an example of a deadlock:

    var ch1 chan int = make(chan int)
    var ch2 chan int = make(chan int)

    func say(s string) {
        fmt.Println(s)
        ch1 <- <-ch2 // ch1 waits for data to flow out of ch2
    }

    func main() {
        go say("hello")
        <-ch1 // block the main goroutine
    }

The main goroutine waits for data to flow out of ch1, ch1 waits for data to flow out of ch2, and ch2 waits for data to flow in. Both goroutines are waiting, i.e., deadlock.
To summarize: why does a deadlock occur? A deadlock occurs when a goroutine blocks on an unbuffered channel that never gets the matching send or receive. Put another way, among all the goroutines the program starts, every send on an unbuffered channel in one goroutine must have a matching receive in another. So the following example is guaranteed to deadlock:

    c, quit := make(chan int), make(chan int)

    go func() {
        c <- 1    // nothing ever receives from c, so this blocks the current goroutine
        quit <- 0 // never reached: quit never gets data written to it
    }()

    <-quit // main waits for data to come out of quit

Careful analysis shows why: the main goroutine waits for data to flow out of quit, quit waits for data to be written, and the anonymous function is blocked on channel c. All goroutines are waiting, so it deadlocks.
In a nutshell, there are two goroutines, and the data sent into channel c by the func goroutine is never received by the main goroutine, so it is bound to deadlock.
But is it true that every unmatched access to an unbuffered channel deadlocks?
The following is a counterexample:
    func main() {
        c := make(chan int)
        go func() {
            c <- 1
        }()
    }

The program exits normally. This does not contradict our summary; it is the same awkward reason as before: main does not wait for other goroutines and exits first. No data ever has to flow into channel c, the main goroutine never blocks, so no deadlock error occurs.
So what are the fixes for a deadlock?
The simplest: take away the data no one has taken, or send the data that is missing. Since an unbuffered channel cannot carry data, hurry up and receive it!
Specifically, for deadlock example 3 above, you can avoid the deadlock like this:

    c, quit := make(chan int), make(chan int)

    go func() {
        c <- 1
        quit <- 0
    }()

    <-c // receive the data in c!
    <-quit
Another workaround is to buffer the channel, i.e., give c a buffer with room for data:

    c := make(chan int, 1)

Now c can cache one piece of data. Sending one item does not suspend the current goroutine; sending a second one does, until the first item is taken away by another goroutine. In other words, the channel only blocks once it reaches capacity; below capacity it does not block.
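Here is a small sketch of mine illustrating that behavior with len and cap:

```go
package main

import "fmt"

func main() {
	c := make(chan int, 1) // buffer with room for one item

	c <- 1                      // fits in the buffer; does not block
	fmt.Println(len(c), cap(c)) // 1 1: one item stored, capacity one

	fmt.Println(<-c)            // 1: take the item back out
	fmt.Println(len(c), cap(c)) // 0 1: the buffer is empty again
}
```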
Very similar to the Queue in our Python, isn't it?
Order of data in and out of an unbuffered channel
We already know that an unbuffered channel never stores data; whatever flows in must flow out.
Observe the following program:
    var ch chan int = make(chan int)

    func foo(id int) { // id identifies this goroutine
        ch <- id
    }

    func main() {
        // start 5 goroutines
        for i := 0; i < 5; i++ {
            go foo(i)
        }
        // receive data from the channel
        for i := 0; i < 5; i++ {
            fmt.Print(<-ch)
        }
    }
We started 5 goroutines and then received the data one by one. Breaking the whole run down, data from the 5 goroutines flows through channel ch and main prints it. Macroscopically, the unbuffered channel passes data first come, first served (though the exact order in which the goroutines get scheduled is up to the runtime), but it stores nothing; it is only responsible for the flow of data.
Buffered Channel
We finally reach this topic. The English term for a cached channel is more expressive: buffered channel.
Buffered means the channel can not only let data flow through but also cache it. It has a capacity: when you send an item, you can simply leave it in the channel without blocking the current goroutine to wait for someone to take it.
When a buffered channel is full, it blocks again, because it cannot carry any more data: "you have to take some data out before more data can flow in."
When declaring a channel, we give make a second parameter to specify its capacity (the default is 0, i.e., unbuffered):

    var ch chan int = make(chan int, 2) // sending 2 elements will not block the current goroutine; once 2 are stored, further sends block
In the following example, the buffered channel ch can take in 3 elements without blocking:

    func main() {
        ch := make(chan int, 3)
        ch <- 1
        ch <- 2
        ch <- 3
    }
If you try to send more data, channel ch will block the main goroutine and a deadlock will be reported.
In other words, a buffered channel blocks when it is filled to capacity.
In fact, a buffered channel is FIFO; we can think of it as a thread-safe queue:

    func main() {
        ch := make(chan int, 3)
        ch <- 1
        ch <- 2
        ch <- 3
        fmt.Println(<-ch) // 1
        fmt.Println(<-ch) // 2
        fmt.Println(<-ch) // 3
    }
Reading channel data and closing channels
You may find it tedious that the code above reads the channel one item at a time. Go lets us use range to read instead:

    func main() {
        ch := make(chan int, 3)
        ch <- 1
        ch <- 2
        ch <- 3
        for v := range ch {
            fmt.Println(v)
        }
    }
If you run the code above, a deadlock error is reported, because range does not stop reading until the channel is closed. That is, once the buffered channel runs dry, range blocks the current goroutine, hence deadlock.
So we have to avoid this situation. An easy idea is to stop reading when the channel is empty:

    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    ch <- 3
    for v := range ch {
        fmt.Println(v)
        if len(ch) <= 0 { // if the amount of buffered data is 0, break out of the loop
            break
        }
    }
The method above produces the right output, but note that checking the channel's size is not a reliable way to drain all the data while sends and receives are happening concurrently. It works in this example only because we put all the data into ch first and are now taking it out one by one, so the channel's size strictly decreases.
Another way is to explicitly close the channel:
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2
    ch <- 3
    close(ch) // explicitly close the channel
    for v := range ch {
        fmt.Println(v)
    }

A closed channel forbids data from flowing in; it is read-only. We can still take the remaining data out of a closed channel, but we can no longer write to it.
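Go also offers a two-value receive form that reports whether the value came from data still in the channel or from a closed, drained channel; this isn't covered above, but here is a short sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch)

	fmt.Println(<-ch) // 1: data left in a closed channel can still be received
	fmt.Println(<-ch) // 2

	v, ok := <-ch      // drained and closed: receive no longer blocks
	fmt.Println(v, ok) // 0 false: the zero value, with ok == false
}
```

This is also how range knows when to stop: it ends the loop when a receive reports ok == false.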
Waiting for multiple goroutines
Now, back to the initial problem: use channels to block the main goroutine and wait until all goroutines have finished.
This is a common pattern: start many small goroutines, let each run on its own, and have them report back to the main goroutine when they finish.
Let's discuss two versions of this scenario:

1. Block the main goroutine with just a single unbuffered channel.
2. Use a buffered channel whose capacity equals the number of goroutines.
For scenario 1, the example code looks roughly like this:

    var quit chan int // only one channel

    func foo(id int) {
        fmt.Println(id)
        quit <- 0 // ok, finished
    }

    func main() {
        count := 1000
        quit = make(chan int) // unbuffered
        for i := 0; i < count; i++ {
            go foo(i)
        }
        for i := 0; i < count; i++ {
            <-quit
        }
    }
For scenario 2, switch the channel to a buffer of 1000:

    quit = make(chan int, count) // capacity 1000
In fact, the only difference is buffered versus unbuffered.
Both can accomplish the task; either works.

- With the unbuffered channel, the batch of data "flows in and out" one item at a time.
- With the buffered channel, the data is stored up and then flows out together.