The Go Programming Language (Part 3)


This article is a translation of Rob Pike's slide tutorial "The Go Programming Language, Part 3 (Updated June 2011)". Because the tutorial's latest update predates the Go 1 release, some of its content differs slightly from the Go 1 language specification; I annotate those places where appropriate.

Part III outline

    • Concurrency and communication
      • Goroutines
      • Channels
      • Concurrency-related topics

Concurrency and communication: goroutines

Goroutines

Terminology:

There are many terms for "things that run concurrently": processes, threads, coroutines, POSIX threads, NPTL threads, lightweight processes, and so on. They all mean slightly different things, and none of them means exactly what Go's concurrency means.

So we introduce a new term: goroutine.

Definition

A goroutine is a Go function or method that runs concurrently in the same address space as other goroutines. A running program consists of one or more goroutines.

A goroutine is not the same as a thread, a coroutine, a process, and so on. It is a goroutine.

Note: concurrency and parallelism are different concepts. If you don't know the difference, look it up.

Concurrency has plenty of pitfalls; we will come back to them later. For now, assume it works as advertised.

Start a Goroutine

Call a function or method and prefix the call with the keyword go:

func IsReady(what string, minutes int64) {
    time.Sleep(minutes * 60 * 1e9) // units are nanoseconds
    fmt.Println(what, "is ready")
}

go IsReady("tea", 6)
go IsReady("coffee", 2)
fmt.Println("I'm still waiting ...")

This prints:

I'm still waiting ...   (immediately)
coffee is ready         (2 minutes later)
tea is ready            (6 minutes later)

A few simple facts

Goroutines are cheap.

A goroutine exits by returning from its top-level function, or simply by falling off the end of it.

Goroutines can run concurrently on different CPUs, sharing memory.

You don't have to worry about stack size.

Stack

In gccgo, at least for now, goroutines are pthreads. With 6g they are multiplexed onto threads, so they are much cheaper.

In either implementation, stacks are small (a few kilobytes) and grow as needed. Goroutines therefore use little memory; you can create many of them, and they can dynamically acquire large stacks when necessary.

The programmer should not have to think about stack size; in Go, that question should not even come up.

Scheduling

Goroutines are multiplexed onto system threads as needed. When a goroutine executes a blocking system call, the other goroutines are not blocked.

We plan to handle CPU-bound goroutines the same way eventually, but for now, with 6g, if you want user-level parallelism you must set the environment variable GOMAXPROCS or call runtime.GOMAXPROCS(n).

GOMAXPROCS tells the runtime scheduler how many user-space goroutines may execute simultaneously, ideally on different CPU cores.

*gccgo always uses one thread per goroutine.
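As a small sketch (my addition, not from the slides), a call to runtime.GOMAXPROCS looks like this; the value 4 is only illustrative, and note that since Go 1.5 the default already equals the number of CPUs, so the call is rarely needed today.

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // Allow up to 4 goroutines to execute user-level code simultaneously;
    // the previous setting is returned.
    prev := runtime.GOMAXPROCS(4)
    fmt.Println("previous GOMAXPROCS was", prev)
}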

Concurrency and communication: Channels

Channels in Go

Unless two goroutines can communicate, they cannot coordinate.

Go has a type called a channel that provides communication and synchronization.

Go also provides special control structures built around channels that make concurrent programs easier to write.

Channel type

In its simplest form the type looks like this:
chan elementType

With a value of this type you can send and receive elements of type elementType.

Channels are a reference type, which means that if you assign one chan variable to another, both variables refer to the same channel. It also means that you use make to allocate one:

var c = make(chan int)
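A tiny sketch (my addition, not from the slides) of what "reference type" means in practice: after an assignment, a value sent through one variable can be received through the other, because both name the same channel.

package main

import "fmt"

func main() {
    c := make(chan int)
    d := c // d and c now refer to the same channel

    go func() { c <- 42 }() // send via one variable ...
    fmt.Println(<-d)        // ... receive via the other: prints 42
}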

Communication operator: <-

The arrow shows the direction of the data flow.

As a binary operator, <- sends the value on the right to the channel on the left:

c := make(chan int)
c <- 1 // send 1 on c

As a unary prefix operator, <- receives a value from a channel:

v = <-c   // receive a value from c, assign it to v
<-c       // receive a value, discard it
i := <-c  // receive a value, use it to initialize i

Semantics

By default, communication is synchronous. (We will discuss asynchronous communication later.) This means that:

1) A send operation on a channel blocks until a receiver is ready on that channel.
2) A receive operation on a channel blocks until a sender is ready on that channel.

Communication is therefore a form of synchronization: two goroutines exchanging data through a channel are synchronized at the moment of communication.

Let's pump some data.

func pump(ch chan int) {
    for i := 0; ; i++ { ch <- i }
}

ch1 := make(chan int)
go pump(ch1)       // pump hangs; we run
fmt.Println(<-ch1) // prints 0

Now we start a looping receiver:

func suck(ch chan int) {
    for { fmt.Println(<-ch) }
}

go suck(ch1) // a torrent of numbers appears

You can still sneak in and grab a value:

fmt.Println(<-ch1) // prints: 3141159

Functions returning channels

In the previous example, pump acted like a generator, spewing out values, but there was a lot of fuss allocating channels and so on. Let's package it up as a function that returns the channel:

func pump() chan int {
    ch := make(chan int)
    go func() {
        for i := 0; ; i++ { ch <- i }
    }()
    return ch
}

stream := pump()
fmt.Println(<-stream) // prints 0

Functions that return channels are an important idiom in Go.

Channel functions everywhere

I won't repeat the famous examples you can find elsewhere, but here are a couple worth looking at (a compact sketch of the first follows this list):

1) The prime sieve: it appears in the language specification and in the tutorial.

2) Doug McIlroy's power series paper: http://plan9.bell-labs.com/who/rsc/thread/squint.pdf

A Go version of that program is in the test suite: http://golang.org/test/chan/powser1.go
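For reference, here is a compact version of the concurrent prime sieve (my adaptation of the well-known example; the versions linked above differ in detail). Each prime spawns a filter goroutine, and the stages are connected by channels.

package main

import "fmt"

// generate sends 2, 3, 4, ... on ch.
func generate(ch chan<- int) {
    for i := 2; ; i++ {
        ch <- i
    }
}

// filter copies values from in to out, dropping multiples of prime.
func filter(in <-chan int, out chan<- int, prime int) {
    for {
        if i := <-in; i%prime != 0 {
            out <- i
        }
    }
}

func main() {
    ch := make(chan int)
    go generate(ch)
    for i := 0; i < 10; i++ { // print the first ten primes
        prime := <-ch
        fmt.Println(prime)
        ch1 := make(chan int)
        go filter(ch, ch1, prime)
        ch = ch1
    }
}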

Range and channels

The range clause of a for loop accepts a channel as an operand, in which case the loop iterates over the values received from the channel. We rewrote pump above; here we rewrite suck so that it, too, starts a goroutine:

func suck(ch chan int) {
    go func() {
        for v := range ch { fmt.Println(v) }
    }()
}

suck(pump()) // this no longer blocks

Close a channel

How does range know when the flow of data on the channel has ended? The sender calls the built-in function close:

close(ch)

The receiver uses the "comma ok" form to test whether the sender has closed the channel:

val, ok := <-ch

As long as the result is (value, true), there is still data; once the channel has been closed and drained, the result becomes (zero value, false).
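Putting the two halves together, a minimal runnable sketch (my addition): the sender closes the channel when it is done, and the receiver loops until ok becomes false.

package main

import "fmt"

func main() {
    ch := make(chan int)
    go func() {
        for i := 0; i < 3; i++ {
            ch <- i
        }
        close(ch) // sender: no more data will be sent
    }()
    for {
        val, ok := <-ch
        if !ok { // channel closed and drained
            break
        }
        fmt.Println(val) // prints 0, 1, 2
    }
}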

Use range on a channel

Using range on a channel, like this:

for value := range ch {
    use(value)
}

is equivalent to:

for {
    value, ok := <-ch
    if !ok {
        break
    }
    use(value)
}

Close

Key points:

Only the sender should call close.
Only the receiver can ask whether the channel has been closed,
and only while receiving a value (otherwise there is a race).

Call close only when it is necessary to tell the receiver that no more data will ever arrive.

In most cases close is not needed; it is not analogous to closing a file.

Channels are garbage-collected regardless.

Directionality of channels

The simplest form of a channel variable is an unbuffered (synchronous) value that can be used both to send and to receive.

A channel type may be annotated to allow only sending or only receiving:

var recvOnly <-chan int
var sendOnly chan<- int

Directionality of channels (2)

All channels are created bidirectional, but we can assign them to channel variables that have a direction. This is useful for type safety, for instance inside functions:

func sink(ch <-chan int) {
    for { <-ch }
}

func source(ch chan<- int) {
    for { ch <- 1 }
}

c := make(chan int) // bidirectional
go source(c)
go sink(c)

Synchronous channels

A synchronous channel is unbuffered. A send operation does not complete until a receiver has accepted the value.

c := make(chan int)
go func() {
    time.Sleep(60 * 1e9)
    x := <-c
    fmt.Println("received", x)
}()

fmt.Println("sending", 10)
c <- 10
fmt.Println("sent", 10)

Output:

sending 10   (prints immediately)
sent 10      (60 seconds later, these two lines appear together)
received 10

Asynchronous channels

A buffered, asynchronous channel is created by telling make the number of elements to buffer:

c := make(chan int, 50)
go func() {
    time.Sleep(60 * 1e9)
    x := <-c
    fmt.Println("received", x)
}()

fmt.Println("sending", 10)
c <- 10
fmt.Println("sent", 10)

Output:

sending 10   (prints immediately)
sent 10      (immediately)
received 10  (60 seconds later)

Buffering is not part of a type

Note that the size of the buffer is not part of the channel's type, only of the value. So the following code is legal, although dangerous:

buf := make(chan int, 1)
unbuf := make(chan int)
buf = unbuf
unbuf = buf

Buffering is a property of the value, not of the type.
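To see that the buffer belongs to the channel value rather than to the type, you can inspect it with the built-in cap (a small sketch, my addition):

package main

import "fmt"

func main() {
    buf := make(chan int, 1) // buffered
    unbuf := make(chan int)  // unbuffered
    fmt.Println(cap(buf), cap(unbuf)) // prints: 1 0

    // Both have the same type, chan int, so this assignment is legal ...
    buf = unbuf
    fmt.Println(cap(buf)) // ... but buf is now synchronous: prints 0
}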

Select

Select is a control structure in Go analogous to a switch statement, but for communication: each case must be a communication operation, either a send or a receive.

ci, cs := make(chan int), make(chan string)

select {
case v := <-ci:
    fmt.Printf("received %d from ci\n", v)
case v := <-cs:
    fmt.Printf("received %s from cs\n", v)
}

Select executes one case that can proceed, chosen at random. If no case can proceed, it blocks until one can. A default clause can always proceed.

Select semantics

Quick overview:

- Every case must be a communication (and may use :=).
- All channel expressions are evaluated.
- All expressions to be sent are evaluated.
- If exactly one communication can proceed, it does; the others are ignored.
- If several cases can proceed, select chooses one uniformly at random to execute; the others do not run.
- Otherwise:
  - If there is a default clause, that statement executes.
  - If there is no default, select blocks until some communication can proceed; channels and values are not re-evaluated.

Random bit generator

A silly but illustrative example:

c := make(chan int)
go func() {
    for {
        fmt.Println(<-c)
    }
}()

for {
    select {
    case c <- 0: // no statement, no fallthrough
    case c <- 1:
    }
}

Testing for communication

Can a communication proceed without blocking? A select with a default clause can tell us:

select {
case v := <-ch:
    fmt.Println("received", v)
default:
    fmt.Println("ch not ready for receive")
}

The default clause executes if no other case can proceed, so this is the idiomatic way to receive without blocking; a non-blocking send obviously works the same way, as sketched below.
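A minimal runnable sketch of the non-blocking send (my addition): nobody is receiving on the unbuffered channel, so the default branch runs.

package main

import "fmt"

func main() {
    ch := make(chan int) // unbuffered, and nobody is receiving
    v := 42
    select {
    case ch <- v:
        fmt.Println("sent", v)
    default:
        fmt.Println("ch not ready for send") // this branch runs
    }
}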

Timeout

Can a communication succeed within a given amount of time? The time package contains the After function:

func After(ns int64) <-chan int64

After the specified interval, it delivers a value (the then-current time) on the channel it returns.

Use it in a select to implement a timeout:

select {
case v := <-ch:
    fmt.Println("received", v)
case <-time.After(30 * 1e9):
    fmt.Println("timed out after 30 seconds")
}

Multiplexing

Channels are first-class values, which means they can themselves be sent over channels. This property makes it easy to write a service multiplexer, because the client can supply, along with its request, the channel on which to reply.

chanOfChans := make(chan chan int)

Or, more typically, something like:

type Reply struct { ... }

type Request struct {
    arg1, arg2 someType
    replyc     chan *Reply
}

Multiplexing servers

type request struct {
    a, b   int
    replyc chan int
}

type binOp func(a, b int) int

func run(op binOp, req *request) {
    req.replyc <- op(req.a, req.b)
}

func server(op binOp, service <-chan *request) {
    for {
        req := <-service // requests arrive here
        go run(op, req)  // don't wait for op to complete
    }
}

Start the server

Use the "Return channel function" idiom to create a channel for a new server:

Func startserver (op binop) chan<-*request {
Service: = Make (chan *request)
Go Server (OP, req)
Return service
}

Adderchan: = StartServer (
Func (A, b int) int {return a + B}
)

Client

The tutorial has a more detailed example; here is a variant:

func (r *request) String() string {
    return fmt.Sprintf("%d+%d=%d",
        r.a, r.b, <-r.replyc)
}

req1 := &request{7, 8, make(chan int)}
req2 := &request{17, 21, make(chan int)}

The requests are constructed; send them:

adderChan <- req1
adderChan <- req2

The results can come back in either order; r.replyc demultiplexes them:

fmt.Println(req2, req1)

Stop

As written, the multiplexing server runs forever. To shut it down cleanly, signal it over a channel. The following server has the same functionality plus a quit channel:

func server(op binOp, service <-chan *request,
    quit <-chan bool) {
    for {
        select {
        case req := <-service:
            go run(op, req) // don't wait for it
        case <-quit:
            return
        }
    }
}

Start the server

The rest of the code is much the same, with one extra channel:

func startServer(op binOp) (service chan<- *request,
    quit chan<- bool) {
    // Allocate bidirectional channels locally so they can be
    // passed to server, then return them as send-only values.
    s := make(chan *request)
    q := make(chan bool)
    go server(op, s, q)
    return s, q
}

adderChan, quitChan := startServer(
    func(a, b int) int { return a + b },
)

Stop: the client

The client is unchanged unless it wants to stop the server:

req1 := &request{7, 8, make(chan int)}
req2 := &request{17, 21, make(chan int)}
adderChan <- req1
adderChan <- req2
fmt.Println(req2, req1)

When everything is done, tell the server to exit:

quitChan <- true

Chain

package main

import (
    "flag"
    "fmt"
)

var nGoroutine = flag.Int("n", 100000, "how many")

func f(left, right chan int) { left <- 1 + <-right }

func main() {
    flag.Parse()
    leftmost := make(chan int)
    var left, right chan int = nil, leftmost

    for i := 0; i < *nGoroutine; i++ {
        left, right = right, make(chan int)
        go f(left, right)
    }

    right <- 0      // bang!

    x := <-leftmost // wait for completion
    fmt.Println(x)  // 100000
}

Example: Channel as Cache

var freeList = make(chan *Buffer, 100)
var serverChan = make(chan *Buffer)

func server() {
    for {
        b := <-serverChan // wait for work
        process(b)        // process the request held in the buffer
        select {
        case freeList <- b: // reuse the buffer if there is room
        default:            // otherwise, drop it
        }
    }
}

func client() {
    for {
        var b *Buffer
        select {
        case b = <-freeList: // grab one if available
        default:
            b = new(Buffer) // otherwise, allocate a new one
        }
        load(b)         // read the next request into b
        serverChan <- b // send the request to the server
    }
}
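The snippet above assumes a Buffer type and process/load helpers that the slides do not define; hypothetical stubs such as these would make it compile:

type Buffer struct {
    data [1024]byte
    n    int
}

func process(b *Buffer) { /* handle the request held in b */ }
func load(b *Buffer)    { /* read the next request into b */ }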

Concurrency-related topics

Go does try hard to get many aspects of concurrency right: channel sends and receives are atomic, for example, and the select statement is carefully defined and implemented.

But goroutines run in shared memory, communication networks can deadlock, multi-threaded debuggers are bad, and so on.

What do we do about it?

Go gives you the primitives

Don't program the way you would in C or C++, or even Java.

Channels give you synchronization and communication, which makes them powerful, and also easy to reason about if you use them well.

The rule is:

Do not communicate by sharing memory; instead, share memory by communicating.

The act of communication alone guarantees synchronization!

Model

For instance, use a channel to send data to a dedicated server goroutine: if only one goroutine at a time holds the pointer to the data, there is no concurrency problem.

This is the model we recommend for server design, at least as a generalization of the old "one thread per client" approach. It has been used since the 1980s and works very well.
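A minimal sketch of that model (my addition, with hypothetical names): a single owner goroutine receives work over a channel, so only it ever touches the data and no locking is needed.

package main

import "fmt"

type job struct {
    n     int
    reply chan int
}

// owner is the only goroutine that touches the running total.
func owner(jobs <-chan job) {
    total := 0
    for j := range jobs {
        total += j.n
        j.reply <- total
    }
}

func main() {
    jobs := make(chan job)
    go owner(jobs)

    r := make(chan int)
    for i := 1; i <= 3; i++ {
        jobs <- job{i, r}
        fmt.Println("total so far:", <-r) // 1, 3, 6
    }
}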

Memory model

The gory details about synchronization and shared memory are described in:

http://golang.org/doc/go_mem.html

But if you follow the approach recommended here, you will rarely need to read them.

© Bigwhite. All rights reserved.
