Go Concurrency: Learning Notes


Compiled from effective_go.html#concurrency. (This is a translation and may contain errors; corrections are welcome.)

1. Share by communicating

Do not communicate by sharing memory; Instead, share memory by communicating.

In other words, do not let goroutines communicate through shared memory; instead, have them share memory by communicating. Controlling access to a variable through a channel makes it much easier to write clear, correct programs.
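As a minimal illustration of this principle (my own sketch, not from the original text), one goroutine can own a counter while other goroutines update and read it purely by channel operations, so no lock is ever needed:

```go
package main

import "fmt"

// counter owns its state; other goroutines never touch `total` directly.
// They send increments on inc and read the current value from get,
// so no mutex is required.
func counter(inc <-chan int, get chan<- int) {
	total := 0
	for {
		select {
		case n := <-inc:
			total += n
		case get <- total:
		}
	}
}

func main() {
	inc := make(chan int)
	get := make(chan int)
	go counter(inc, get)

	for i := 0; i < 5; i++ {
		inc <- 1
	}
	fmt.Println(<-get) // 5
}
```

Only the counter goroutine ever reads or writes the total; everyone else communicates.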

2. Goroutines:

Why invent a new word, goroutine? Because existing terms (thread, coroutine, process, and so on) all carry connotations that do not quite fit. (The translator also suggests leaving it untranslated, since no Chinese word captures its meaning precisely.) A goroutine has a simple definition: it is a function executing concurrently with other goroutines in the same address space. That definition carries two facts: a goroutine is a function, and multiple goroutines execute concurrently in the same address space.

Goroutines are lightweight, costing little more than the allocation of stack space. The stacks start small and grow as needed by allocating (and freeing) heap storage. Goroutines are multiplexed onto multiple OS threads, so if one blocks (for example, while waiting for I/O), others continue to run. This design hides the complexities of thread creation and management. Prefix a function or method call with the go keyword to run it in a new goroutine; when the call completes, the goroutine exits. (The effect is similar to appending & to a command in the Unix shell to run it in the background.)

go list.Sort()  // run list.Sort concurrently; don't wait for it.

Anonymous functions are also useful in goroutine calls.

func Announce(message string, delay time.Duration) {
    go func() {
        time.Sleep(delay)
        fmt.Println(message)
    }()  // Note the parentheses - the function must be called.
}

In Go, such anonymous functions are closures: the implementation guarantees that any variables referred to by the function survive as long as the function itself is active.

This example is not very practical because the function gives no signal when it finishes. For that, we need channels.

3. Channels

Like maps, a channel is a reference type and is allocated with make. If an optional integer argument is provided, it sets the buffer size for the channel. The default is zero, which gives an unbuffered (synchronous) channel.

ci := make(chan int)            // unbuffered channel of integers
cj := make(chan int, 0)         // unbuffered channel of integers
cs := make(chan *os.File, 100)  // buffered channel of pointers to Files

A channel combines communication (the exchange of a value) with synchronization, guaranteeing that two goroutines are in a known state.

Take the background sort from the previous section as an example: a channel can let the launching goroutine wait for the sort to complete.

c := make(chan int)  // Allocate a channel.
// Start the sort in a goroutine; when it completes, signal on the channel.
go func() {
    list.Sort()
    c <- 1  // Send a signal; the value does not matter.
}()
doSomethingForAWhile()
<-c  // Wait for the sort to finish; discard the sent value.

Receivers always block until there is data to receive. If the channel is unbuffered, the sender also blocks until a receiver has taken the value. If the channel has a buffer, the sender blocks only while the buffer is full; in that case, sending waits until some receiver retrieves a value.
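These blocking rules can be seen directly in a tiny program (my own sketch, not from the source): sends into a buffered channel succeed immediately until the buffer fills.

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2) // buffer of capacity 2

	ch <- 1 // does not block: the buffer has room
	ch <- 2 // does not block: the buffer is now full
	// A third send, ch <- 3, would block here until a receiver took a value.

	fmt.Println(<-ch, <-ch, len(ch)) // 1 2 0
}
```

Values come out in FIFO order, and len reports how many values are currently buffered.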

A buffered channel can be used like a semaphore, for instance to limit throughput. In the example below, incoming requests are passed to handle, which sends a value into the channel, processes the request, and then receives a value from the channel. The capacity of the channel buffer limits the number of simultaneous calls to process.

var sem = make(chan int, MaxOutstanding)

func handle(r *Request) {
    sem <- 1    // Wait until the buffer has room.
    process(r)  // May take a long time.
    <-sem       // Done; make room for the next request.
}

func Serve(queue chan *Request) {
    for {
        req := <-queue
        go handle(req)  // Don't wait for handle to finish.
    }
}

The same functionality can be achieved by starting a fixed number of handle goroutines, all reading from the request channel. The number of goroutines then limits the number of simultaneous calls to process. This Serve function also accepts a channel on which it will receive a signal to exit; after launching the goroutines, it blocks until that signal arrives:

func handle(queue chan *Request) {
    for r := range queue {
        process(r)
    }
}

func Serve(clientRequests chan *Request, quit chan bool) {
    // Start the handlers.
    for i := 0; i < MaxOutstanding; i++ {
        go handle(clientRequests)
    }
    <-quit  // Wait for the signal to exit.
}

4. Channels of channels

One of the most important properties of Go is that a channel is a first-class value: it can be allocated and passed around like any other value. A common use of this property is to implement safe, parallel demultiplexing.

In the previous example, handle was an idealized handler for a request, but we did not define the type it was handling. If that type includes a channel on which to reply, each client can supply its own path for the answer:

type Request struct {
    args       []int
    f          func([]int) int
    resultChan chan int
}

The client provides a function, its arguments, and a channel inside the request object on which to receive the answer.

func sum(a []int) (s int) {
    for _, v := range a {
        s += v
    }
    return
}

request := &Request{[]int{3, 4, 5}, sum, make(chan int)}
// Send the request.
clientRequests <- request
// Wait for the response.
fmt.Printf("answer: %d\n", <-request.resultChan)

On the server side, the function that processes the request is:

func handle(queue chan *Request) {
    for req := range queue {
        req.resultChan <- req.f(req.args)
    }
}

Clearly much more would need to be done to make this example realistic, but it is a framework for a rate-limited, parallel, non-blocking RPC system, and there is not a mutex in sight.
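The pieces above can be assembled into one complete, runnable program; the main function here is my own glue, not from the source:

```go
package main

import "fmt"

type Request struct {
	args       []int
	f          func([]int) int
	resultChan chan int
}

func sum(a []int) (s int) {
	for _, v := range a {
		s += v
	}
	return
}

// handle reads requests from the queue and sends each answer back
// on the channel the client supplied inside the request.
func handle(queue chan *Request) {
	for req := range queue {
		req.resultChan <- req.f(req.args)
	}
}

func main() {
	clientRequests := make(chan *Request)
	go handle(clientRequests)

	request := &Request{[]int{3, 4, 5}, sum, make(chan int)}
	clientRequests <- request
	fmt.Printf("answer: %d\n", <-request.resultChan) // answer: 12
}
```

Each client carries its own reply channel, so one server goroutine can serve many clients without any shared mutable state.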

5. Parallelization

Another application of these ideas is parallelizing a computation across multiple CPU cores. If the computation can be broken into separate pieces, it can be parallelized, with a channel to signal as each piece completes.

Suppose we have an expensive operation to perform on a vector of items, and that the value of the operation on each item is independent, as in this idealized example:

type Vector []float64

// Apply the operation to v[i], v[i+1] ... up to v[n-1].
func (v Vector) DoSome(i, n int, u Vector, c chan int) {
    for ; i < n; i++ {
        v[i] += u.Op(v[i])
    }
    c <- 1  // signal that this piece is done
}

We launch the pieces independently in a loop, one per CPU. They can complete in any order; the order does not matter. After launching all the goroutines, we simply drain the channel, counting the completion signals.

const NCPU = 4  // number of CPU cores

func (v Vector) DoAll(u Vector) {
    c := make(chan int, NCPU)  // Buffering optional but sensible.
    for i := 0; i < NCPU; i++ {
        go v.DoSome(i*len(v)/NCPU, (i+1)*len(v)/NCPU, u, c)
    }
    // Drain the channel.
    for i := 0; i < NCPU; i++ {
        <-c  // wait for one piece to complete
    }
    // All done.
}
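A self-contained variant of the same idea can be run as-is; this is my own sketch, and the squaring operation stands in for the unspecified Op method of the original:

```go
package main

import (
	"fmt"
	"runtime"
)

type Vector []float64

// doChunk squares the elements of v[i:n] in place, then signals on c.
func (v Vector) doChunk(i, n int, c chan int) {
	for ; i < n; i++ {
		v[i] *= v[i]
	}
	c <- 1
}

// DoAll splits the vector into one chunk per logical CPU and waits
// for every chunk to finish by counting signals on the channel.
func (v Vector) DoAll() {
	ncpu := runtime.NumCPU()
	c := make(chan int, ncpu)
	for i := 0; i < ncpu; i++ {
		go v.doChunk(i*len(v)/ncpu, (i+1)*len(v)/ncpu, c)
	}
	for i := 0; i < ncpu; i++ {
		<-c
	}
}

func main() {
	v := Vector{1, 2, 3, 4}
	v.DoAll()
	fmt.Println(v) // [1 4 9 16]
}
```

The chunk boundaries i*len(v)/ncpu always tile the whole slice, so the code is safe even when there are more CPUs than elements (some chunks are simply empty).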

At the time this was written, the gc Go compilers (6g and friends) did not parallelize this code by default: user-level code ran on a single core. Any number of goroutines could be blocked in system calls, but by default only one could be executing user-level code at a time. To use multiple cores, you had to tell the runtime how many goroutines could execute code simultaneously. There were two ways to do this: set the environment variable GOMAXPROCS to the number of CPU cores, or import the runtime package and call runtime.GOMAXPROCS(NCPU).

(Author: Agate River. Please respect the work of others; credit the author or source when reprinting.)
