The correct use of concurrency in Go


Glyph Lefkowitz recently wrote an enlightening article about some of the challenges of building highly concurrent software. If you write software and haven't read it yet, I suggest you do: it's a very good piece, full of wisdom that any modern software engineer should absorb.

There are many tidbits worth extracting, but if I may be so bold as to summarize its main point, it is this: the combination of preemptive multitasking and shared mutable state generally leads to unmanageable complexity in software development, and developers who would like to preserve some of their sanity should avoid it. Preemptive scheduling is fine for tasks that are truly parallel, but explicit cooperative multitasking is much preferable when mutable state is shared across multiple concurrent threads of execution.

Even with cooperative multitasking your code can still be complex; it merely has a chance of staying manageably complex. When a transfer of control is explicit, a reader of the code at least has some visible indication of where things might go off the rails. Without such markers, every new statement is a potential landmine of "what happens if this operation isn't atomic?" The space between statements becomes an endless dark void from which terrifying heisenbugs emerge.

For the past year or so, most of my working hours have been spent on Heka, a high-performance data, log, and metrics processing engine written in Go. One of Go's selling points is that the language ships with some very useful concurrency primitives. But how does Go's concurrency story hold up when judged by whether it encourages code that supports local reasoning?

Not very well, I'm afraid. Goroutines all have access to the same shared memory space, state is mutable by default, and Go's scheduler makes no guarantees about exactly when context switches occur. In a single-core setting, Go's runtime falls into the "implicit coroutines" category, number 4 on Glyph's list of asynchronous programming models. But when goroutines can run in parallel on multiple cores, all bets are off.
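To make that concrete, here is a small sketch of my own (not from the original article): two goroutines increment a shared counter with no synchronization, which is exactly the kind of code Go happily lets you write. The race detector will flag it immediately. It requires "fmt" in the imports.

// Deliberately racy: both goroutines read and write the shared
// variable with no synchronization.
func main() {
    counter := 0
    done := make(chan struct{})

    for i := 0; i < 2; i++ {
        go func() {
            for j := 0; j < 100000; j++ {
                counter++ // unsynchronized read-modify-write
            }
            done <- struct{}{}
        }()
    }
    <-done
    <-done
    // On a multicore machine this is usually less than 200000, because
    // increments are lost when the goroutines interleave.
    fmt.Println("counter:", counter)
}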

Go may not protect you, but that doesn't mean you can't take steps to protect yourself. By using some of the primitives that Go provides, you can write code that minimizes unexpected behavior related to preemptive scheduling. Consider the following Go version of Glyph's account transfer example (ignoring that floating-point numbers are a poor choice for storing fixed-point currency values):

func Transfer(amount float64, payer, payee *account,
    server SomeServerType) error {

    if payer.Balance() < amount {
        return errors.New("insufficient funds")
    }
    log.Printf("%s has sufficient funds", payer)
    payee.Deposit(amount)
    log.Printf("%s received payment", payee)
    payer.Withdraw(amount)
    log.Printf("%s made payment", payer)
    server.UpdateBalances(payer, payee) // Assume this is magic and always works.
    return nil
}

This is clearly unsafe if called from multiple goroutines, because they may concurrently read the same result from the balance check and then collectively withdraw more than the available balance. It would be better if the dangerous portion of the code could not be executed by more than one goroutine at a time. One way to accomplish that looks like this:

type transfer struct {
    payer  *account
    payee  *account
    amount float64
}

var xferChan = make(chan *transfer)
var errChan = make(chan error)

func init() {
    go transferLoop()
}

func transferLoop() {
    for xfer := range xferChan {
        if xfer.payer.Balance() < xfer.amount {
            errChan <- errors.New("insufficient funds")
            continue
        }
        log.Printf("%s has sufficient funds", xfer.payer)
        xfer.payee.Deposit(xfer.amount)
        log.Printf("%s received payment", xfer.payee)
        xfer.payer.Withdraw(xfer.amount)
        log.Printf("%s made payment", xfer.payer)
        errChan <- nil
    }
}

func Transfer(amount float64, payer, payee *account,
    server SomeServerType) error {

    xfer := &transfer{
        payer:  payer,
        payee:  payee,
        amount: amount,
    }

    xferChan <- xfer
    err := <-errChan
    if err == nil {
        server.UpdateBalances(payer, payee) // Still magic.
    }
    return err
}

That's a bit more code, but we've eliminated the concurrency problem by implementing a trivial event loop. When the package is first loaded, the init function spins up a goroutine running the loop. Transfer requests are passed into that loop over a channel created for the purpose, and the result is returned to the caller over an error channel. Because the channels are unbuffered, they block, so no matter how many concurrent transfer requests arrive through the Transfer function, they are served one at a time by the single running event loop.
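As a rough usage sketch (not from the original post): assuming SomeServerType is an interface with an UpdateBalances method, and using the simple account struct shown further down, a stub server lets us hammer Transfer from many goroutines and rely on the event loop to serialize them. The interface, the stub, and the starting balances here are invented for illustration; it requires "log" and "sync" in the imports.

// Assumed shape of the "magic" server; invented for this sketch.
type SomeServerType interface {
    UpdateBalances(payer, payee *account)
}

type stubServer struct{}

func (stubServer) UpdateBalances(payer, payee *account) {}

func main() {
    payer := &account{balance: 100}
    payee := &account{}

    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Every request funnels through xferChan, so transferLoop
            // examines and applies only one transfer at a time.
            if err := Transfer(25, payer, payee, stubServer{}); err != nil {
                log.Println("transfer rejected:", err)
            }
        }()
    }
    wg.Wait()
    log.Printf("payer: %f, payee: %f", payer.Balance(), payee.Balance())
}

Only four of the ten transfers can succeed, and the payer's balance never goes negative, because the balance check and the withdrawal can no longer interleave across goroutines.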

The code above admittedly looks a little awkward. A mutex would arguably be a better choice for a scenario this simple (a sketch of that alternative follows), but the point I'm trying to make is that you can isolate state-mutating operations inside a single goroutine.
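For comparison, here is a minimal sketch of that mutex-based alternative. It is not from the original article: the package-level transferMu and the locking discipline are my additions, guarding the whole transfer so the check and the mutation cannot interleave. It requires "sync" in the imports.

var transferMu sync.Mutex

func Transfer(amount float64, payer, payee *account,
    server SomeServerType) error {

    // Hold the lock across the check and the mutation so that no other
    // goroutine can observe or change the balances in between.
    transferMu.Lock()
    defer transferMu.Unlock()

    if payer.Balance() < amount {
        return errors.New("insufficient funds")
    }
    payee.Deposit(amount)
    payer.Withdraw(amount)
    server.UpdateBalances(payer, payee) // Still magic.
    return nil
}

Even if it's slightly clumsy, though, the channel-based version is good enough for most needs, and it works even with the simplest possible account struct: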

type account struct {
    balance float64
}

func (a *account) Balance() float64 {
    return a.balance
}

func (a *account) Deposit(amount float64) {
    log.Printf("Depositing: %f", amount)
    a.balance += amount
}

func (a *account) Withdraw(amount float64) {
    log.Printf("Withdrawing: %f", amount)
    a.balance -= amount
}

But such a bare-bones account implementation would be naive in practice. It might be better if the account struct itself provided some protection, by refusing any withdrawal that is greater than the current balance. What if we changed the Withdraw method to look like this?

func (a *account) Withdraw(amount float64) {
    if amount > a.balance {
        // Reject overdrafts outright.
        return
    }
    log.Printf("Withdrawing: %f", amount)
    a.balance -= amount
}

Unfortunately, this code suffers from the same problem as our original Transfer implementation: concurrent execution or an unluckily timed context switch can still leave us with a negative balance. Fortunately, the internal event-loop idea applies nicely here too; even more nicely, in fact, because the event-loop goroutine can be coupled with each individual account struct instance. Here's an example of what that might look like:

type account struct {
    balance     float64
    deltaChan   chan float64
    balanceChan chan float64
    errChan     chan error
}

func (a *account) Balance() float64 {
    return <-a.balanceChan
}

func (a *account) Deposit(amount float64) error {
    a.deltaChan <- amount
    return <-a.errChan
}

func (a *account) Withdraw(amount float64) error {
    a.deltaChan <- -amount
    return <-a.errChan
}

func (a *account) applyDelta(amount float64) error {
    newBalance := a.balance + amount
    if newBalance < 0 {
        return errors.New("insufficient funds")
    }
    a.balance = newBalance
    return nil
}

// The event loop: the only code that touches the balance directly.
func (a *account) run() {
    for {
        select {
        case delta := <-a.deltaChan:
            a.errChan <- a.applyDelta(delta)
        case a.balanceChan <- a.balance:
            // The send itself answers a pending Balance() call.
        }
    }
}

This API is slightly different: the Deposit and Withdraw methods now return errors. Rather than mutating the balance directly, they push the requested balance adjustment onto deltaChan, which the event loop running in the run method picks up and applies. Similarly, the Balance method blocks until it receives a value over balanceChan.

The key point about the code above is that all direct access to and mutation of the struct's internal data happens *within* code triggered by the event loop. If the public API methods behave and interact with the data only through the provided channels, then no matter how many concurrent calls are made to those public methods, we know that only one of them is being acted on at any given time. The event-loop code is much easier to reason about.
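For completeness, something has to allocate the channels and start the event loop. The original snippet doesn't show that part, so the newAccount constructor below is my own sketch, along with a small concurrent usage example (requires "log" and "sync" in the imports):

// Hypothetical constructor, not shown in the original snippet: it
// allocates the channels and starts the run goroutine.
func newAccount(balance float64) *account {
    a := &account{
        balance:     balance,
        deltaChan:   make(chan float64),
        balanceChan: make(chan float64),
        errChan:     make(chan error),
    }
    go a.run()
    return a
}

func main() {
    a := newAccount(100)

    var wg sync.WaitGroup
    for i := 0; i < 20; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Concurrent withdrawals: the event loop applies them one at
            // a time, so the balance can never go negative.
            if err := a.Withdraw(10); err != nil {
                log.Println("withdrawal rejected:", err)
            }
        }()
    }
    wg.Wait()
    log.Printf("final balance: %f", a.Balance())
}

Run enough of these concurrently and some withdrawals will be rejected, but the final balance always stays at or above zero.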

This pattern sits at the core of Heka's design. When Heka starts, it reads its configuration file and launches each plugin in its own goroutine. Data is fed into the plugins via channels, as are time ticks, shutdown notifications, and other control signals. Plugin authors are encouraged to implement their functionality with an event-loop style structure like the ones described above.
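As a rough illustration of the shape of such a plugin (this is not Heka's actual plugin API; the type and channel names are invented, and it requires "log" and "time" in the imports), an event loop that multiplexes data, ticks, and shutdown over channels might look like this:

// Illustrative only; does not reflect Heka's real plugin interfaces.
type plugin struct {
    dataChan chan []byte
    tickChan <-chan time.Time
    stopChan chan struct{}
}

func (p *plugin) run() {
    for {
        select {
        case data := <-p.dataChan:
            log.Printf("processing %d bytes", len(data))
        case t := <-p.tickChan:
            log.Printf("tick at %v", t)
        case <-p.stopChan:
            log.Println("shutting down")
            return
        }
    }
}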

Again, Go won't protect you. It's entirely possible to write a Heka plugin (or any Go program) that plays fast and loose with its internal data and falls prey to race conditions. But with a bit of care, and liberal use of Go's race detector (the -race flag on go build, go run, and go test), you can write code whose behavior is predictable, even in the face of preemptive scheduling.
