Concurrency features and cases for Go 1.5


Go's most distinctive feature is its first-class support for concurrency: with goroutines it is very easy to write concurrent code, which has made Go an important choice for network applications. Taking a bank transfer as an example, this article explains how to use goroutines to implement concurrency. It also points out that before Go 1.5, all goroutines were by default multiplexed onto a single processor (GOMAXPROCS defaulted to 1), so programs were concurrent but did not run in parallel on multiple CPU cores; from Go 1.5 onward the default is the number of available cores.

Golang Security and Concurrency

The following code is a minimal example of goroutines:

package main

func hello() {
    println("Hello!")
}

func main() {
    testchan := make(chan string)
    go hello()
    go func(c chan string) {
        println(<-c)
    }(testchan)
    testchan <- "world"
}

A goroutine is started with the keyword "go", applied to a named or anonymous function call. The call is non-blocking, because the scheduler interleaves goroutines flexibly. We also use a channel, which lets goroutines pass values to one another like a queue or pipeline, neatly solving the problem of communication between goroutines. When a value is sent on an unbuffered channel, the send blocks until a receive occurs, so there is no risk of losing the message.
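As a minimal sketch of that blocking behaviour (a standalone program of ours, not from the original article): the send below cannot complete, and therefore cannot be lost, until the receive in main is ready.

```go
package main

import "fmt"

func main() {
	ch := make(chan string)

	go func() {
		// On an unbuffered channel this send blocks until main
		// executes its receive, so the message cannot be dropped.
		ch <- "world"
	}()

	// The receive completes the rendezvous with the sender.
	fmt.Println("hello", <-ch) // prints hello world
}
```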

In versions before Go 1.5, all goroutines ran on a single processor by default (much like Node.js), which meant concurrency but not parallelism: only one goroutine executed at any moment, and the internal scheduler switched between them so that every goroutine made progress.

The following code illustrates this single-processor behaviour:

package main

func main() {
    testchan := make(chan int)

    finite_func := func() {
        testchan <- 1
    }

    infinite_func := func() {
        for {
        }
        testchan <- 1 // never reached
    }

    go finite_func()
    go infinite_func()

    println(<-testchan)
}

The first goroutine immediately sends the value 1 on the channel, while the second loops forever. On a single processor, if the tight loop is scheduled it never yields (older versions of Go could not preempt a loop that makes no function calls), so the program can hang; with more than one processor, the first goroutine runs regardless and the program exits as soon as its result is received.

That covers the basics of goroutines. Now let us look at race conditions, using a simple online bank-transfer case: each request transfers money from account A to account B, and the bank needs to move the cash and output the new account balances:

package main

import (
    "fmt"
    "net/http"
    "time"
)

type User struct {
    Cash int
}

func (u *User) SendCash(to *User, amount int) bool {
    if u.Cash < amount {
        return false
    }
    /* delay to demonstrate the race condition */
    time.Sleep(100 * time.Millisecond)
    u.Cash = u.Cash - amount
    to.Cash = to.Cash + amount
    return true
}

func main() {
    me := User{Cash: 500} // example balances; the original figures are illegible
    you := User{Cash: 500}
    startingCash := you.Cash
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        me.SendCash(&you, 50)
        fmt.Fprintf(w, "I have $%d\n", me.Cash)
        fmt.Fprintf(w, "You have $%d\n", you.Cash)
        fmt.Fprintf(w, "Total transferred: $%d\n", you.Cash-startingCash)
    })
    http.ListenAndServe(":8080", nil)
}

This is a typical Go web application. It defines the User data structure, and SendCash is the operation that transfers money between two users. Using the net/http package we create a simple HTTP server and route every request to a SendCash call that transfers $50. Under normal operation the code behaves as expected: each request transfers $50, and once a user's balance reaches $0 no more money can be transferred, because there is none left. But if we fire many requests at it quickly, the program keeps transferring money and the account balance goes negative.

This is the race condition often discussed in textbooks. In this code, checking the account balance is separated from the withdrawal. Imagine one request has just completed the balance check but has not yet taken the money, i.e. has not yet decremented the balance, while another request also checks the balance and finds it still above zero; both requests then withdraw, and the balance goes negative. This is a classic "check-then-act" race, and a common concurrency bug.
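The bad interleaving can be reproduced deterministically, without goroutines at all, by running the two halves of check-then-act in the order an unlucky schedule would (the Account type and method names here are illustrative, not from the article's code):

```go
package main

import "fmt"

type Account struct{ Cash int }

// The check and the withdrawal are two separate steps; the gap
// between them is exactly where the race lives.
func (a *Account) canAfford(amount int) bool { return a.Cash >= amount }
func (a *Account) withdraw(amount int)       { a.Cash -= amount }

func main() {
	acct := &Account{Cash: 50}

	// Two "requests" both pass the check before either withdraws,
	// just as two goroutines could under a bad schedule.
	ok1 := acct.canAfford(50)
	ok2 := acct.canAfford(50)
	if ok1 {
		acct.withdraw(50)
	}
	if ok2 {
		acct.withdraw(50)
	}

	fmt.Println("balance:", acct.Cash) // prints balance: -50
}
```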

So how do we fix it? We certainly cannot remove the check; instead we must ensure nothing else happens between the check and the withdrawal. In other languages this is done with a lock: while the account is being updated, the lock forbids other threads from touching it, guaranteeing that only one thread operates at a time. This is a mutual-exclusion lock, or mutex.

Go supports the same locking approach, as follows:

package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

type User struct {
    Cash int
}

var transferLock *sync.Mutex

func (u *User) SendCash(to *User, amount int) bool {
    transferLock.Lock()
    /* defer runs this call whenever SendCash exits */
    defer transferLock.Unlock()
    if u.Cash < amount {
        return false
    }
    /* delay to demonstrate the race condition */
    time.Sleep(100 * time.Millisecond)
    u.Cash = u.Cash - amount
    to.Cash = to.Cash + amount
    return true
}

func main() {
    transferLock = &sync.Mutex{}
    me := User{Cash: 500} // example balances; the original figures are illegible
    you := User{Cash: 500}
    startingCash := you.Cash
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        me.SendCash(&you, 50)
        fmt.Fprintf(w, "I have $%d\n", me.Cash)
        fmt.Fprintf(w, "You have $%d\n", you.Cash)
        fmt.Fprintf(w, "Total transferred: $%d\n", you.Cash-startingCash)
    })
    http.ListenAndServe(":8080", nil)
}

But locking obviously reduces concurrency, which is the biggest enemy of concurrent design. In Go we recommend using channels instead: with an event-loop style we can achieve concurrency more flexibly. We dedicate a background goroutine to listening on a channel, and whenever data arrives on the channel it performs the transfer immediately. Because that goroutine reads the channel sequentially, the race is neatly avoided, and no state variables are needed to guard against contention.

package main

import (
    "fmt"
    "net/http"
)

type User struct {
    Cash int
}

type Transfer struct {
    Sender    *User
    Recipient *User
    Amount    int
}

func sendCashHandler(transferchan chan Transfer) {
    var val Transfer
    for {
        val = <-transferchan
        val.Sender.SendCash(val.Recipient, val.Amount)
    }
}

/* SendCash is the same as before */

func main() {
    me := User{Cash: 500} // example balances; the original figures are illegible
    you := User{Cash: 500}
    startingCash := you.Cash
    transferchan := make(chan Transfer)
    go sendCashHandler(transferchan)
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        transfer := Transfer{Sender: &me, Recipient: &you, Amount: 50}
        transferchan <- transfer
        fmt.Fprintf(w, "I have $%d\n", me.Cash)
        fmt.Fprintf(w, "You have $%d\n", you.Cash)
        fmt.Fprintf(w, "Total transferred: $%d\n", you.Cash-startingCash)
    })
    http.ListenAndServe(":8080", nil)
}

The code above creates a more reliable system that avoids the race, but it introduces another security issue: denial of service (DoS). If the transfer operation slows down, incoming requests must wait for the handler goroutine to read new data from the channel; while it is busy with a transfer it cannot read, so a flood of requests can make the system stop responding, leaving it vulnerable to DoS attacks.

A basic mechanism such as a buffered channel can absorb some of this, but a buffered channel has a fixed capacity and cannot hold unlimited request data. A better solution is Go's excellent "select" statement:

http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    transfer := Transfer{Sender: &me, Recipient: &you, Amount: 50}
    /* attempt the transfer */
    result := make(chan int)
    go func(transferchan chan<- Transfer, transfer Transfer, result chan<- int) {
        transferchan <- transfer
        result <- 1
    }(transferchan, transfer, result)
    select {
    case <-result:
        fmt.Fprintf(w, "I have $%d\n", me.Cash)
        fmt.Fprintf(w, "You have $%d\n", you.Cash)
        fmt.Fprintf(w, "Total transferred: $%d\n", you.Cash-startingCash)
    case <-time.After(time.Second * 10):
        fmt.Fprintf(w, "Your request has been received, but is processing slowly")
    }
})

This brings back the event loop: we wait at most 10 seconds, and if the timeout expires we return a message telling the user that the request has been accepted but may take some time to process. With this approach we reduce the possibility of a DoS attack, and a genuinely robust system that handles transfers concurrently, without any locks, is born.
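A related variant combines the buffered channel mentioned earlier with select's default case: the buffer absorbs short bursts, and when it fills the server can fail fast instead of blocking. A standalone sketch of ours, with illustrative values:

```go
package main

import "fmt"

func main() {
	// A buffered channel accepts up to cap(ch) sends without a
	// waiting receiver, smoothing short bursts of requests.
	requests := make(chan int, 2)
	requests <- 1
	requests <- 2 // both succeed with no receiver at all

	fmt.Println(len(requests), cap(requests)) // prints 2 2

	select {
	case requests <- 3:
		fmt.Println("queued")
	default:
		// Buffer full: the default case makes the send non-blocking,
		// so we can reject the request instead of hanging.
		fmt.Println("server busy, try again later")
	}
}
```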

