Load Balancing
- A requester sends a request to the balancer:
```go
type Request struct {
    fn func() int // The operation to perform.
    c  chan int   // The channel to return the result.
}
```
Note that the channel for the reply is placed inside the request: channels are first-class values.
- An artificial but illustrative simulation of a requester: a load generator.
```go
func requester(work chan<- Request) {
    c := make(chan int)
    for {
        // Kill some time (fake load).
        Sleep(rand.Int63n(nWorker * 2 * Second))
        work <- Request{workFn, c} // send request
        result := <-c              // wait for answer
        furtherProcess(result)
    }
}
```
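The snippet above is slide-level code: Sleep, workFn, furtherProcess, nWorker, and Second are never defined in the listing. A minimal sketch of plausible stand-ins (all of these names and values are assumptions made here so the requester compiles, not part of any library) might look like this:

```go
import (
    "math/rand"
    "time"
)

const (
    nWorker = 4                  // assumed pool size
    Second  = 1000 * 1000 * 1000 // nanoseconds in a second
)

// Sleep pauses for the given number of nanoseconds.
func Sleep(ns int64) { time.Sleep(time.Duration(ns)) }

// workFn is a placeholder operation; any func() int would do.
func workFn() int { return rand.Intn(100) }

// furtherProcess consumes the result; here it simply ignores it.
func furtherProcess(result int) { _ = result }
```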
The worker: a channel of requests, plus some load-tracking data.
```go
type Worker struct {
    requests chan Request // work to do (buffered channel)
    pending  int          // count of pending tasks
    index    int          // index in the heap
}
```
The balancer sends each request to the most lightly loaded worker:
```go
func (w *Worker) work(done chan *Worker) {
    for {
        req := <-w.requests // get Request from balancer
        req.c <- req.fn()   // call fn and send result
        done <- w           // we've finished this request
    }
}
```
The channel of requests (w.requests) delivers requests to each worker. The balancer tracks the number of pending requests as a measure of load.
Each response goes directly to its requester.
- Defining a Load Balancer
```go
// The balancer needs a pool of workers and a single channel
// by which workers report task completion.
type Pool []*Worker

type Balancer struct {
    pool Pool
    done chan *Worker
}
```
- The balance function
```go
func (b *Balancer) balance(work chan Request) {
    for {
        select {
        case req := <-work: // received a Request...
            b.dispatch(req) // ...so send it to a Worker
        case w := <-b.done: // a worker has finished...
            b.completed(w)  // ...so update its info
        }
    }
}
```
- The pool of workers is implemented with the standard heap interface:
```go
// Use a heap to track the load.
func (p Pool) Less(i, j int) bool {
    return p[i].pending < p[j].pending
}
```
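Less alone does not satisfy container/heap: Pool also has to provide Len, Swap, Push, and Pop, and Swap must keep each worker's index field in sync with its heap position, because completed later calls heap.Remove(&b.pool, w.index). A sketch of the remaining methods (an assumption of how the pool could be completed, not code from the original article) could be:

```go
func (p Pool) Len() int { return len(p) }

func (p Pool) Swap(i, j int) {
    p[i], p[j] = p[j], p[i]
    p[i].index = i // keep indices in sync with heap positions
    p[j].index = j
}

func (p *Pool) Push(x interface{}) {
    w := x.(*Worker)
    w.index = len(*p)
    *p = append(*p, w)
}

func (p *Pool) Pop() interface{} {
    old := *p
    n := len(old)
    w := old[n-1]
    *p = old[:n-1]
    return w
}
```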
- Dispatch:
```go
// Send Request to worker.
func (b *Balancer) dispatch(req Request) {
    // Grab the least loaded worker...
    w := heap.Pop(&b.pool).(*Worker)
    // ...send it the task.
    w.requests <- req
    // One more in its work queue.
    w.pending++
    // Put it into its place on the heap.
    heap.Push(&b.pool, w)
}
```
- Completed:
```go
// Job is complete; update the heap.
func (b *Balancer) completed(w *Worker) {
    // One fewer in the queue.
    w.pending--
    // Remove it from the heap.
    heap.Remove(&b.pool, w.index)
    // Put it into its place on the heap.
    heap.Push(&b.pool, w)
}
```
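To see how the pieces fit together, here is one possible wiring, sketched under the assumptions introduced earlier (the nWorker constant, the requester helper, the heap methods on Pool) plus an assumed nRequester constant and an imported "container/heap". It is an illustration, not part of the original design:

```go
const nRequester = 100 // assumed number of load generators

func main() {
    work := make(chan Request)
    done := make(chan *Worker, nWorker)
    b := &Balancer{done: done}

    // Start the workers and push them onto the heap,
    // which also sets each worker's index field.
    for i := 0; i < nWorker; i++ {
        w := &Worker{requests: make(chan Request, nRequester)}
        heap.Push(&b.pool, w)
        go w.work(done)
    }

    // Start the requesters, then let the balancer run forever.
    for i := 0; i < nRequester; i++ {
        go requester(work)
    }
    b.balance(work)
}
```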
A complex problem can be broken down into pieces that are easy to understand.
The pieces can be processed concurrently.
The result is easy to understand, efficient, scalable, and correct.
Maybe even parallel.