Load balancing
- A requester sends a request to the balancer.
    type Request struct {
        fn func() int // The operation to perform.
        c  chan int   // The channel to return the result.
    }
Note that the channel used to return the result travels inside the request itself. Channels are first-class values.
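As a quick illustration of that idea, here is a minimal sketch (not from the original): the requester builds a Request carrying its own reply channel, and whoever executes the request answers on that channel.

    c := make(chan int)
    req := Request{fn: func() int { return 6 * 7 }, c: c}

    // Some other goroutine performs the work and answers on the
    // channel that travelled inside the request.
    go func() { req.c <- req.fn() }()

    fmt.Println(<-c) // prints 42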
- A simple but illustrative simulation of a requester, a load generator:
    func requester(work chan<- Request) {
        c := make(chan int)
        for {
            // Kill some time (fake load).
            Sleep(rand.Int63n(nWorker * 2 * Second))
            work <- Request{workFn, c} // send request
            result := <-c              // wait for answer
            furtherProcess(result)
        }
    }
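The snippet keeps the original slide shorthand: Sleep and Second stand for time.Sleep and time.Second (real code needs a time.Duration conversion), while nWorker, workFn, and furtherProcess are left undefined. A minimal sketch of those assumed helpers, with placeholder names and behaviour, might be:

    const nWorker = 10 // assumed size of the worker pool

    // workFn is a placeholder task: it burns a little time and returns a number.
    func workFn() int {
        time.Sleep(time.Duration(rand.Int63n(int64(time.Second))))
        return rand.Int()
    }

    // furtherProcess is a placeholder for whatever the requester does with a result.
    func furtherProcess(result int) {
        fmt.Println("result:", result)
    }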
A channel of requests, plus some load-tracking data:

    type Worker struct {
        requests chan Request // work to do (buffered channel)
        pending  int          // count of pending tasks
        index    int          // index in the heap
    }
The balancer sends each request to the most lightly loaded worker:

    func (w *Worker) work(done chan *Worker) {
        for {
            req := <-w.requests // get Request from balancer
            req.c <- req.fn()   // call fn and send result
            done <- w           // we've finished this request
        }
    }
The requests channel (w.requests) delivers work to each worker. The balancer tracks the number of pending requests to gauge each worker's load.
Each reply goes directly back to the requester that asked for it.
- Defining the load balancer
    // The balancer needs a pool of workers and a single channel on which
    // workers report task completion.
    type Pool []*Worker

    type Balancer struct {
        pool Pool
        done chan *Worker
    }
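The slides do not show how the pool is created; one plausible initialization (an assumption, not from the original) builds nWorker workers, starts each one's work loop, and pushes them onto the heap using the heap.Interface methods shown further below:

    func NewBalancer() *Balancer {
        b := &Balancer{
            pool: make(Pool, 0, nWorker),
            done: make(chan *Worker, nWorker),
        }
        for i := 0; i < nWorker; i++ {
            w := &Worker{requests: make(chan Request, nWorker)}
            heap.Push(&b.pool, w) // Push records w.index; pending starts at zero.
            go w.work(b.done)
        }
        return b
    }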
- The balancing function
    func (b *Balancer) balance(work chan Request) {
        for {
            select {
            case req := <-work: // received a Request...
                b.dispatch(req) // ...so send it to a Worker
            case w := <-b.done: // a worker has finished...
                b.completed(w) // ...so update its info
            }
        }
    }
- Implement the balancer's pool with the heap interface (container/heap):
    // Use a heap to track load.
    func (p Pool) Less(i, j int) bool {
        return p[i].pending < p[j].pending
    }
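Less is only part of heap.Interface; the Pool also needs Len, Swap, Push, and Pop. A straightforward completion (assumed here; it also keeps each worker's index field current so heap.Remove works in completed below) could be:

    func (p Pool) Len() int { return len(p) }

    func (p Pool) Swap(i, j int) {
        p[i], p[j] = p[j], p[i]
        p[i].index, p[j].index = i, j // keep heap indexes up to date
    }

    func (p *Pool) Push(x any) {
        w := x.(*Worker)
        w.index = len(*p)
        *p = append(*p, w)
    }

    func (p *Pool) Pop() any {
        old := *p
        n := len(old)
        w := old[n-1]
        *p = old[:n-1]
        return w
    }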
- Dispatch:
    // Send Request to worker.
    func (b *Balancer) dispatch(req Request) {
        // Grab the least loaded worker...
        w := heap.Pop(&b.pool).(*Worker)
        // ...send it the task.
        w.requests <- req
        // One more in its work queue.
        w.pending++
        // Put it into its place on the heap.
        heap.Push(&b.pool, w)
    }
- Completed
    // Job is complete; update heap.
    func (b *Balancer) completed(w *Worker) {
        // One fewer in the queue.
        w.pending--
        // Remove it from heap.
        heap.Remove(&b.pool, w.index)
        // Put it into its place on the heap.
        heap.Push(&b.pool, w)
    }
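Tying the pieces together, a minimal main (assumed, not from the original; nRequester is a placeholder constant, the imports cover all the snippets above, and the slide shorthand in requester is expanded as noted earlier) starts the load generators and runs the balancer loop:

    import (
        "container/heap"
        "fmt"
        "math/rand"
        "time"
    )

    const nRequester = 100 // assumed number of load generators

    func main() {
        work := make(chan Request)
        for i := 0; i < nRequester; i++ {
            go requester(work)
        }
        NewBalancer().balance(work) // runs forever, dispatching work
    }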
A complex problem can be broken down into components that are easy to understand.
They can be handled concurrently.
The result is easy to understand, efficient, scalable, and pleasant to use.
And maybe even parallel.