(Transferred from: https://blog.csdn.net/Jeanphorn/article/details/79018205)
Program Evolution
1. Using Goroutines Directly
With Go's native concurrency support, we can simply spawn a goroutine (as below) to handle each request in parallel. However, this approach has an obvious flaw: there is no bound on the number of goroutines created. If processing is even slightly time-consuming, then under 100,000 QPS on a single machine, goroutines multiply uncontrollably, memory usage spikes, throughput drops sharply, and the program may even crash.
```go
go handle(request)
```
Goroutines with a Buffered Channel
We define a buffered channel:
```go
var queue = make(chan job, MAX_QUEUE_SIZE)
```
Then we start a goroutine that consumes jobs from this channel:
```go
go func() {
	for {
		select {
		case job := <-queue:
			job.Do()
		case <-quit:
			return
		}
	}
}()
```
When a request arrives, we wrap it in a job and send it to the queue for processing:
```go
job := &Job{request}
queue <- job
```
It is true that the buffered queue improves concurrency to some extent, but it is only a palliative: under large-scale concurrency it merely delays the problem. As soon as requests arrive faster than the consumer can process them, the buffer fills up and subsequent requests block again.
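To make the buffered-channel approach concrete, here is a minimal, self-contained sketch. The `Request` and `Job` shapes follow the snippets above; the field names, `runDemo` helper, and counting logic are illustrative additions, not from the original.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const MAX_QUEUE_SIZE = 16 // small buffer for demonstration

// Request stands in for whatever data a real handler receives.
type Request struct{ ID int }

// Job wraps a request, matching the snippets above.
type Job struct{ Request Request }

func (j *Job) Do() {
	// real work would go here
}

// runDemo sends n requests through a buffered channel drained by a
// single consumer goroutine, and returns how many jobs were handled.
func runDemo(n int) int {
	queue := make(chan *Job, MAX_QUEUE_SIZE)
	quit := make(chan struct{})
	var handled int64
	var pending sync.WaitGroup

	go func() {
		for {
			select {
			case job := <-queue:
				job.Do()
				atomic.AddInt64(&handled, 1)
				pending.Done()
			case <-quit:
				return
			}
		}
	}()

	pending.Add(n)
	for i := 0; i < n; i++ {
		// If producers outpace the consumer, this send blocks once the
		// buffer is full -- the limitation described above.
		queue <- &Job{Request{ID: i}}
	}
	pending.Wait() // all jobs handled
	close(quit)    // stop the consumer
	return int(atomic.LoadInt64(&handled))
}

func main() {
	fmt.Println("handled", runDemo(8), "requests")
}
```

Note that a single consumer goroutine is the bottleneck here; adding more consumers is exactly the step the next section formalizes as a worker pool.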
2. Job Queue + Worker Pool
A buffered queue alone does not solve the fundamental problem, so we can borrow the thread-pool concept and set up a worker pool to cap the maximum number of goroutines. Each time a new job arrives, an available worker is taken from the pool to execute it. This keeps the number of goroutines under control while retaining as much concurrent processing capacity as possible.
[Figure: Job Queue + Worker Pool]
Worker Pool Implementation
First, we define a Job interface, which concrete jobs implement:
```go
type Job interface {
	Do() error
}
```
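For example, a concrete job type satisfying this interface might look like the following. The `EmailJob` name and its fields are illustrative, not from the original:

```go
package main

import "fmt"

// Job is the interface from the article: anything that can Do() work.
type Job interface {
	Do() error
}

// EmailJob is an illustrative concrete job type.
type EmailJob struct {
	To   string
	Body string
}

// Do satisfies the Job interface.
func (e *EmailJob) Do() error {
	fmt.Printf("sending email to %s: %q\n", e.To, e.Body)
	return nil
}

func main() {
	var j Job = &EmailJob{To: "user@example.com", Body: "hello"}
	if err := j.Do(); err != nil {
		fmt.Println("job failed:", err)
	}
}
```

Because workers only see the `Job` interface, any type with a `Do() error` method can be pushed through the same pool.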
Then we define the job queue and worker pool types, both built on Go channels.
```go
// define job channel
type JobChan chan Job

// define worker channel
type WorkerChan chan JobChan
```
We maintain one global job queue and one global worker pool:
```go
var (
	JobQueue   JobChan
	WorkerPool WorkerChan
)
```
Next, the worker implementation. Each worker owns a job channel, which it registers in the worker pool on startup. It then pulls jobs from its own job channel and executes them.
```go
type Worker struct {
	JobChannel JobChan
	quit       chan bool
}

func (w *Worker) Start() {
	go func() {
		for {
			// register current job channel to worker pool
			WorkerPool <- w.JobChannel
			select {
			case job := <-w.JobChannel:
				if err := job.Do(); err != nil {
					fmt.Printf("execute job failed with err: %v", err)
				}
			// receive quit event, stop worker
			case <-w.quit:
				return
			}
		}
	}()
}
```
Finally, we implement a dispatcher. The dispatcher holds a slice of worker pointers; on startup it instantiates and starts the maximum number of workers, then repeatedly takes jobs from the job queue and hands each one to an available worker from the pool.
```go
type Dispatcher struct {
	Workers []*Worker
	quit    chan bool
}

func (d *Dispatcher) Run() {
	for i := 0; i < MaxWorkerPoolSize; i++ {
		worker := NewWorker()
		d.Workers = append(d.Workers, worker)
		worker.Start()
	}
	for {
		select {
		case job := <-JobQueue:
			go func(job Job) {
				jobChan := <-WorkerPool
				jobChan <- job
			}(job)
		// stop dispatcher
		case <-d.quit:
			return
		}
	}
}
```