This is an original article; please credit the source when reprinting. Blog address: https://segmentfault.com/u/to ... If you find it helpful, please share it with your friends. Thank you for your support.
I am currently between jobs. I used to be confused about how to handle highly concurrent HTTP requests in Golang; over the past few days I read a lot of related blog posts, but they were inconclusive and I still did not know how to write the actual code.
This afternoon, in the developer headline (Toutiao) app, I came across an article by a foreign engineer on handling a million requests per minute with Golang. After reading the code in that article I wrote my own version; below I write down my own takeaways.
Core points
Put each request into a queue, and have a fixed number of goroutines (for example, as many as there are CPU cores) form a worker pool. The workers in the pool read jobs from the queue and execute them; ideally, all CPU cores are executing tasks in parallel.
Example Code
The following code is a simplification based on my own understanding, mainly to express the idea. In real back-end development it should be adapted to the actual scenario.
package main

import (
	"runtime"
	"time"
)

// doTask simulates a time-consuming operation.
func doTask() {
	time.Sleep(1 * time.Second)
}

// handle simulates an HTTP interface: each request is abstracted into a Job
// and pushed onto the queue. (The loop bound was lost in the original text;
// 100000 is only a placeholder.)
func handle() {
	for i := 0; i < 100000; i++ {
		job := Job{}
		JobQueue <- job
	}
}

var (
	// The CPU core count is the maximum degree of parallelism; extra workers
	// cannot speed up execution of the tasks and only add switching overhead.
	MaxWorker = runtime.NumCPU()
	MaxQueue  = 200000
)

// Worker reads jobs from the shared queue and executes them.
type Worker struct {
	quit chan bool
}

func NewWorker() Worker {
	return Worker{quit: make(chan bool)}
}

// Start starts the run loop for the worker, listening for a quit channel in
// case we need to stop it.
func (w Worker) Start() {
	go func() {
		for {
			select {
			case <-JobQueue:
				// We have received a work request.
				doTask()
			case <-w.quit:
				// We have received a signal to stop.
				return
			}
		}
	}()
}

// Stop signals the worker to stop listening for work requests.
func (w Worker) Stop() {
	go func() {
		w.quit <- true
	}()
}

type Job struct{}

var JobQueue chan Job = make(chan Job, MaxQueue)

type Dispatcher struct{}

func NewDispatcher() *Dispatcher {
	return &Dispatcher{}
}

// Run starts MaxWorker workers.
func (d *Dispatcher) Run() {
	for i := 0; i < MaxWorker; i++ {
		worker := NewWorker()
		worker.Start()
	}
}
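The fragments above still need to be wired together. The main function below is a sketch of my own and not part of the referenced article: it starts the worker pool and exposes a hypothetical /work endpoint that turns each incoming request into a Job. It assumes "log" and "net/http" are added to the import block above. The select with a default case makes the enqueue non-blocking, so when the queue is full the handler sheds load instead of tying up the request goroutine.

// Wiring sketch (my own addition, not from the referenced article).
// Requires "log" and "net/http" added to the import block above.
func main() {
	NewDispatcher().Run() // start MaxWorker workers

	// A hypothetical endpoint: each request becomes a Job on the queue and the
	// response returns immediately while a worker performs the task.
	http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
		select {
		case JobQueue <- Job{}:
			w.WriteHeader(http.StatusAccepted)
		default:
			// Queue full: shed load rather than block the request goroutine.
			http.Error(w, "server busy", http.StatusServiceUnavailable)
		}
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

With this in place, a request such as curl -i localhost:8080/work would return 202 Accepted immediately while a worker spends one second on the simulated task in the background.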
In my view, the number of CPU cores is the maximum degree of parallelism a program can achieve, so using more workers than there are CPU cores does not improve task execution speed.
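This claim is easy to check empirically for compute-bound tasks. The standalone sketch below is my own (the cpuBound function and the job counts are arbitrary stand-ins, not from the referenced article): it runs the same batch of CPU-heavy tasks with 1, NumCPU, and 4×NumCPU workers. For pure computation the elapsed time should stop improving once the worker count reaches the core count; tasks that mostly block, like the simulated time.Sleep above, behave differently.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// cpuBound stands in for a compute-heavy task; the amount of work is arbitrary.
func cpuBound() {
	sum := 0
	for i := 0; i < 50000000; i++ {
		sum += i
	}
	_ = sum
}

// run executes jobs tasks using the given number of worker goroutines reading
// from one queue and returns the elapsed wall-clock time.
func run(workers, jobs int) time.Duration {
	queue := make(chan struct{}, jobs)
	for i := 0; i < jobs; i++ {
		queue <- struct{}{}
	}
	close(queue)

	start := time.Now()
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range queue {
				cpuBound()
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	n := runtime.NumCPU()
	jobs := 4 * n
	for _, workers := range []int{1, n, 4 * n} {
		fmt.Printf("workers=%d jobs=%d elapsed=%v\n", workers, jobs, run(workers, jobs))
	}
}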
The above is only my personal understanding, and I am not sure whether my grasp of concurrent programming in Golang is correct. If there are mistakes, I hope more experienced readers will point them out. Thank you in advance.