# Thread Pooling in Go Programming

This is a repost, so the information may have evolved or changed since it was written.

*Translator's note: After some time with Go, I learned how to use an unbuffered channel to create a pool of goroutines. I like that implementation even better than the one this post describes. Even so, the post still has value for what it covers: [https://github.com/goinggo/work](https://github.com/goinggo/work)*

## Introduction

In my career as a server developer, thread pools have been the key to building robust code on the Microsoft stack. Microsoft failed at this in .NET by assigning a single thread pool to each process and assuming it could be managed as everything ran concurrently. I realized long ago that this was impossible, at least for the servers I was developing.

When I built systems with the Win32 API in C/C++, I created an abstract class on top of IOCP that gave me thread pools I could throw work at. This worked very well, and I could specify the number of pools and the concurrency level (the number of threads allowed to execute concurrently). I carried this code through my years of C# development as well. If you want to learn more, I wrote an article about it years ago: http://www.theukwebdesigncompany.com/articles/iocp-thread-pooling.php. Using IOCP gave me the performance and flexibility I needed. Incidentally, the .NET thread pool uses IOCP underneath.

The idea of a thread pool is fairly simple. Work comes into the server and needs to be processed. Most of this work is asynchronous in nature, but it doesn't have to be. Much of the time the work comes from an internal communication process. The thread pool queues the work, and then a thread from the pool is assigned to perform it. The work is executed in the order it was received. The pool provides a great pattern for performing work efficiently.
Imagine creating a new thread every time a piece of work needed to be processed: that would put a heavy burden on the operating system and cause serious performance problems.

So how do you tune a thread pool? You need to find the number of threads that lets the pool perform work the fastest. When all the threads are busy processing tasks, new tasks stay in the queue. You want this, because at some point having more threads actually makes processing slower. There are several reasons for this, such as the number of CPU cores in the machine and the ability of your database to handle requests. Testing will reveal the right number.

I always start by finding out how many cores the machine has and what type of work is being processed. Does the work block, and for how long on average? On the Microsoft stack I found that running three active threads per core gave the best performance for most workloads. In Go, I don't know what the best numbers are yet. You can also create different thread pools for different types of work. Because each pool can be configured, you can spend time tuning the server for maximum throughput. This type of command and control is essential for maximizing a server's capabilities.

In Go I don't create threads but goroutines. Goroutines function like multi-threaded routines, but Go manages the operating-system-level threads they actually run on. To learn more about concurrency in Go, check out this document: [http://golang.org/doc/effective_go.html#concurrency](http://golang.org/doc/effective_go.html#concurrency).

I created two packages, called workpool and jobpool. They implement pooling with channels and goroutines.

## Work Pool (workpool)

This package creates a pool of goroutines dedicated to processing work posted into the pool. A single goroutine is responsible for queuing the work.
The queuing routine provides a safe work queue, keeps a count of the amount of work in the queue, and reports an error when the queue is full. Posting work into the queue is a blocking call, so the caller knows whether the work made it into the queue. The pool also keeps counts of the number of goroutines actively working.

Here is some sample code for using the workpool (reconstructed from the garbled listing; the sleep durations and loop bound were lost in the scrape and are filled with plausible values):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"runtime"
	"strconv"
	"time"

	"github.com/goinggo/workpool"
)

type MyWork struct {
	Name      string
	BirthYear int
	WP        *workpool.WorkPool
}

func (mw *MyWork) DoWork(workRoutine int) {
	fmt.Printf("%s : %d\n", mw.Name, mw.BirthYear)
	fmt.Printf("Q:%d R:%d\n", mw.WP.QueuedWork(), mw.WP.ActiveRoutines())

	// Simulate some delay
	time.Sleep(100 * time.Millisecond)
}

func main() {
	runtime.GOMAXPROCS(runtime.NumCPU())

	workPool := workpool.New(runtime.NumCPU(), 800)

	shutdown := false // Race Condition, Sorry

	go func() {
		// Post a batch of work (the count is a stand-in value).
		for i := 0; i < 1000; i++ {
			work := MyWork{
				Name:      "A" + strconv.Itoa(i),
				BirthYear: i,
				WP:        workPool,
			}

			if err := workPool.PostWork("routine", &work); err != nil {
				fmt.Printf("ERROR: %s\n", err)
				time.Sleep(100 * time.Millisecond)
			}

			if shutdown == true {
				return
			}
		}
	}()

	fmt.Println("Hit any key to exit")
	reader := bufio.NewReader(os.Stdin)
	reader.ReadString('\n')

	shutdown = true

	fmt.Println("Shutting Down")
	workPool.Shutdown("routine")
}
```

Looking at main, we create a pool whose number of routines is based on the number of cores on the machine: one goroutine per core. If every core is busy, you can't perform any more work anyway. Again, performance testing will reveal which number works best. The second parameter is the size of the queue. In this case I made the queue large enough (800) to ensure all the requests can get in.

The MyWork type defines the state I need to perform the work. The method DoWork is required because it implements the interface that the PostWork call expects.
This method must be implemented by any object you want the pool to process. DoWork is doing two things. First, it displays the state of the object. Second, it reports the real-time count of items in the queue and the number of goroutines executing concurrently. These values can be used to check the health of the pool and for performance testing.

Finally, a dedicated goroutine loops, posting work into the pool. At the same time, the pool is executing DoWork against each object in the queue. Eventually the posting goroutine finishes, and the pool keeps performing its work. Whenever we hit enter, the program shuts down gracefully.

In this sample program, PostWork can return an error. That is because PostWork guarantees that the work is either placed in the queue or the call fails. The only reason for the call to fail is a full queue, which makes setting the queue length an important consideration.

## Job Pool (jobpool)

The jobpool package is similar to workpool except for one implementation detail: it maintains two queues, one for normal processing and one for high-priority processing. Pending high-priority jobs always get processed before pending normal jobs. The use of two queues makes jobpool a bit more complex than workpool. If you don't need high-priority processing, using workpool will be faster and more efficient.

Here is some sample code for using the jobpool (reconstructed from the garbled listing; the final sleep duration was lost in the scrape and is filled with a plausible value):

```go
package main

import (
	"fmt"
	"time"

	"github.com/goinggo/jobpool"
)

type WorkProvider1 struct {
	Name string
}

func (wp *WorkProvider1) RunJob(jobRoutine int) {
	fmt.Printf("Perform Job : Provider 1 : Started: %s\n", wp.Name)
	time.Sleep(2 * time.Second)
	fmt.Printf("Perform Job : Provider 1 : DONE: %s\n", wp.Name)
}

type WorkProvider2 struct {
	Name string
}

func (wp *WorkProvider2) RunJob(jobRoutine int) {
	fmt.Printf("Perform Job : Provider 2 : Started: %s\n", wp.Name)
	time.Sleep(5 * time.Second)
	fmt.Printf("Perform Job : Provider 2 : DONE: %s\n", wp.Name)
}

func main() {
	jobPool := jobpool.New(2, 1000)

	jobPool.QueueJob("main", &WorkProvider1{"Normal Priority : 1"}, false)

	fmt.Printf("*******> QW: %d AR: %d\n",
		jobPool.QueuedJobs(),
		jobPool.ActiveRoutines())

	time.Sleep(1 * time.Second)

	jobPool.QueueJob("main", &WorkProvider1{"Normal Priority : 2"}, false)
	jobPool.QueueJob("main", &WorkProvider1{"Normal Priority : 3"}, false)
	jobPool.QueueJob("main", &WorkProvider2{"High Priority : 4"}, true)

	fmt.Printf("*******> QW: %d AR: %d\n",
		jobPool.QueuedJobs(),
		jobPool.ActiveRoutines())

	time.Sleep(15 * time.Second) // wait long enough for the queued jobs to finish

	jobPool.Shutdown("main")
}
```

In this sample code we create two worker-type structs. Each worker can be considered an independent job in the system. In main, we create a job pool with 2 goroutines that supports 1000 pending jobs. First we create three WorkProvider1 objects and queue them with the priority flag set to false. Then we create a WorkProvider2 object and queue it with the priority flag set to true.

Because the pool has 2 goroutines, the first two jobs that were queued begin processing. As soon as one of them completes, the next job retrieved will be the WorkProvider2 job, because it was placed in the high-priority queue.

To get a copy of the workpool and jobpool packages, visit [github.com/goinggo](https://github.com/goinggo). As always, I hope this code can help you in some small way.

Via: https://www.ardanlabs.com/blog/2013/05/thread-pooling-in-go-programming.html

Author: William Kennedy | Translator: gogeof | Proofreader: polaris1119

This article was translated by GCTT and first published on the Go Language Chinese Network. Want to join the ranks of translators and make your own contribution to open source? You are welcome to join GCTT!

Translations are published for learning and exchange only, under the terms of the CC-BY-NC-SA license. If this work has infringed on your interests, please contact us promptly. When reposting, please observe the CC-BY-NC-SA license and keep the original/translation links and author/translator information intact.

This article represents only the author's knowledge and views; if you hold a different opinion, feel free to leave a comment below.
