Background:
The previous post introduced the Leader/Follower thread pool model used in dcm4chee, whose main purpose is to save context switches and improve efficiency. This post in the "DICOM Open Source Library Multithreading Analysis" series highlights the ThreadPoolQueue thread pool used in fo-dicom.
ThreadPoolQueue in fo-dicom:
Let's first take a look at the custom data structures in the ThreadPoolQueue code.
public class ThreadPoolQueue<T> {
    private class WorkItem {
        public T Group;
        public Action Action;
        public WaitCallback Callback;
        public object State;
    }

    private class WorkGroup {
        public T Key;
        public object Lock = new object();
        public volatile bool Executing = false;
        public Queue<WorkItem> Items = new Queue<WorkItem>();

        public WorkGroup(T key) {
            Key = key;
        }
    }

    private object _lock = new object();
    private volatile bool _stopped = false;
    private Dictionary<T, WorkGroup> _groups;

    public ThreadPoolQueue() {
        _groups = new Dictionary<T, WorkGroup>();
        Linger = 200;   // default linger time; this constant was garbled in the source
        DefaultGroup = default(T);
    }
    ......
}
As the structure above shows, the ThreadPoolQueue custom thread pool queue groups work items by a key of type T and stores each item's processing delegate (an Action or a WaitCallback) along with it.
Compared with the traditional system ThreadPool, ThreadPoolQueue maintains a Dictionary<T, WorkGroup> that maps each group key to its own FIFO queue of work items.
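To make the grouping concrete, here is a minimal sketch of what the accompanying Queue method plausibly looks like, written against the structure above (a simplified illustration, not the verbatim fo-dicom source):

public void Queue(T groupKey, WaitCallback callback, object state) {
    if (_stopped) return;
    var item = new WorkItem { Group = groupKey, Callback = callback, State = state };
    lock (_lock) {
        WorkGroup group;
        if (!_groups.TryGetValue(groupKey, out group)) {
            // first item for this key: create the group on demand
            group = new WorkGroup(groupKey);
            _groups.Add(groupKey, group);
        }
        lock (group.Lock) {
            group.Items.Enqueue(item);   // FIFO within the group
        }
    }
    Execute(groupKey);                   // try to start a worker for this group
}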
private void Execute(T groupKey) {
    if (_stopped)
        return;

    WorkGroup group = null;
    lock (_lock) {
        if (!_groups.TryGetValue(groupKey, out group))
            return;
    }

    lock (group.Lock) {
        if (group.Executing)
            return;
        if (group.Items.Count == 0 && !group.Key.Equals(DefaultGroup)) {
            _groups.Remove(groupKey);
            System.Console.WriteLine("Remove WorkGroup Key is {0}", group.Key);
            return;
        }
        group.Executing = true;
        ThreadPool.QueueUserWorkItem(ExecuteProc, group);
    }
}
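Execute hands the whole WorkGroup to the system thread pool via ExecuteProc, which is not shown above. Inferred from the structure, a plausible sketch of that callback (an illustration, not the verbatim fo-dicom code):

private void ExecuteProc(object state) {
    var group = (WorkGroup)state;
    WorkItem item = null;
    lock (group.Lock) {
        if (group.Items.Count > 0)
            item = group.Items.Dequeue();  // one item at a time preserves FIFO order
    }
    if (item != null) {
        if (item.Action != null) item.Action();
        else if (item.Callback != null) item.Callback(item.State);
    }
    lock (group.Lock) {
        group.Executing = false;           // release the group for the next item
    }
    Execute(group.Key);                    // drain remaining items, or retire an empty non-default group
}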
Together with earlier posts in this column, such as "DICOM: Anatomy of an Open Source Library Implementation of the DICOM3.0 Network Communication Protocol" and "DICOM: DICOM Open Source Library Multithreading Analysis: LF_ThreadPool in dcm4chee", the overall logic by which the fo-dicom library responds to DICOM requests can be summarized as follows:
Here ThreadPoolQueue is used to process P-DATA-TF PDUs, i.e. the P-DATA messages of the DICOM Upper Layer protocol (for details, refer to the earlier posts "DICOM: DICOM3.0 Network Communication Protocol (III)", "DICOM: DICOM3.0 Network Communication Protocol (Continued)", "DICOM Medical Image Processing: DICOM Network Transmission", and "DICOM Medical Image Processing: A Comprehensive Analysis of the Communication Service Module in the DICOM3.0 Standard"). According to the MessageID of each message, the corresponding processing task is placed into the matching group, which keeps the whole message flow executing in FIFO order. In addition, after the tasks have executed, only the default group is kept in the pool while temporary groups are removed, reducing the waste of system resources.
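To illustrate the idea, a hypothetical dispatch keyed by MessageID might look like the following (OnPDataTF, ProcessPdv, and _pduQueue are illustrative names, not fo-dicom's actual API):

// Hypothetical dispatch: one FIFO lane per DICOM MessageID.
private ThreadPoolQueue<ushort> _pduQueue = new ThreadPoolQueue<ushort>();

private void OnPDataTF(ushort messageId, byte[] pdv) {
    // All PDVs of the same message execute in arrival order;
    // PDVs belonging to different messages may run in parallel.
    _pduQueue.Queue(messageId, state => ProcessPdv((byte[])state), pdv);
}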
ThreadPoolQueue Local Test Example:
To demonstrate that ThreadPoolQueue adds FIFO task-ordering control on top of the .NET system ThreadPool, I wrote a simple local test program. The demo code is as follows:
private static ThreadPoolQueue<string> threadPool = new ThreadPoolQueue<string>();
private static string[] groups = new string[] { "group-0", "group-1", "group-2", "group-3", "group-4" };
private static Dictionary<string, List<int>> results = new Dictionary<string, List<int>>();
private static object mutex = new object();

static void Main(string[] args) {
    threadPool.DefaultGroup = "group-0";
    for (int i = 0; i < 100; ++i) {        // loop count was garbled in the source; 100 is assumed
        threadPool.Queue(groups[i % 5], ThreadProcessing, i);
        Thread.Sleep(500);                 // delay was garbled in the source; 500 ms is assumed
    }
    System.Console.ReadLine();

    foreach (var result in results.Keys.ToList()) {
        System.Console.WriteLine("Group {0}", result);
        foreach (var record in results[result]) {
            System.Console.Write("Item={0}\t", record);
        }
        System.Console.WriteLine();
    }
    System.Console.ReadKey();
}

private static void ThreadProcessing(object state) {
    int record = (int)state;
    Thread.Sleep(2 * 1000);                // constant was garbled in the source; 2 seconds is assumed
    lock (mutex) {
        List<int> recordList = new List<int>();
        if (!results.TryGetValue(groups[record % 5], out recordList)) {
            results.Add(groups[record % 5], new List<int>());
        }
        results[groups[record % 5]].Add(record);
    }
}
The local debug results are as follows:
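The original screenshot is omitted here, but the shape of the output follows from the code: item i lands in groups[i % 5], and FIFO ordering within each group means every list prints in ascending order, along these lines:

Group group-0
Item=0  Item=5  Item=10 Item=15 ...
Group group-1
Item=1  Item=6  Item=11 Item=16 ...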
Knowledge Point Supplement:
Whether it is the Leader/Follower thread pool model used in dcm4chee described earlier, or the ThreadPoolQueue custom thread pool queue in fo-dicom described today, both are ways to improve efficiency. Nowadays, the emergence of multi-core, multi-processor, and even distributed clusters makes task scheduling especially important, so getting the various concepts straight is a prerequisite for thinking clearly about it. Macroscopic and microscopic are relative:
- Threads vs. processes: the thread is the narrower concept and the process the broader one; threads are microscopic, processes macroscopic. Each process must implement a thread scheduling algorithm internally.
- Processes vs. the operating system: the process is the narrower concept; the operating system must implement a scheduling algorithm among its processes.
- Single-core vs. multi-core: single-core is the narrower concept; a multi-core system needs an inter-core scheduling algorithm layered on top of each core's own scheduling.
- Single machine vs. cluster: the single machine is the narrower concept; a cluster must internally coordinate the state of each of its hosts.
Every link and level above involves a scheduling algorithm, and its essence is resolving resource contention and data synchronization. If two operations contend for no resource at all (one could even say they have no relationship), then no scheduling is needed. For example, two people at different companies can get paid at the same time without conflict; but if both go to the same bank counter to deposit money with the same teller, they have to wait in line.
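The bank-counter analogy maps directly onto a mutex: the counter is the shared resource, and queueing is exactly what a lock enforces. A trivial sketch (the names are made up for the example, assuming System and System.Threading):

private static readonly object counter = new object();   // the single bank counter

static void Deposit(string person) {
    lock (counter) {                       // only one depositor at the counter at a time
        Console.WriteLine("{0} is depositing...", person);
        Thread.Sleep(100);                 // the "service time" at the counter
    }
}

// Getting paid at different companies needs no lock: no shared resource, no scheduling.
// Depositing at the same counter contends, so the second caller waits in line:
static void Demo() {
    var t1 = new Thread(() => Deposit("A"));
    var t2 = new Thread(() => Deposit("B"));
    t1.Start(); t2.Start();
    t1.Join(); t2.Join();
}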
1. Thread:
- POSIX thread:
A single flow of control within a process. Each thread has its own thread ID, scheduling priority and policy, errno value, floating point environment, thread-specific key/value bindings, and the required system resources to support a flow of control. Anything whose address may be determined by a thread, including but not limited to static variables, storage obtained via malloc(), directly addressable storage obtained through implementation-defined functions, and automatic variables, is accessible to all threads in the same process.
- Thread in MSDN:
Operating systems use processes to separate the different applications that they are executing. Threads are the basic unit to which an operating system allocates processor time, and more than one thread can be executing code inside that process. Each thread maintains exception handlers, a scheduling priority, and a set of structures the system uses to save the thread context until it is scheduled. The thread context includes all the information the thread needs to seamlessly resume execution, including the thread's set of CPU registers and stack, in the address space of the thread's host process.
From the above two definitions, the thread is the minimum unit to which the operating system schedules and allocates CPU time slices; it represents a concrete flow of control (that is, an instruction execution sequence).
2. Process:
- POSIX Process:
The POSIX model treats a "process" as an aggregation of system resources, including one or more threads that may be scheduled by the operating system on the processor(s) it controls. Although a process has its own set of scheduling attributes, these have an indirect effect (if any) on the scheduling behavior of individual threads as described below.
- MSDN Process:
An application consists of one or more processes. A process, in the simplest terms, is an executing program. One or more threads run in the context of the process.
From the above two definitions, the process is the executing program we usually write, and it is the basic unit of the operating system's resource allocation.
3. Concurrency vs. Parallelism:
Concurrency and parallelism are related concepts, but there are small differences. Concurrency means that two or more tasks are making progress even though they might not be executing simultaneously. This can for example be realized with time slicing, where parts of tasks are executed sequentially and mixed with parts of other tasks. Parallelism on the other hand arises when the execution can be truly simultaneous.
"Excerpt from": Akka.NET:Terminology and concepts
- Concurrency:
- Parallelism:
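The original diagrams are omitted here; as a rough code analogy (illustrative only, assuming System.Threading and System.Threading.Tasks; Work is a made-up stand-in for any CPU-bound job):

static void Work(string name) {            // stand-in for any CPU-bound job
    Console.WriteLine("{0} on thread {1}", name, Thread.CurrentThread.ManagedThreadId);
}

// Concurrency: two tasks make progress; on a single-core machine they merely
// interleave, while on a multi-core machine they may also run in parallel.
var a = Task.Run(() => Work("A"));
var b = Task.Run(() => Work("B"));
Task.WaitAll(a, b);

// Parallelism: Parallel.For asks the runtime to spread iterations across cores,
// i.e. to turn concurrency into true parallelism where the hardware allows it.
Parallel.For(0, 10, i => Work(i.ToString()));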
Parallelism is easy to picture, whereas concurrent execution is not limited to the single pattern in the diagram above; after the concepts of "multi-core" and "multi-processor" are introduced below, the two will be compared again.
4. Multi-core vs. Multi-processor:
- Multi-core processor:
A multi-core processor is a single computing component with two or more independent actual processing units (called "cores"), which are the units that read and execute program instructions. The instructions are ordinary CPU instructions such as add, move data, and branch, but the multiple cores can run multiple instructions at the same time, increasing overall speed for programs amenable to parallel computing.
A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies to interconnect cores include bus, ring, two-dimensional mesh, and crossbar.
Excerpt from: Wikipedia: multi-core processor
- Multi-processor:
Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.).
In a multiprocessing system, all CPUs may be equal, or some may be reserved for special purposes. A combination of hardware and operating system software design considerations determines the symmetry (or lack thereof) in a given system.
Excerpt from: Wikipedia: multiprocessing
The Oracle post Concurrency vs. Parallelism, Concurrent Programming vs. Parallel Programming also discusses the concepts of concurrency and parallelism, and likewise involves multi-core (multicore) and multi-processor (multiprocessor) systems. The text mentions:
If concurrent threads are scheduled by the OS to run on one single-core non-SMT non-CMP processor, you may get concurrency but not parallelism. Parallelism is possible on multi-core, multi-processor, or distributed systems.
Concurrency is often referred to as a property of a program, and is a concept more general than parallelism.
Interestingly, we cannot say the same thing for concurrent programming and parallel programming. They are overlapped, but neither is the superset of the other. The difference comes from the sets of topics the two areas cover. For example, concurrent programming includes topics like signal handling, while parallel programming includes topics like memory consistency models. The difference reflects the different original hardware and software backgrounds of the two programming practices.
The above shows that the concepts of concurrency and parallelism overlap but neither contains the other, which is why the two are so often confused.
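As a small aside, .NET reports how many logical processors the OS exposes, which is what decides whether a construct like Parallel.For can deliver true parallelism rather than mere concurrency (a one-line check, for illustration):

// On a single-core machine Parallel.For is merely concurrent;
// with several logical processors its iterations can run in parallel.
System.Console.WriteLine("Logical processors: {0}", Environment.ProcessorCount);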
5. Load balancing:
With the advent of multi-core processors, multiprocessor machines, and distributed clusters (distributed systems), coordination among the parts (here mainly the overall allocation of tasks, as distinct from the scheduling algorithms for specific threads, processes, and time slices) has also become particularly important.
On SMP systems, it is important to keep the workload balanced among all processors to fully utilize the benefits of having more than one processor.
"Excerpt from": "Operating System Concepts, 9th Edition" 6th. 5.3 Subsection
6. Time Slice:
The period of time for which a process is allowed to run in a preemptive multitasking system is generally called the time slice, or quantum. The scheduler is run once every time slice to choose the next process to run. The length of each time slice can be critical to balancing system performance vs. process responsiveness: if the time slice is too short then the scheduler will consume too much processing time, but if the time slice is too long, processes will take longer to respond to input.
An interrupt is scheduled to allow the operating system kernel to switch between processes when their time slices expire, effectively allowing the processor's time to be shared between a number of tasks, giving the illusion that it is dealing with these tasks simultaneously, or concurrently. The operating system which controls such a design is called a multi-tasking system.
Excerpt from: Wikipedia: preemption (computing)
As this section shows, multi-threaded scheduling, multi-process scheduling, and even the coordination of distributed systems all ultimately depend on the timer interrupt (that is, the time slice); the hardware timer interrupt is the driving force behind all scheduling at the lowest level.
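As a toy model only (not how a real kernel works; assuming System, System.Threading, and System.Collections.Generic), the driving role of the quantum can be mimicked in a few lines: each task runs for a fixed slice, is "preempted", and goes to the back of the ready queue:

// Toy round-robin "scheduler": run each task for one quantum, then preempt it.
var readyQueue = new Queue<Action>(new Action[] {
    () => Console.Write("A"),
    () => Console.Write("B"),
    () => Console.Write("C"),
});
const int quantumMs = 10;                  // the "time slice"
for (int tick = 0; tick < 9; ++tick) {
    var task = readyQueue.Dequeue();       // pick the next ready task
    task();                                // let it run for its quantum
    Thread.Sleep(quantumMs);               // stand-in for the timer interrupt firing
    readyQueue.Enqueue(task);              // preempted: back of the ready queue
}
// Prints ABCABCABC: three tasks sharing one processor, concurrently but not in parallel.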
Sample source code:
- CSDN Resources Download
- GitHub Resources Download
Note: when downloading the GitHub sample code, it is best to clone the entire fo-dicom repository.
[Email protected]
Date: 2016-02-05
DICOM: DICOM Open Source Library Multithreading Analysis "ThreadPoolQueue in fo-dicom"