Thread Concepts and Multithreading Models
The purpose of introducing processes is to allow multiple programs to execute concurrently, improving resource utilization and system throughput. The purpose of introducing threads is to reduce the time and space overhead a program incurs during concurrent execution, and to further improve the operating system's concurrency.
The most intuitive way to understand a thread is as a "lightweight process". A thread is the basic unit of CPU execution and the smallest unit of the program-execution flow, consisting of a thread ID, a program counter, a register set, and a stack. A thread is an entity within a process and the basic unit that the system independently schedules and dispatches. A thread owns no system resources of its own (only the few resources that are essential to its execution), but it shares all the resources of its process with the other threads in that process. One thread can create and terminate another thread, and multiple threads within the same process can execute concurrently. Because threads constrain one another, a thread's execution is intermittent rather than continuous. Like processes, threads have three basic states: ready, blocked, and running.
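These ideas can be made concrete with a minimal Python sketch (the concepts are language-agnostic; the names and counts below are illustrative). Two threads are created in the same process, run concurrently, share the process's global data, and terminate:

```python
import threading

counter = 0                      # lives in the process's shared address space
lock = threading.Lock()          # coordinates the threads that share it

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # mutual exclusion on the shared resource
            counter += 1

# Two threads in the same process: each has its own ID and stack,
# but both see the same global `counter`.
t1 = threading.Thread(target=worker, args=(10_000,))
t2 = threading.Thread(target=worker, args=(10_000,))
t1.start(); t2.start()           # created -> ready/running
t1.join();  t2.join()            # wait until both terminate

print(counter)                   # 20000: updates from both threads are visible
```

Without the lock, the two threads would race on `counter += 1`; with it, both threads' updates to the shared process resource are preserved.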
After threads are introduced, the meaning of a process changes: the process serves only as the allocation unit for system resources other than the CPU, while the thread becomes the unit of processor scheduling.
1) Scheduling. In a traditional operating system, the process is the basic unit that both owns resources and is independently scheduled. After threads are introduced, the thread is the basic unit of independent scheduling, while the process remains the basic unit of resource ownership. Within the same process, switching between threads does not cause a process switch; switching between threads of different processes, such as from a thread in one process to a thread in another process, does cause a process switch.
2) Resource ownership. In both traditional and thread-aware operating systems, the process is the basic unit of resource ownership. A thread owns no system resources of its own (apart from a few essentials), but it can access the resources of the process it belongs to.
3) Concurrency. In an operating system that supports threads, not only can processes execute concurrently, but multiple threads within a process can as well, so the system has better concurrency and higher throughput.
4) System overhead. When creating or destroying a process, the system must allocate or reclaim resources such as memory space and I/O devices, so the overhead the operating system incurs is far greater than for creating or destroying a thread. Similarly, a process switch involves saving the CPU context of the running process and loading the context of the newly scheduled process, whereas a thread switch only needs to save and restore a small amount of register content, so the overhead is much smaller. In addition, because the threads of a process share its address space, synchronization and communication between those threads are easy to achieve, often without any kernel involvement.
5) Address space and other resources (such as open files). The address spaces of different processes are independent of one another; the threads of the same process share that process's resources, and the threads of one process are invisible to other processes.
6) Communication. Inter-process communication (IPC) requires synchronization and mutual-exclusion mechanisms to guarantee data consistency, whereas threads can communicate simply by reading and writing their process's data segment (such as global variables).
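The following sketch shows this directly (names are illustrative): two threads communicate through an ordinary shared variable, needing only a synchronization primitive, with no pipes, sockets, or other kernel IPC channels:

```python
import threading

shared = {}                       # ordinary process data: both threads see it
ready = threading.Event()         # synchronization is needed, but no IPC channel

def producer():
    shared["answer"] = 42         # plain write to the shared data segment
    ready.set()                   # signal the consumer

def consumer(out):
    ready.wait()                  # block until the producer has written
    out.append(shared["answer"])  # plain read: no pipes, sockets, or messages

result = []
threading.Thread(target=producer).start()
c = threading.Thread(target=consumer, args=(result,))
c.start()
c.join()

print(result)                     # [42]
```

The `Event` plays the role of the mutual-exclusion/synchronization mechanism; the data itself moves through shared memory for free.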
In a multithreaded operating system, the thread is the basic unit of independent execution (and scheduling); the process is no longer itself a schedulable entity. However, a process still has a state associated with execution: saying a process is "running" in practice means that one of its threads is executing. The main properties of threads are as follows:
- A thread is a lightweight entity that owns no system resources, but each thread has a unique identifier and a thread control block (TCB), which records the registers, stack, and other context needed for the thread's execution.
- Different threads can execute the same program; that is, when the same service routine is invoked by different users, the operating system creates a separate thread for each of them.
- All threads in the same process share the resources owned by that process.
- A thread is the independent scheduling unit of the processor, and multiple threads can execute concurrently. In a single-CPU system, threads take turns occupying the CPU; in a multi-CPU system, threads can occupy different CPUs simultaneously. If each CPU executes a different thread of the same process at the same time, the process's execution time is shortened.
- When a thread is created, its life cycle begins and continues until it terminates; over its life cycle the thread passes through various state changes such as blocked, ready, and running.
Thread implementations fall into two categories: user-level threads (User-Level Thread, ULT) and kernel-level threads (Kernel-Level Thread, KLT). Kernel-level threads are also known as kernel-supported threads.
With user-level threads, all work related to thread management is done by the application, and the kernel is unaware that threads exist. An application can be designed as a multithreaded program by using a thread library. Typically, an application starts out single-threaded and begins running in that thread; at any point during execution it can create a new thread running in the same process by calling a routine in the thread library. Figure 2-2(a) illustrates the user-level thread implementation.
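The essence of user-level threading can be sketched with coroutines: a toy round-robin scheduler that runs entirely in user space, where the kernel sees only a single thread of execution. This is an illustrative sketch, not a real thread library; the names are invented:

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over generator-based 'user-level threads'.

    All switching happens here, in application code; the kernel
    never knows these tasks exist.
    """
    run_queue = deque(tasks)
    trace = []
    while run_queue:
        task = run_queue.popleft()       # pick the next ready 'thread'
        try:
            trace.append(next(task))     # run it until it yields (switch point)
            run_queue.append(task)       # still alive: back onto the ready queue
        except StopIteration:
            pass                         # 'thread' terminated
    return trace

def user_thread(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"              # voluntary yield = user-level switch

trace = scheduler([user_thread("A", 2), user_thread("B", 2)])
print(trace)                             # ['A:0', 'B:0', 'A:1', 'B:1']
```

Note the defining property: switches happen only at voluntary yield points, which is also why a single blocking call in such a scheme stalls every "thread" in the process (see the many-to-one model below).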
With kernel-level threads, all thread-management work is done by the kernel. The application contains no thread-management code of its own; it simply uses the kernel-level thread programming interface. The kernel maintains context information for the process and each of its threads, and scheduling is done by the kernel on a per-thread basis. Figure 2-2(b) illustrates the kernel-level thread implementation.
Some systems use a combined approach. Thread creation is done entirely in user space, as are thread scheduling and synchronization. The multiple user-level threads of an application are mapped onto some number of kernel-level threads (less than or equal to the number of user-level threads). Figure 2-2(c) illustrates the combined user-level/kernel-level implementation.
[Image: http://c.biancheng.net/cpp/uploads/allimg/140629/1-1406291220161Z.jpg]
Figure 2-2 User-level and kernel-level threads
Some systems support both user threads and kernel threads and, depending on how user-level threads are connected to kernel-level threads, implement one of several multithreading models.

1) Many-to-one model
Multiple user-level threads are mapped onto a single kernel-level thread, and thread management is done in user space.
In this model, user-level threads are invisible (i.e., transparent) to the operating system.
Advantage: thread management takes place in user space, so it is relatively efficient.
Disadvantage: if one thread blocks while using a kernel service, the entire process blocks; and since only one kernel thread backs the whole process, its threads cannot run in parallel on a multiprocessor.
2) One-to-one model
Each user-level thread is mapped onto its own kernel-level thread.
Advantage: when one thread blocks, another thread can continue to execute, so concurrency is good.
Disadvantage: every creation of a user-level thread requires creating a corresponding kernel-level thread, and the overhead of creating kernel threads can affect the application's performance.
3) Many-to-many model
n user-level threads are mapped onto m kernel-level threads, where m ≤ n.
Feature: this is a compromise between the many-to-one and one-to-one models. It overcomes the many-to-one model's poor concurrency, and it also avoids the one-to-one model's drawback of a single user process consuming too many kernel-level threads at too great a cost. It combines the respective strengths of both models.
This article is from the "11999725" blog; please retain this source: http://12009725.blog.51cto.com/11999725/1843717