http://blog.csdn.net/gatieme/article/details/51892437
Before this, I was always confused by the three related concepts of kernel thread, lightweight process, and user thread (a thread is not the same thing as a lightweight process), and the thread implementation models behind them stayed fuzzy, so this time I spent a little effort to sort them out once and for all.
1 Three ways to implement threads
In a traditional operating system, the process is the basic unit that both owns resources and is independently scheduled. In an operating system that introduces threads, the thread becomes the basic unit of independent scheduling, while the process remains the basic unit of resource ownership. Within the same process, switching between threads does not cause a process switch; switching between threads that belong to different processes, for example from a thread in one process to a thread in another process, does cause a process switch.
Depending on whether the operating system kernel is aware of threads, threads can be divided into kernel threads and user threads.
| Name | Description |
| --- | --- |
| User-level thread (ULT) | Threads implemented by the application (through a thread library); the kernel is unaware that user-level threads exist |
| Kernel-level thread (KLT) | Also known as kernel-supported threads |
Different multithreaded operating systems implement threads differently: some systems implement user-level threads, while others implement kernel-level threads.
In some cases a kernel-level thread is also called a lightweight process (LWP), but this is not a precise description. The term LWP was in fact borrowed from the SVR4/MP and Solaris 2.x systems, and some systems call the LWP a virtual processor. It is probably called a lightweight process because, with the support of a kernel thread, an LWP is an independent scheduling unit, just like an ordinary process. So the key characteristic of an LWP is that each LWP is backed by a kernel thread.
2 User-level threads (many-to-one model)
2.1 Implementation of user-level threads
With user-level threads, all the work of thread management is done by the application, and the kernel is unaware that the threads exist. An application can be designed as a multi-threaded program by using a thread library. Typically, an application starts with a single thread and begins running in it; at any point during its run, it can create a new thread running within the same process by calling a spawn routine in the thread library.
User-level threads exist only in user space; creating, destroying, synchronizing, and communicating between them requires no system calls. The user process controls its user threads through a thread library. Because the rules for switching between threads inside a process are far simpler than the rules for scheduling and switching processes, and no switch between user mode and kernel mode is needed, thread switching is fast. However, since processor time slices are still allocated with the process as the basic unit, the execution time each thread gets is correspondingly reduced. To add thread support to such an operating system, threads are implemented by adding a run-time library in user space; these run-time libraries are called "thread packages", and user threads are not perceived by the operating system. User-level threads can be found in a number of historic operating systems, such as early UNIX systems.
User-level threads reside in user space (user mode). The run-time library, which also lives in user space, manages these threads; they are invisible to the operating system and therefore cannot be dispatched to a processor core by it. Each thread does not have its own kernel thread context. Consequently, even when threads are "executing simultaneously", at any given moment only one thread of the process can actually be running, and only one processor core is ever assigned to that process. A process may have thousands of user-level threads, yet they consume no extra kernel resources; the run-time library creates, manages, and schedules all of them.
(Figure: user-level thread implementation.) The library scheduler selects one thread from the many threads in the process, and that thread is then associated with the single kernel thread the process is allowed to use. That kernel thread is assigned to a processor core by the operating system scheduler. User-level threads are therefore a "many-to-one" thread mapping.
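To make the many-to-one picture concrete, below is a minimal sketch of such a "thread package" built on the POSIX <ucontext.h> routines. It is illustrative only and not from the original article: names such as ult_create and ult_yield are invented, scheduling is purely cooperative, and the kernel sees nothing but one ordinary single-threaded process.

```c
/* A toy many-to-one ("user-level") thread package: the kernel is unaware of
 * these threads; a library scheduler in main() decides which one runs. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define MAX_ULT    8
#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx;            /* context of the library scheduler       */
static ucontext_t ult_ctx[MAX_ULT];    /* per-thread contexts (the thread table) */
static int        done[MAX_ULT];       /* 1 once a thread has finished           */
static int        ult_count = 0;
static int        current   = -1;      /* index of the thread running right now  */

/* Create a user-level thread: note that no system call is involved. */
static int ult_create(void (*fn)(void))
{
    ucontext_t *ctx = &ult_ctx[ult_count];
    getcontext(ctx);
    ctx->uc_stack.ss_sp   = malloc(STACK_SIZE);
    ctx->uc_stack.ss_size = STACK_SIZE;
    ctx->uc_link          = &main_ctx;  /* return to the scheduler when fn ends */
    makecontext(ctx, fn, 0);
    return ult_count++;
}

/* Cooperative yield: save our own context and jump back to the scheduler. */
static void ult_yield(void)
{
    swapcontext(&ult_ctx[current], &main_ctx);
}

static void worker(void)
{
    for (int i = 0; i < 3; i++) {
        printf("user thread %d, step %d\n", current, i);
        ult_yield();                    /* voluntarily give up the CPU */
    }
    done[current] = 1;
}

int main(void)
{
    ult_create(worker);
    ult_create(worker);

    /* The "library scheduler": round-robin over the thread table until every
     * user-level thread has finished. */
    int running = ult_count;
    while (running > 0) {
        for (current = 0; current < ult_count; current++) {
            if (done[current])
                continue;
            swapcontext(&main_ctx, &ult_ctx[current]);
            if (done[current])
                running--;
        }
    }
    return 0;
}
```

Because everything happens in user space, switching threads here is just a local swapcontext() call; but if any worker made a blocking system call, the whole process, and with it every user-level thread, would stop, which is exactly the drawback discussed in section 2.4.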
2.2 Features of user-level threading
The kernel knows nothing about the thread package. From the kernel's point of view, it is managing an ordinary, single-threaded process (which happens to contain a run-time system).
2.3 Benefits of user-level threading
The main advantages of user-level threads are:
- They can be implemented on an operating system that does not support threads.
- The cost of thread management operations, such as creating, destroying, and switching threads, is much lower than for kernel threads, because saving the thread state and invoking the scheduler are only local procedure calls.
- Each process can use its own custom scheduling algorithm, so thread management is more flexible. (This differs from kernel threads; the price is that the application must supply its own scheduler.)
- The thread-table space and stack space available to user-level threads can be larger than for kernel-level threads.
- No trap into the kernel, no context switch, and no flushing of the memory cache are needed, so thread switching is very fast.
- Thread scheduling does not require the kernel to participate directly, and control is simple.
2.4 Disadvantages of user-level threads
The main disadvantages of user-level threads are:
- When a thread makes a blocking system call (for example, blocking I/O), the kernel, which does not know the process is multi-threaded, blocks the entire process, so all of its threads stop, even though other threads could still run; in any case only one thread of a process can ever be running.
- A page fault causes a similar problem: the whole process is suspended.
- Within a single process there are no clock interrupts, so threads cannot be scheduled preemptively in a round-robin manner.
- Processor resources are scheduled per process, so on a multiprocessor machine the threads of one process can only time-share a single processor.
Additional note
With user-level threads, the thread table in each process is managed by the run-time system. When a thread transitions to the ready or blocked state, the information needed to restart it is stored in that thread table, exactly the same kind of information the kernel keeps in its process table.
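As an illustration (field names assumed, not taken from the article), an entry in such a user-space thread table might look roughly like this in C:

```c
/* One entry of the thread table kept by the run-time library: essentially the
 * same state the kernel keeps in its process table, but held in user space. */
#include <stddef.h>
#include <ucontext.h>

enum ult_state { ULT_READY, ULT_RUNNING, ULT_BLOCKED, ULT_FINISHED };

struct ult_tcb {
    int            id;          /* thread identifier inside the process    */
    enum ult_state state;       /* ready / running / blocked / finished    */
    void          *stack;       /* base of the thread's private stack      */
    size_t         stack_size;
    ucontext_t     context;     /* saved registers and program counter     */
    int            priority;    /* consulted by the library scheduler      */
};
```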
3 Kernel-level threads (one-to-one model)
3.1 Implementation of kernel-level threads
With kernel-level threads, thread creation and destruction are carried out by the operating system through system calls. Threads run with the support of the kernel: whether they belong to a user process or to a system process, their creation, destruction, and switching all depend on the kernel.
All the work of thread management is done by the kernel; the application has no thread-management code of its own, only a programming interface to the kernel-level threads. The kernel maintains context information for the process and for every thread within it, and scheduling is done by the kernel on a thread basis. Figure 2-2 (b) illustrates how kernel-level threads are implemented.
Kernel threads reside in kernel space; they are kernel objects. With kernel threads, each user thread is mapped to (bound to) a kernel thread, and the user thread stays bound to that kernel thread for its whole lifetime. Once the user thread terminates, both threads leave the system. This is called a "one-to-one" thread mapping.
Thread creation, destruction, and switching are all carried out directly by the kernel; in other words, the kernel knows about every thread and treats it as a schedulable entity.
These threads can compete for resources across the system
A thread control block (TCB) is set up in kernel space for each kernel-supported thread; the kernel is aware of the thread's existence and controls it through this control block.
(Figure: kernel-level thread implementation.) This is how kernel-level threads are implemented: each user thread is directly associated with a kernel thread.
The operating system scheduler manages and dispatches these threads. The run-time library requests one kernel-level thread for each user-level thread. The operating system's memory-management and scheduling subsystems must therefore take the potentially large number of user-level threads into account and must know the maximum number of threads allowed per process. The operating system creates a context for each thread, and each thread of a process can be assigned to a processor core whenever resources are available.
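As an illustration of the one-to-one model in practice: on Linux, the NPTL implementation of POSIX threads backs every pthread with its own kernel thread. The sketch below (assumed example, compile with -pthread) makes this visible by printing the kernel thread ID via the gettid system call.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *worker(void *arg)
{
    /* getpid() is the same for every thread of the process, but the kernel
     * thread ID differs: each user thread is backed by its own kernel thread. */
    printf("worker %s: pid=%ld tid=%ld\n",
           (const char *)arg, (long)getpid(), (long)syscall(SYS_gettid));
    sleep(1);           /* blocking here does not block the other worker */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "A");
    pthread_create(&t2, NULL, worker, "B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```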
3.2 Features of kernel threads
When a thread wants to create a new thread or destroy an existing thread, it makes a system call.
3.3 Advantages and disadvantages of kernel threads
Advantages and disadvantages of kernel-level threads:
- In a multiprocessor system, the kernel can execute multiple threads of the same process in parallel.
- If one thread in a process blocks, the kernel can switch to another thread of the same process and keep it running (something user-level threads cannot do; it is one of their main drawbacks).
- On the other hand, every call that might block a thread has to be implemented as a system call, at considerable cost.
- When a thread blocks, the kernel may, at its own discretion, run a thread from the same process or from a different one; with threads implemented in user space, the run-time system always keeps running threads from its own process.
- Signals are delivered to a process, not to a thread, so when a signal arrives, which thread should handle it? Threads can "register" the signals they are interested in (see the sketch below).
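One common POSIX answer to the signal question above is to block the signal in every thread and let a single dedicated thread "register interest" in it by collecting it with sigwait(). A minimal sketch, assuming a Linux/POSIX system and compiling with -pthread (names are illustrative):

```c
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

static sigset_t set;

static void *sig_waiter(void *arg)
{
    int sig;
    sigwait(&set, &sig);             /* only this thread handles the signal */
    printf("received signal %d\n", sig);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    /* Block SIGINT in the main thread; threads created later inherit the mask. */
    pthread_sigmask(SIG_BLOCK, &set, NULL);

    pthread_create(&tid, NULL, sig_waiter, NULL);
    pthread_join(tid, NULL);         /* press Ctrl-C to deliver SIGINT */
    return 0;
}
```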
4 The combined model
Some systems implement multithreading in a combined way: thread creation is done entirely in user space, and thread scheduling and synchronization are also handled by the application, but the application's many user-level threads are mapped onto some number of kernel-level threads (no more than the number of user-level threads).
(Figure: combined user-level and kernel-level implementation.) In this model, each kernel-level thread has a set of user-level threads that take turns running on it.
POSIX thread scheduling is a hybrid model, flexible enough to support both user-level and kernel-level threads in a standard-conforming implementation. The model consists of two levels of scheduling: the thread level and the kernel-entity level. The thread level is similar to user-level threads, while kernel entities are scheduled by the kernel. The thread library decides how many kernel entities it needs and how they are mapped.
POSIX introduces the concept of thread-scheduling contention scope, which gives the programmer some control over how kernel entities are mapped to threads. A thread's contention-scope attribute is either PTHREAD_SCOPE_PROCESS or PTHREAD_SCOPE_SYSTEM. A thread with the PTHREAD_SCOPE_PROCESS attribute competes for processor resources with the other threads of the process it belongs to. A thread with the PTHREAD_SCOPE_SYSTEM attribute behaves much like a kernel-level thread and competes for processor resources system-wide. One POSIX mapping method binds PTHREAD_SCOPE_SYSTEM threads to kernel entities.
To create a kernel-level (bound) thread, set the thread attribute PTHREAD_SCOPE_SYSTEM before calling pthread_create; the code looks like this:
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);  /* kernel-level thread, for faster response */
ret = pthread_create(&iAcceptThreadId, &attr, AcceptThread, NULL);  /* create the thread */
The POSIX standard defines two values: PTHREAD_SCOPE_SYSTEM and PTHREAD_SCOPE_PROCESS. The former means the thread competes for CPU time with all threads in the system; the latter means it competes for the CPU only with threads in the same process.
The default is PTHREAD_SCOPE_PROCESS. Currently, LinuxThreads implements only the PTHREAD_SCOPE_SYSTEM value.
Thread binding involves another concept: the light-weight process (LWP: Light Weight Process). A light-weight process can be understood as a kernel thread; it sits between the user layer and the system layer. The system allocates thread resources and controls threads through light-weight processes, and one light-weight process can control one or more threads. By default, how many light-weight processes are started and which light-weight processes control which threads is decided by the system; this is called the unbound state. In the bound state, as the name implies, a thread is fixed ("bound") to one light-weight process. A bound thread has a higher response speed, because CPU time slices are scheduled to the light-weight process, and a bound thread is guaranteed to always have a light-weight process available when it needs one. By setting the priority and scheduling class of the light-weight process it is bound to, a bound thread can satisfy requirements such as real-time response.
The function that sets a thread's binding state is pthread_attr_setscope. It takes two parameters: the first is a pointer to the attribute structure, and the second is the binding type, which has two values: PTHREAD_SCOPE_SYSTEM (bound) and PTHREAD_SCOPE_PROCESS (unbound).
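Putting the pieces together, here is a small self-contained sketch (an illustration, not code from the article; the name AcceptThread follows the earlier snippet) that requests the bound, system contention scope and checks the result. On Linux, requesting PTHREAD_SCOPE_PROCESS instead would fail with ENOTSUP, since only PTHREAD_SCOPE_SYSTEM is implemented, as noted above.

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *AcceptThread(void *arg)     /* placeholder worker */
{
    puts("running as a bound (system-scope) thread");
    return NULL;
}

int main(void)
{
    pthread_t      tid;
    pthread_attr_t attr;
    int            ret;

    pthread_attr_init(&attr);

    /* Bound thread: compete for the CPU with every thread in the system. */
    ret = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    if (ret != 0)
        fprintf(stderr, "pthread_attr_setscope: %s\n", strerror(ret));

    ret = pthread_create(&tid, &attr, AcceptThread, NULL);
    if (ret == 0)
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}
```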
5 The difference between a user-level thread and a kernel-level thread
- Kernel-supported threads are visible to the OS kernel, whereas user-level threads are not.
- Creating, destroying, and scheduling user-level threads needs no support from the OS kernel; it is handled at the level of a language runtime or library (for example, Java). Creating, destroying, and scheduling kernel-supported threads requires the OS kernel to provide support, in much the same way as it does for processes.
- When a user-level thread executes a system call instruction, its whole owning process is blocked; when a kernel-supported thread executes a system call instruction, only that thread is blocked.
- In a system with only user-level threads, the CPU is scheduled to a process in the running state, and the user program itself controls which of its threads runs in turn; in a system with kernel-supported threads, the CPU is scheduled among threads, and the OS thread scheduler is responsible for scheduling them.
- The program entity of a user-level thread is a program that runs in user mode, while the program entity of a kernel-supported thread is a program that can run in either user mode or kernel mode.