Process Management
Basic Concepts: Sequential Execution of Programs and Its Characteristics
Sequential execution of a program: a subsequent operation (program segment) can be performed only after the current operation has finished executing.
Characteristics of sequential execution: it is sequential, closed (self-contained), and reproducible.
Precedence Graphs
A precedence graph is a directed acyclic graph, written DAG (Directed Acyclic Graph), used to describe the order of execution among processes. Each node in the graph can describe a program segment or a process, or even a single statement. A directed edge between two nodes represents the partial order, or precedence relation, "→", that holds between them.
→ = {(Pi, Pj) | Pi must complete before Pj may start}. If (Pi, Pj) ∈ →, we write Pi → Pj and say that Pi is a direct predecessor of Pj and that Pj is a direct successor of Pi. In a precedence graph, a node with no predecessor is called an initial node, and a node with no successor is called a final (terminal) node.
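As an illustrative sketch (the node names and edges below are hypothetical), a precedence graph can be stored as an adjacency list, and any execution order that respects "→" is a topological order of the DAG:

```python
from collections import deque

# A hypothetical precedence graph: P1 -> P2, P1 -> P3, P2 -> P4, P3 -> P4.
edges = {"P1": ["P2", "P3"], "P2": ["P4"], "P3": ["P4"], "P4": []}

def topological_order(graph):
    """Return one execution order consistent with the precedence relation."""
    indegree = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indegree[s] += 1
    # Initial nodes are exactly those with no direct predecessor.
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for s in graph[n]:
            indegree[s] -= 1
            if indegree[s] == 0:
                queue.append(s)
    if len(order) != len(graph):  # a leftover node means a cycle: not a DAG
        raise ValueError("not a DAG")
    return order

print(topological_order(edges))
```

Here P1 is the initial node and P4 the final node; every valid order starts with P1 and ends with P4.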
Concurrent execution of programs and their characteristics
- Concurrent execution of programs
Characteristics of program concurrency execution
Intermittence
Loss of closure
Irreproducibility
Processes: Characteristics, Definition, and States
Characteristics of a process:
Structure
Dynamic nature
Concurrency
Independence
Asynchrony
Some typical definitions of a process:
- A process is a single execution of a program.
- A process is the activity that occurs when a program and its data execute sequentially on a processor.
- A process is the running activity of a program over a data set, and it is an independent unit of resource allocation and scheduling in the system.
Three basic states of the process
Ready state
Execution state
Blocked state
Suspended state
Causes of the suspended state: an end-user request, a parent-process request, the need for load regulation, or the needs of the operating system.
Transition of process state
Active ready → Suspended ready
Active blocked → Suspended blocked
Suspended ready → Active ready
Suspended blocked → Active blocked
Process state diagram with pending status
Process Control block (PCB)
The role of the process control block: the PCB turns a program (with its data), which cannot run independently in a multiprogramming environment, into a basic unit that can run independently, that is, a process that can execute concurrently with other processes.
Information in the Process control block
Process identifier: The process identifier is used to uniquely identify a process. A process typically has two types of identifiers:
Internal identifier: every operating system gives each process a unique numeric identifier, usually the ordinal number of the process. Internal identifiers are set mainly for the system's convenience.
External identifier: provided by the creator, usually consisting of letters and digits, and often used by users (and processes) when accessing the process. To describe the family relationships of a process, parent-process and child-process identifiers should also be set. In addition, a user identifier can be set to indicate the user who owns the process.
Processor Status: Processor status information is mainly composed of the contents of the various registers of the processor.
Scheduling information: the PCB also contains information related to process scheduling and process swapping, including:
Process state, indicating the current state of the process as a basis for process scheduling and swapping
Process priority, an integer describing the priority level of the process for using the processor; high-priority processes should be given the processor first.
Other information required for process scheduling, which is related to the process scheduling algorithm used.
Event: the event the process is waiting for when it moves from the execution state to the blocked state, that is, the cause of blocking.
Process Control Information
The addresses of the program and data: the memory or external-storage addresses of the process's program and data, so that when the process executes, its program and data can be found from the PCB.
Process synchronization and communication mechanisms: the data required to implement process synchronization and process communication, such as message-queue pointers and semaphores, which may be placed wholly or partly in the PCB.
The resource list: a list of all resources, other than the CPU, that the process needs, together with the resources already allocated to it.
A link pointer, which gives the start address of the PCB of the next process in the queue in which this process (PCB) resides.
How process Control blocks are organized
Linked-list queues of PCBs:
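A linked-list ready queue of PCBs can be sketched as follows (class and field names are illustrative; a real PCB carries far more information, as listed above):

```python
class PCB:
    """A minimal process control block: identifier, state, link pointer."""
    def __init__(self, pid, state="ready"):
        self.pid = pid
        self.state = state
        self.next = None  # link pointer to the next PCB in the same queue

class PCBQueue:
    """A linked-list queue of PCBs, as used for the ready queue."""
    def __init__(self):
        self.head = None
        self.tail = None

    def enqueue(self, pcb):
        if self.tail is None:
            self.head = self.tail = pcb
        else:
            self.tail.next = pcb   # chain via the link pointer
            self.tail = pcb

    def dequeue(self):
        pcb = self.head
        if pcb is not None:
            self.head = pcb.next
            if self.head is None:
                self.tail = None
            pcb.next = None
        return pcb

ready = PCBQueue()
for pid in (3, 1, 4):
    ready.enqueue(PCB(pid))
first = ready.dequeue()   # PCBs leave the queue in FIFO order
```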
Process Control: The Process Graph
Events that cause process creation
- User Login
- Job scheduling
- Provision of services
- App Request
Creation of processes
- Request Blank PCB
- Assigning resources to a new process
- Initializing the Process Control block
- Insert the new process into the ready queue: if the ready queue can accept the new process, insert it into the ready queue.
Process Termination: Events That Cause a Process to Terminate
- Normal termination: in any computer system there should be an indicator that the process has finished running.
- Abnormal termination: the process is forced to terminate because of errors or faults during its run, for example:
- Out-of-bounds error: the program accessed a storage area outside the region allocated to the process;
- Protection error: the process tried to access a resource or file it is not allowed to access, or accessed it in an improper way, for example, a process attempting to write to a read-only file;
- Illegal instruction: the program tried to execute a nonexistent instruction, possibly because it mistakenly jumped into the data area and treated data as instructions;
- Privileged-instruction error: a user process tried to execute an instruction that only the OS is allowed to execute;
- Run timeout;
- Wait timeout;
- Arithmetic error: the process tried to perform a forbidden operation, for example, dividing by 0;
- I/O failure: an error occurred during I/O.
- External intervention: the process terminates not because of an exception during its run, but at the request of the outside world, for example:
- operator or operating system intervention
- Parent Process Request
- Parent process Termination
The procedure for terminating a process
- Based on the identifier of the process to be terminated, find its PCB in the PCB collection and read out the process's state.
- If the terminated process is in the execution state, stop its execution immediately and set the rescheduling flag to true, indicating that the processor should be re-dispatched after the termination.
- If the process has descendant processes, terminate all of them as well, to prevent them from becoming uncontrollable processes.
- Return all resources owned by the process either to its parent process or to the system.
- Remove the terminated process (its PCB) from its queue (or list) and wait for other programs to collect its information.
Process Blocking and Wakeup: Events That Cause Blocking and Wakeup
- Requesting system Services
- Start an action
- New data not yet arrived
- No work to do
The process-blocking procedure
When an executing process encounters one of the above events and cannot continue, it blocks itself by calling the block primitive; blocking is thus an active action of the process itself. On entering the block procedure, since the process is still in the execution state, it first stops executing, changes the current state in its PCB from "executing" to "blocked", and inserts the PCB into a blocking queue. If the system maintains several blocking queues for processes blocked on different events, the process is inserted into the blocking (waiting) queue for the corresponding event. Finally, the scheduler is invoked to re-dispatch: the processor is assigned to another ready process and a context switch is performed, that is, the processor state of the blocked process is saved (in its PCB) and the CPU environment is set up according to the PCB of the newly scheduled process.
The process-wakeup procedure
When an event a blocked process is waiting for occurs, for example its I/O completes or the data it expects arrives, the wakeup primitive wakeup() is invoked by a related process (for example, the process that used and then released the I/O device), and the process waiting for that event is awakened.
The wakeup primitive first removes the blocked process from the blocking queue in which it waits for the event, changes the current state in its PCB from blocked to ready, and then inserts the PCB into the ready queue.
Process Suspension and Activation
- Process suspension: when an event that causes suspension occurs, for example a user-process or parent-process request to suspend a specified process, the system invokes the suspend primitive suspend() to suspend it. > The suspend primitive first checks the state of the process to be suspended: if it is active ready, it is changed to suspended ready; if it is active blocked, it is changed to suspended blocked. If the suspended process was executing, the scheduler is then invoked to re-dispatch the processor.
- Process activation: when an event that activates a process occurs, for example a parent process or a user process requests the activation of a specified process, and the memory space the process needs is sufficient, a process in the suspended-ready state on external storage can be swapped back into memory. The system then activates the specified process with the activation primitive active(). > The activation primitive first swaps the process from external storage into memory and checks its current state: if suspended ready, it becomes active ready; if suspended blocked, it becomes active blocked. If a preemptive scheduling policy is used, whenever a new process enters the ready queue the system checks whether to reschedule: the scheduler compares the priority of the activated process with that of the currently running process; if the activated process has the lower priority, no rescheduling is needed; otherwise, the current process is preempted immediately and the processor is assigned to the newly activated process.
Process synchronization
The primary task of process synchronization is to coordinate the execution order of multiple related processes so that the concurrent execution processes can effectively share resources and cooperate with each other, thus making the execution of the program reproducible.
Basic concepts of Process synchronization
- Two forms of mutual restriction between processes:
- Indirect mutual restriction, which arises from resource sharing.
- Direct mutual restriction, which arises from inter-process cooperation.
Critical resources (critical resource): many hardware resources, such as printers and tape drives, are critical resources; processes must access them mutually exclusively in order to share them.
Producer-consumer (producer-consumer) problem: a group of producer processes produce products and supply them to consumer processes. So that producers and consumers can execute concurrently, a buffer pool of n buffers is placed between them; a producer puts each product it produces into a buffer, and a consumer takes a product from a buffer to consume. Although all producer and consumer processes run asynchronously, they must be kept synchronized: a consumer must not fetch a product from an empty buffer, and a producer must not put a product into a buffer that is already full and whose product has not yet been removed.
We can use an array to represent the buffer pool of n buffers (numbered 0, 1, …, n-1). The input pointer in indicates the next buffer into which a product can be put; each time a producer deposits a product, in is advanced by 1. The output pointer out indicates the next buffer from which a product can be taken; each time a consumer removes a product, out is advanced by 1. Since the buffer pool is organized as a circular buffer, advancing the pointers is expressed as in := (in+1) mod n and out := (out+1) mod n. When (in+1) mod n = out, the buffer pool is full; when in = out, it is empty. In addition, an integer variable counter is introduced with initial value 0: each time a producer puts a product into the pool, counter is increased by 1; conversely, each time a consumer takes a product away, counter is decreased by 1. The producer and consumer processes share the following variables:
var n: integer;
type item = …;
var buffer: array[0, 1, …, n-1] of item;
    in, out: 0, 1, …, n-1;
    counter: 0, 1, …, n;
Pointers in and out are initialized to 0. In the descriptions of the producer and consumer processes below, no-op is an empty operation, and "while condition do no-op" repeatedly tests the condition until it becomes false. The producer uses a local variable nextp to hold each newly produced product, and the consumer uses a local variable nextc to hold each product about to be consumed.

producer: repeat
    …
    produce an item in nextp;
    …
    while counter = n do no-op;
    buffer[in] := nextp;
    in := (in + 1) mod n;
    counter := counter + 1;
  until false;

consumer: repeat
    while counter = 0 do no-op;
    nextc := buffer[out];
    out := (out + 1) mod n;
    counter := counter - 1;
    consume the item in nextc;
  until false;
Although the producer and consumer programs above are each correct when viewed separately, and produce correct results when executed sequentially, the problem is that the two processes share the variable counter when they execute concurrently. The producer performs a +1 operation on it and the consumer a -1 operation, and in machine language these operations are typically implemented as:
register1 := counter;            register2 := counter;
register1 := register1 + 1;      register2 := register2 - 1;
counter := register1;            counter := register2;
Suppose the current value of counter is 5. If the producer process executes its three machine instructions (left column) before the consumer process executes its three (right column), the final value of counter is 5; likewise, if the consumer runs first and then the producer, counter is again 5. If, however, they execute in the following interleaved order:

register1 := counter;          (register1 = 5)
register1 := register1 + 1;    (register1 = 6)
register2 := counter;          (register2 = 5)
register2 := register2 - 1;    (register2 = 4)
counter := register1;          (counter = 6)
counter := register2;          (counter = 4)
then the correct value of counter should be 5, but it is now 4. To prevent this error, the key is to treat the variable counter as a critical resource, that is, to make the producer and consumer processes access counter mutually exclusively.
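The interleaving argument can be replayed deterministically. The sketch below (the function name and the schedule strings are mine) executes the six register-level steps in a chosen order and reports the final value of counter:

```python
def interleave(order):
    """Replay the register-level steps of 'counter := counter + 1'
    (producer, steps 'p') and 'counter := counter - 1' (consumer,
    steps 'c') in the given order; return the final counter."""
    state = {"counter": 5, "r1": None, "r2": None}
    producer = [
        lambda s: s.update(r1=s["counter"]),        # register1 := counter
        lambda s: s.update(r1=s["r1"] + 1),         # register1 := register1 + 1
        lambda s: s.update(counter=s["r1"]),        # counter := register1
    ]
    consumer = [
        lambda s: s.update(r2=s["counter"]),        # register2 := counter
        lambda s: s.update(r2=s["r2"] - 1),         # register2 := register2 - 1
        lambda s: s.update(counter=s["r2"]),        # counter := register2
    ]
    steps = {"p": iter(producer), "c": iter(consumer)}
    for who in order:
        next(steps[who])(state)
    return state["counter"]

print(interleave("pppccc"))   # producer first, then consumer: 5
print(interleave("ppccpc"))   # the interleaving from the text: 4
```

Running either process to completion before the other gives 5; the interleaved schedule loses an update and gives 4, exactly the lost-update error described above.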
Critical sections (critical section): a cyclic process that accesses a critical resource can be described as:

repeat
    entry section
    critical section;
    exit section
    remainder section;
until false;
Rules that the synchronization mechanism should follow
- Idle, let in: when the critical section is free, a requesting process may enter at once.
- Busy, then wait: when some process is already in the critical section, other requesting processes must wait.
- Bounded waiting: a process requesting entry should be able to enter within a finite time.
- Yield while waiting: a process that cannot enter should release the processor rather than busy-wait.
Semaphore Mechanism: Integer Semaphores
The integer semaphore was originally defined by Dijkstra as an integer quantity S representing the number of resources. Apart from initialization, S can be accessed only through two standard atomic operations (atomic operation), wait(S) and signal(S), also called the P and V operations respectively. They can be described as:
wait(S):   while S ≤ 0 do no-op;
           S := S - 1;
signal(S): S := S + 1;
Record-Type Semaphores
In the record-type semaphore mechanism, in addition to an integer variable value representing the number of resources, a process linked list L is added to link the waiting processes. The record-type semaphore is so named because it uses a record data structure. Its two fields can be described as:
type semaphore = record
       value: integer;
       L: list of process;
     end
Accordingly, the wait (s) and signal (s) operations can be described as:
procedure wait(S)
  var S: semaphore;
  begin
    S.value := S.value - 1;
    if S.value < 0 then block(S.L);
  end

procedure signal(S)
  var S: semaphore;
  begin
    S.value := S.value + 1;
    if S.value ≤ 0 then wakeup(S.L);
  end
In the record-type semaphore mechanism, the initial value of S.value represents the number of resources of some class in the system, so it is called a resource semaphore. Each wait operation on it means that a process requests one unit of that resource, hence S.value := S.value - 1. When S.value < 0, all units of the resource have been allocated, so the process calls the block primitive to block itself, gives up the processor, and inserts itself into the semaphore's list S.L; the mechanism thus follows the "yield while waiting" rule. The absolute value of S.value then equals the number of processes blocked on the semaphore's list. Each signal operation means that the executing process releases one unit of the resource, hence S.value := S.value + 1. If after the increment S.value ≤ 0 still holds, processes are still blocked on the semaphore's list waiting for the resource, so the wakeup primitive is called to wake the first waiting process in S.L. If the initial value of S.value is 1, only one process at a time may access the critical resource, and the semaphore becomes a mutex semaphore.
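A single-threaded simulation may make the record-type bookkeeping concrete; block and wakeup are modeled here by return values rather than real process switching, and the class name is mine:

```python
from collections import deque

class RecordSemaphore:
    """Simulation of the record-type semaphore: value counts resources,
    L holds the processes blocked on the semaphore (FIFO)."""
    def __init__(self, value):
        self.value = value
        self.L = deque()           # the process linked list L

    def wait(self, process):
        self.value -= 1
        if self.value < 0:         # no resource left: block the caller
            self.L.append(process)
            return "blocked"
        return "running"

    def signal(self):
        self.value += 1
        if self.value <= 0:        # someone still waits: wake the first
            return self.L.popleft()
        return None

S = RecordSemaphore(1)                 # initial value 1: a mutex semaphore
assert S.wait("P1") == "running"       # P1 takes the single resource
assert S.wait("P2") == "blocked"       # P2 blocks; S.value is now -1
woken = S.signal()                     # P1 releases; the first waiter wakes
```

Note that |S.value| while negative (here 1) equals the number of blocked processes, as stated above.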
AND-Type Semaphores
The basic idea of the AND synchronization mechanism is to allocate all the resources a process needs for its entire run in a single operation, all at once, and to release them together when the process finishes. If even one resource cannot be allocated, none of the other resources is allocated either; that is, the allocation of several critical resources is treated as an atomic operation: either all are allocated to the process, or none is. Deadlock theory shows that this avoids the deadlock condition described above. To this end, an "AND" condition is added to the wait operation, hence the name AND synchronization; the simultaneous wait operation Swait (simultaneous wait) is defined as follows:
Swait(S1, S2, …, Sn)
  if S1 ≥ 1 and … and Sn ≥ 1 then
    for i := 1 to n do
      Si := Si - 1;
    endfor
  else
    place the process in the waiting queue associated with the first Si found
    with Si < 1, and set the program counter of this process to the beginning
    of the Swait operation
  endif

Ssignal(S1, S2, …, Sn)
  for i := 1 to n do
    Si := Si + 1;
    remove all processes waiting in the queue associated with Si into the
    ready queue;
  endfor
Semaphore Sets
The general "semaphore set" has several useful special cases:
1. Swait(S, d, d): the set contains only one semaphore S, but a process may request d units of the resource at a time; no allocation is made when fewer than d units remain.
2. Swait(S, 1, 1): the set degenerates to an ordinary record-type semaphore (when S > 1) or a mutex semaphore (when S = 1).
3. Swait(S, 1, 0): a special and useful operation. When S ≥ 1, any number of processes may enter the particular region; once S becomes 0, every process is kept out. In other words, it acts as a controllable switch.
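The special cases can be illustrated with a tiny sketch of the test-and-allocate step of Swait(S, t, d): proceed only if at least t units are available, in which case d units are taken (a single-process simulation; the function and variable names are mine):

```python
def swait(sem, t, d):
    """One test-and-allocate step of Swait(S, t, d). Returns True if the
    caller may proceed (d units are taken), False if it would be placed
    on the semaphore's waiting queue."""
    if sem["value"] >= t:
        sem["value"] -= d
        return True
    return False

# Case 1 - Swait(S, d, d): request d = 2 units at a time.
S = {"value": 3}
assert swait(S, 2, 2)          # 3 >= 2: both units granted, 1 left
assert not swait(S, 2, 2)      # fewer than 2 units left: no allocation

# Case 3 - Swait(S, 1, 0): a controllable switch that consumes nothing.
gate = {"value": 1}
assert swait(gate, 1, 0)       # switch open: enter, value unchanged
gate["value"] = 0
assert not swait(gate, 1, 0)   # switch closed: nobody may enter
```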
Applications of Semaphores
- Using a semaphore to achieve mutual exclusion between processes:

var mutex: semaphore := 1;
begin
  parbegin
    process1: begin
        repeat
          wait(mutex);
          critical section
          signal(mutex);
          remainder section
        until false;
      end
    process2: begin
        repeat
          wait(mutex);
          critical section
          signal(mutex);
          remainder section
        until false;
      end
  parend
end
- Using semaphores to enforce a precedence relation: suppose processes P1 and P2 execute concurrently, P1 contains statement S1, P2 contains statement S2, and we want S2 to execute only after S1 has executed. To achieve this precedence, let P1 and P2 share a semaphore S initialized to 0, place a signal(S) operation after statement S1, and place a wait(S) operation before statement S2; that is, in P1 write "S1; signal(S);" and in P2 write "wait(S); S2;". Because S is initialized to 0, if P2 runs first it must block on wait(S); only after P1 has executed "S1; signal(S)", raising S to 1, can P2 proceed and execute statement S2.
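The same precedence can be demonstrated with Python's threading module (the thread bodies and the trace list are illustrative); the semaphore is initialized to 0 exactly as in the text:

```python
import threading

S = threading.Semaphore(0)   # initial value 0, as in the text
trace = []

def p1():
    trace.append("S1")       # statement S1
    S.release()              # signal(S): permit S2 to proceed

def p2():
    S.acquire()              # wait(S): blocks until P1 has signalled
    trace.append("S2")       # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                   # start P2 first: it must still wait
t1.start()
t1.join()
t2.join()
print(trace)                 # always ['S1', 'S2']
```

Even though P2 is started first, S1 always precedes S2 in the trace, because P2 cannot pass wait(S) before P1 executes signal(S).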
Monitor Mechanism
Monitors (monitors): a new process-synchronization tool. 1. Definition: a monitor consists of four parts: the name of the monitor; the shared data structures local to the monitor; the set of procedures that operate on those data; and the statements that set initial values for the monitor's local shared data.
A monitor is like a fence enclosing the shared variables and the procedures that operate on them; every process must go through the monitor (the gate in the fence) to access the critical resource, and because only one process may be inside the monitor at a time, mutual exclusion is achieved. Features of a monitor:
- Modular
- Abstract data types
- Information hiding
2. Condition variables. Consider the situation in which a process that has entered the monitor becomes blocked or suspended inside it; until it is unblocked, if it does not release the monitor, no other process can enter, and all are forced to wait for a long time. To solve this problem, condition variables (condition) are introduced. Condition variables can only be accessed inside the monitor, and each one must be declared in the form: var x, y: condition. The only operations on a condition variable are wait and signal, so a condition variable is also an abstract data type; each condition variable keeps a list of all processes blocked on it and provides two operations, written x.wait and x.signal, with the following meanings:
- x.wait: the process calling the monitor needs to block or suspend because of condition x; calling x.wait inserts the caller into the waiting queue of condition x and releases the monitor until x changes. Other processes can use the monitor in the meantime.
- x.signal: the process calling the monitor finds that condition x has changed; calling x.signal restarts one process blocked or suspended on condition x. If there are several such processes, one is chosen; if there are none, the operation has no effect and the caller simply continues. This differs from the signal operation of the semaphore mechanism, which always performs S := S + 1 and therefore always changes the semaphore's state. If some process Q is blocked on condition x, then when the process P inside the monitor executes x.signal, Q is restarted. Which of the two processes P and Q then runs, and which waits, can be decided in one of two ways:
- P waits until Q leaves the monitor or waits on another condition.
- Q waits until P leaves the monitor or waits on another condition.
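A minimal sketch of these ideas using Python's threading.Condition, which bundles the monitor's entry lock with a condition variable x (the class, method, and variable names are mine; notify corresponds to x.signal and has no effect when nobody waits, as described above):

```python
import threading

class ResourceMonitor:
    """Monitor sketch: entry to the procedures is mutually exclusive via
    the condition's internal lock; x plays the role of a condition
    variable with x.wait / x.signal."""
    def __init__(self):
        self._x = threading.Condition()   # monitor lock + condition variable x
        self.busy = False                 # shared data local to the monitor

    def acquire(self):
        with self._x:                     # enter the monitor
            while self.busy:
                self._x.wait()            # x.wait: block, releasing the monitor
            self.busy = True

    def release(self):
        with self._x:
            self.busy = False
            self._x.notify()              # x.signal: restart one blocked process

m = ResourceMonitor()
order = []

def user(name):
    m.acquire()
    order.append(name)                    # use the resource exclusively
    m.release()

threads = [threading.Thread(target=user, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Python's Condition follows the "signaller continues" discipline, so the woken process re-checks the condition in a while loop rather than assuming it holds.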
Process Communication
Semaphore mechanism as a synchronous tool is effective, but as a communication tool, is not ideal, mainly in two aspects:
- Low efficiency
- Communication is opaque to users
This section describes advanced process communication, which refers to a way for users to efficiently transfer large amounts of data using a set of communication commands provided by the operating system. The operating system hides the implementation details of process communication. That is, the communication process is transparent to the user, which greatly reduces the complexity of the communication programming.
Types of process Communication
Current advanced communication mechanisms fall into three main categories: shared-memory systems, message-passing systems, and pipe communication systems.
1. Shared-memory systems (shared-memory system): the communicating processes share certain data structures or a shared storage area and communicate through these spaces.
- Communication based on shared data structures: the processes share certain data structures through which they exchange information. This mode is inefficient and suitable only for transferring relatively small amounts of data.
- Communication based on a shared storage area: to transfer large amounts of data, a shared storage area is set aside in memory, and processes communicate by reading and writing the data in it. Before communicating, a process asks the system for a partition in the shared storage area, specifying its keyword; if the system has already allocated such a partition, its descriptor is returned to the requester, which then attaches the acquired shared partition to its own address space and thereafter reads and writes it like ordinary memory.
2. Message-passing systems (message passing system): the most widely used inter-process communication mechanism. Data is exchanged between processes in units of formatted messages (message); in computer networks a message is also called a packet. Programmers directly use the set of communication commands (primitives) provided by the operating system, which not only transfers large amounts of data but also hides the implementation details of communication, making the process transparent to the user and greatly reducing the complexity of communication programming.
3. Pipe communication: a "pipe" is a shared file, also called a pipe file, used to connect a reading process and a writing process so that they can communicate. The writing (sending) process provides input to the pipe (the shared file) as a character stream, and the reading (receiving) process receives data from the pipe.
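A minimal illustration of a POSIX pipe via Python's os module; for brevity both ends live in one process here, whereas a real pipe connects a writing process and a reading process (for example, after os.fork()):

```python
import os

# Create a pipe: r is the reading end, w is the writing end.
r, w = os.pipe()

# Writer role: send a character stream into the pipe.
os.write(w, b"hello through the pipe")
os.close(w)                      # closing marks end-of-stream for the reader

# Reader role: receive the data from the pipe.
data = os.read(r, 1024)
os.close(r)
print(data.decode())
```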
Threads: Basic Concepts
Operations on processes:
- Creating a process
- Destroying a process
- Switching between processes
Properties of the Thread
- Lightweight entities
- The basic unit of independent scheduling and dispatching
- can be executed in parallel
- Sharing process Resources
State Parameters of a Thread
Each thread in the OS can be described with a thread identifier and a set of state parameters. State parameters usually have these items:
- Register state, including the contents of the program counter PC and the stack pointer;
- Stack, where local variables and return addresses are usually saved in the stack
- Thread run state, which describes what state the thread is running in
- Priority, which describes the priority level of the thread for execution;
- Thread-specific memory for storing the thread's own copy of local variables
- Signal mask, used to mask (block) certain signals.
Thread Run state
When a thread is running, there are three basic states:
- Execution state, indicating that the thread currently holds the processor and is running;
- Ready state, indicating that the thread has all the conditions for execution and can run as soon as it obtains a CPU;
- Blocked state, indicating that the thread is paused because some event blocked it during execution.
Creation and termination of threads
In a multithreaded OS environment, an application usually has only one thread when it starts; this thread is called the "initialization thread". It can then create further threads as needed. To create a new thread, a thread-creation function (or system call) is used, supplying parameters such as the entry pointer of the thread's main program, the stack size, and the priority used for scheduling. When the creation function returns, it yields a thread identifier for later use. A thread terminates in one of two ways: either it exits voluntarily after finishing its work, or it is terminated because of an error during its run or forcibly by another thread for some reason.
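These steps can be sketched with Python's threading module (the worker function, its arguments, and the results dictionary are hypothetical): the main thread plays the role of the initialization thread, creation supplies the entry function and its parameters, and join waits for the thread's voluntary exit.

```python
import threading

results = {}

def worker(tid, n):
    """Thread main program: do the work, then exit voluntarily."""
    results[tid] = sum(range(n))

# The initialization thread creates a new thread, passing the entry
# function and its arguments; start() begins execution.
t = threading.Thread(target=worker, args=("t1", 10), name="t1")
t.start()
t.join()        # wait until the thread finishes its work and exits
print(results["t1"])
```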
Processes in multi-threaded OS
In a multithreaded OS, a process is the basic unit of system resource allocation; it typically contains multiple threads and provides resources for them, but the process itself is no longer the executing entity. Processes in a multithreaded OS have the following properties: 1. the process is the unit of system resource allocation; 2. a process may contain multiple threads; 3. the process is not an executable entity.
Synchronization and communication between threads
- Mutex (mutex): a relatively simple mechanism for mutually exclusive access to resources. Because the time and space overhead of operating on a mutex is low, it is well suited to critical shared data and program segments that are accessed frequently. A mutex has two states, unlocked (unlock) and locked (lock), and correspondingly two commands (functions) operate on it: the lock operation closes (locks) the mutex, and the unlock operation opens it.
- Condition variables: each condition variable is usually used together with a mutex, that is, a condition variable is associated with a mutex when it is created. A plain mutex serves short-term locking, mainly to guarantee mutual exclusion in critical sections, while the condition variable serves a thread's long-term wait until the awaited resource becomes available. The thread first locks the mutex; on success it enters the critical section and examines the data structure describing the resource's state. If the resource is busy, the thread waits on the condition variable, releasing the mutex, until the resource is released; if the resource is free, the thread marks it busy, uses it, and unlocks the mutex. The operations for applying for the resource (left) and releasing it (right) can be described as:

apply:                           release:
  lock mutex                       lock mutex
  check data structures;           mark resource as free;
  while (resource busy)            unlock mutex;
    wait(condition variable);      wakeup(condition variable);
  mark resource as busy;
  unlock mutex;
- Semaphore mechanism
- Private semaphores (private semaphore): when a thread needs semaphores to synchronize with other threads of the same process, it can call a semaphore-creation command to create a private semaphore, whose data structure is stored in the application's address space. A private semaphore belongs to a particular process, and the OS is unaware of its existence; therefore, if the owner of a private semaphore terminates abnormally, or terminates normally without releasing the space the semaphore occupies, the system cannot restore it to 0 (empty) or hand it to the next thread that requests it.
- Public semaphores (public semaphore): public semaphores are set up for synchronization among threads of different processes, or among processes. Because a public semaphore has an open name that all processes can use, it is called public. Its data structure is stored in a protected system storage area, allocated and managed by the OS, so it is also called a system semaphore. If the semaphore's owner terminates without releasing it, the OS automatically reclaims the semaphore's space and notifies the next process. The public semaphore is therefore a safer synchronization mechanism.

Kernel-Supported Threads and User-Level Threads
Kernel-supported threads run under the support of the kernel: whether they belong to user processes or system processes, their creation, destruction, and switching all rely on the kernel. In kernel space, a thread control block is set up for each kernel-supported thread; through this control block the kernel perceives the thread's existence and controls it.
User-level threads exist only in user space. Creating, destroying, synchronizing, and communicating among such threads require no system calls. Switching between user-level threads usually happens among the threads of one application process and likewise needs no kernel support; because the switching rules are much simpler than those for process scheduling and switching, thread switching is very fast. Such threads are thus independent of the kernel.
Thread Control: Implementation of Kernel-Supported and User-Level Threads
- Runtime system: the "runtime system" is essentially a collection of functions (procedures) for managing and controlling threads, including functions for creating and destroying threads, for synchronization and communication between threads, and for thread scheduling. It is these functions that make user-level threads independent of the kernel. All functions of the runtime system reside in user space and serve as the interface between user-level threads and the kernel.
- Kernel control threads: also known as lightweight processes, LWP (light weight process). A process may own multiple LWPs, and like user-level threads each LWP has its own data structure (such as a TCB) containing the thread identifier, priority, and state, plus a stack and local storage; LWPs can also share the resources owned by the process. An LWP can obtain kernel services through system calls, so when a user-level thread runs while attached to an LWP, it acquires all the properties of a kernel-supported thread.
"Operating system" process management