Linux System Process Understanding

Source: Internet
Author: User

1. To track and describe a process from creation to termination, the operating system must define a set of process states and the rules for transitioning between them.
Different operating systems interpret process states differently, but the most basic states are the same. They include the following three:
Running state: the process occupies the CPU and is executing on it;
Ready state: the process has everything it needs to run, but has not yet been allocated the CPU;
Blocked state: the process is temporarily unable to run because it is waiting for some event to occur.
At any moment in its lifetime, a process is in exactly one of these three states.
(Figure: transition diagram for the three basic states)

In theory, six transitions are conceivable among these three states:
Running -> Ready: caused by scheduling, typically because the process has held the CPU for too long (its time slice expired);
Ready -> Running: when the running process's time slice runs out, the scheduler selects a suitable process from the ready queue and allocates the CPU to it;
Running -> Blocked: the process issues an I/O request or waits for some event to occur;
Blocked -> Ready: the event the process was waiting for occurs, and it re-enters the ready queue.
These four transitions happen in normal operation. What about the remaining two?
Blocked -> Running: impossible. Even if a blocked process were given the CPU it could not execute; in any case the scheduler selects only from the ready queue, never the blocked queue.
Ready -> Blocked: impossible. A ready process is not executing at all, so it has nothing to wait on and no way to enter the blocked state.


2. Those are the three basic states; in a concrete operating system, however, the designer may define additional states as the implementation requires. Linux defines the following:
Runnable state: a combination of the running and ready states, indicating that the process is either running or ready to run. Linux represents this state with the TASK_RUNNING macro.
Shallow (interruptible) sleep: the process is sleeping (blocked), waiting for a resource; it is woken and moved to the run queue when the resource arrives, or when another process's signal or a clock interrupt wakes it. Linux represents this state with the TASK_INTERRUPTIBLE macro.
Deep (uninterruptible) sleep: essentially like shallow sleep, except that no signal from another process and no clock interrupt can wake it. Linux represents this state with the TASK_UNINTERRUPTIBLE macro.
Stopped state: the process has paused execution to undergo some kind of processing; for example, a process being debugged is in this state. Linux represents it with the TASK_STOPPED macro.
Zombie state: the process has terminated but its PCB (task_struct) has not yet been released. Linux represents this state with the TASK_ZOMBIE macro (EXIT_ZOMBIE in newer kernels).
We can look at the definitions of these macros in the kernel:

#define TASK_RUNNING            0
#define TASK_INTERRUPTIBLE      1
#define TASK_UNINTERRUPTIBLE    2
#define __TASK_STOPPED          4
#define __TASK_TRACED           8
/* in tsk->exit_state */
#define EXIT_ZOMBIE             16
#define EXIT_DEAD               32
/* in tsk->state again */
#define TASK_DEAD               64
#define TASK_WAKEKILL           128
#define TASK_WAKING             256
#define TASK_STATE_MAX          512

#define TASK_STATE_TO_CHAR_STR "RSDTtZXxKW"


(Figure: Linux process state transitions and the kernel calls that cause them)

3. When process scheduling is triggered
Scheduling is mainly triggered in the following situations:
1. The state of the current process (the one running on the CPU) becomes non-runnable.
The process makes a system call that actively puts it into a non-runnable state, e.g. going to sleep via nanosleep, or terminating via exit;
A resource requested by the process is unavailable, forcing it into the sleep state. For example, during a read system call the required data is not in the disk cache, so the process sleeps waiting for disk I/O;
The process responds to a signal and becomes non-runnable, e.g. entering the stopped state in response to SIGSTOP, or exiting in response to SIGKILL.

2. Preemption. A running process can be deprived of the CPU against its will. This happens in two cases: the process has used up its time slice, or a higher-priority process has appeared.
A higher-priority process may be woken by an action of the process running on the CPU, e.g. by being sent a signal, or by the release of a mutex (such as unlocking a lock);
While handling the clock interrupt, the kernel finds that the current process's time slice is exhausted;
While handling some other interrupt, the kernel finds that an external resource a higher-priority process was waiting for has become available, and wakes it. For example, the CPU receives a network-card interrupt; handling it, the kernel finds that a socket has become readable and wakes the process waiting to read from it. Likewise, while handling the clock interrupt the kernel may fire a timer and thereby wake a process sleeping in a nanosleep system call.

4. Process scheduling algorithm

1. Time-Slice rotation scheduling algorithm

A time slice is the amount of CPU time allotted to a process for one turn of execution.

In a time-sharing system, to keep human-computer interaction responsive, the system lets every process run in turn; in this situation time-slice round-robin scheduling is appropriate. In the usual round-robin scheme, the system queues all runnable (i.e. ready) processes on a first-come, first-served basis; at each scheduling decision it allocates the CPU to the process at the head of the queue and lets it execute for one time slice. Time slices range from a few milliseconds to a few hundred milliseconds. When the time slice runs out, the clock interrupt signals the scheduler, which stops the process and sends it to the tail of the run queue to await its next turn; the processor is then handed to the new head of the ready queue, which likewise executes for one time slice. This guarantees that every process in the ready queue gets one time slice of processor time within a bounded period (a waiting time acceptable to a human user).

2. Priority scheduling algorithm

To give urgent processes preferential treatment once they enter the system, the highest-priority scheduling algorithm is introduced. Under this algorithm, the system assigns the processor to the highest-priority process in the run queue. The algorithm can be further divided into two variants.

(1) Non-preemptive priority algorithm (also known as non-preemptive scheduling)

In this variant, once the system has allocated the processor (CPU) to the highest-priority process in the run queue, that process keeps running until it completes, or until some event makes it give up the processor, at which point the system can assign the CPU to another high-priority process. This scheduling algorithm is used mainly in batch systems, and also in some real-time systems whose timing requirements are not stringent.

(2) Preemptive priority algorithm (also known as preemptive scheduling)

The essence of this algorithm is that the process currently running is always the one with the highest priority among all runnable processes. The system still assigns the processor to the highest-priority process (as computed by a weight, e.g. Linux's goodness() function). But as soon as another process with a higher priority appears, the scheduler suspends the previously highest-priority process and allocates the processor to the newcomer, depriving the current process of the CPU. Therefore, under this algorithm, whenever a new runnable process appears, its priority is compared with that of the current process; if it is higher, process scheduling is triggered. This lets the priority algorithm serve urgent processes well, so it is often used in demanding real-time systems, as well as in batch and time-sharing systems with higher performance requirements. Linux uses this kind of scheduling.

3. Multilevel feedback queue scheduling

This is one of the most widely used scheduling algorithms today. Its essence is to combine the advantages of time-slice round-robin and preemptive priority scheduling: higher-priority processes run first, each for a given time slice, and processes of equal priority take turns running for a given time slice.

4. Real-time scheduling

Finally, let's look at scheduling in real-time systems. A real-time system is one that must respond to external events as quickly as possible. It contains real-time processes or tasks that react to, or control, external events, often with some degree of urgency, so process scheduling in such systems has special requirements. Preemptive scheduling is widely used in real-time systems, especially the demanding ones, because it combines great flexibility with very small scheduling latency; the price is a more complex scheduler.
