Linux Process Management Knowledge collation
1. What are the states of a process? What is the interruptible wait state? Why is a process's task_struct deleted by the scheduler only after the process exits? What are the exit states of a process?
TASK_RUNNING (runnable or running)
TASK_INTERRUPTIBLE (interruptible wait state)
TASK_UNINTERRUPTIBLE (uninterruptible wait state)
TASK_STOPPED (the process has been paused by another process)
TASK_TRACED (the process has been paused by a debugger)
TASK_DEAD (exit state)
A process enters a wait queue because a resource it needs is not yet available, but in the interruptible wait state the wait can be cut short by a signal. For example, when a running process enters an interruptible wait because of a disk I/O operation, the user can send SIGKILL to it before the I/O completes; the process then leaves the wait state early, becomes runnable, responds to SIGKILL, executes the process-exit code, and terminates.
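A minimal kernel-style sketch of such an interruptible wait, assuming a driver-defined wait queue my_waitq and condition flag data_ready (both hypothetical names): wait_event_interruptible() puts the caller into TASK_INTERRUPTIBLE and returns a nonzero value if a signal such as SIGKILL arrives before the condition becomes true.

    #include <linux/wait.h>
    #include <linux/sched.h>

    static DECLARE_WAIT_QUEUE_HEAD(my_waitq);   /* hypothetical wait queue */
    static int data_ready;                      /* hypothetical condition  */

    static int my_read_wait(void)
    {
        /* Sleep in TASK_INTERRUPTIBLE until data_ready becomes true. */
        if (wait_event_interruptible(my_waitq, data_ready))
            return -ERESTARTSYS;   /* woken by a signal, not by the condition */
        return 0;                  /* condition satisfied, continue normally  */
    }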
When a process exits (for example by calling exit() or returning from main), it must send a signal to its parent, and the parent needs to read information about the child when it handles that signal, so the child's task_struct cannot be deleted at that point. In addition, every process has a kernel-mode stack, and exit() keeps using that kernel stack until the process is switched away from; therefore, after exit() has done the necessary processing, the process state is set to TASK_DEAD and a switch to another process is made. Once the switch has happened, the process will never be scheduled again because it is marked TASK_DEAD; when the scheduler later finds a process in the TASK_DEAD state, it deletes that process's task_struct, and only then is the process completely gone.
EXIT_ZOMBIE (zombie state): the child process has exited and the SIGCHLD signal has been sent (by default, process creation sets a flag that makes the kernel signal the parent when the process exits, unless a lightweight process/thread was created), but the parent process, which waits for the SIGCHLD sent at the end of the child, has not yet been scheduled to run. EXIT_DEAD (zombie-teardown state): the parent has declared that it is "not interested" in the child's exit signal, or the parent is already waiting for the child's SIGCHLD via waitpid() when the child exits.
2. Zombie Process
1) How to create a zombie process
When a process ends itself by calling exit(), it is not destroyed immediately: it can no longer be scheduled and is left in the EXIT_ZOMBIE state, and the only memory it still occupies is its kernel stack, its thread_info structure, and its task_struct structure. At this point the sole purpose of the process is to provide information to its parent. If the parent neither calls wait() or waitpid() to wait for the child to end nor explicitly ignores the signal, the child remains in the EXIT_ZOMBIE state.
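A minimal user-space sketch that produces a zombie for observation: the child exits immediately while the parent deliberately does not reap it for a while, so the child lingers in the Z state (the sleep duration is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: exit immediately; only kernel stack, thread_info and
             * task_struct remain until the parent reaps it. */
            exit(0);
        }
        /* Parent: do not call wait(); the child stays a zombie meanwhile. */
        printf("child %d should now show as <defunct> in ps\n", (int)pid);
        sleep(60);
        return 0;   /* after the parent exits, init reaps the zombie */
    }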
2) How to view zombie process
Run the ps command; a process whose state is marked Z is a zombie process.
3) How to clean up the zombie process
- The parent process can call waitpid() or wait() to reap the terminated child (see the sketch after this list).
- Kill the parent process: when the parent dies, the zombie becomes an "orphan process" and is adopted by the init process; init always reaps its children, so all the zombie processes it has adopted disappear.
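A hedged sketch of the usual reaping pattern in a parent process: install a SIGCHLD handler that loops over waitpid() with WNOHANG so every terminated child is collected and no zombies accumulate.

    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void reap_children(int sig)
    {
        (void)sig;
        /* Collect every child that has already exited; non-blocking. */
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = reap_children;
        sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGCHLD, &sa, NULL);

        if (fork() == 0)
            _exit(0);          /* child exits and is reaped by the handler */

        pause();               /* parent waits; SIGCHLD triggers the reaper */
        return 0;
    }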
3. PID Management
In Linux, a struct pid is used to identify a process, and all process numbers (the numeric PIDs, not to be confused with the struct pid itself) are managed through the pidmap bitmap so that a target process can be found quickly. Representing a process with a struct pid has two advantages: it is easier to manage than a bare numeric pid_t (PIDs can be recycled efficiently when processes exit), and it takes less space than using the whole task_struct to identify a process.
The struct pid is defined as follows:
    struct pid
    {
        atomic_t count;
        int nr;                          /* the numeric PID value */
        struct hlist_node pid_chain;     /* links this pid into the hash table */
        struct hlist_head tasks[PIDTYPE_MAX];
        struct rcu_head rcu;
    };
On 32-bit systems the default maximum PID is 32768. Since each bit of the pidmap bitmap indicates whether one PID is free, 32768 bits are needed in total, which is exactly the size of one physical page (4 * 1024 * 8 bits = 4096 bytes).
The pidmap structure is as follows:
    struct pidmap {
        /*
         * nr_free counts how many bits in the page pointed to by this
         * structure are still 0, i.e. the number of free PIDs.
         */
        atomic_t nr_free;
        void *page;     /* pointer to the memory page that holds the bitmap */
    };
Let us first look at pidmap_init(), the initialization function of the pidmap bitmap, which is called from start_kernel() at the beginning of Linux kernel startup:
    void __init pidmap_init(void)
    {
        /* Allocate one page of physical memory and zero it */
        init_pid_ns.pidmap[0].page = kzalloc(PAGE_SIZE, GFP_KERNEL);
        /* Set bit 0 to 1: PID 0 is already in use, by process No. 0 (swapper) */
        set_bit(0, init_pid_ns.pidmap[0].page);
        /* Update nr_free, the count of free PIDs, accordingly */
        atomic_dec(&init_pid_ns.pidmap[0].nr_free);
        pid_cachep = KMEM_CACHE(pid, SLAB_PANIC);
    }
Next, look at pidhash_init(), the initialization function of the PID hash table, also called from start_kernel() during kernel startup:
    void __init pidhash_init(void)
    {
        int i, pidhash_size;
        /*
         * nr_kernel_pages is the total number of kernel memory pages, i.e. the
         * number of actual physical pages in the system's DMA and normal zones;
         * megabytes is how many MB of kernel memory that corresponds to.
         */
        unsigned long megabytes = nr_kernel_pages >> (20 - PAGE_SHIFT);
        /* From the following two lines, pidhash_shift ends up between 4 and 12 */
        pidhash_shift = max(4, fls(megabytes * 4));
        pidhash_shift = min(12, pidhash_shift);
        pidhash_size = 1 << pidhash_shift;
        printk("PID hash table entries: %d (order: %d, %zd bytes)\n",
               pidhash_size, pidhash_shift,
               pidhash_size * sizeof(struct hlist_head));
        /*
         * pid_hash is allocated from low physical memory by alloc_bootmem();
         * because pidhash_init() is called before mem_init() executes, the
         * memory allocated here is never reclaimed.
         */
        pid_hash = alloc_bootmem(pidhash_size * sizeof(*(pid_hash)));
        if (!pid_hash)
            panic("Could not alloc pidhash!\n");
        /* Initialize the list head of every hash bucket */
        for (i = 0; i < pidhash_size; i++)
            INIT_HLIST_HEAD(&pid_hash[i]);
    }
Summary: the kernel maintains two data structures for process numbers, the hash table pid_hash and the bitmap pidmap. Each time do_fork() calls alloc_pid(), alloc_pidmap() is called first to update the bitmap. The main idea of that function is: last records the most recently allocated PID, and the next candidate is last + 1; if that exceeds the maximum PID, the search wraps around to RESERVED_PIDS and scans the pidmap for a bit that is still 0 until one is found. A new entry is then added to the pid_hash table with hlist_add_head_rcu().
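A minimal user-space sketch of the bitmap idea behind alloc_pidmap(), assuming a single 4 KB page, a maximum PID of 32768 and a reserved range of 300 as in the text above (the names and simplifications are mine, not the kernel's):

    #include <stdint.h>

    #define PID_MAX       32768
    #define RESERVED_PIDS 300

    static uint8_t pidmap[PID_MAX / 8];   /* one bit per PID: one 4 KB page */
    static int last = RESERVED_PIDS - 1;  /* most recently allocated PID    */

    static int test_and_set(int pid)
    {
        uint8_t mask = 1u << (pid % 8);
        if (pidmap[pid / 8] & mask)
            return 1;                      /* already in use */
        pidmap[pid / 8] |= mask;
        return 0;
    }

    /* Allocate the next free PID, wrapping around to RESERVED_PIDS. */
    int alloc_pid_sketch(void)
    {
        int pid = last + 1;
        int scanned;

        for (scanned = 0; scanned < PID_MAX; scanned++, pid++) {
            if (pid >= PID_MAX)
                pid = RESERVED_PIDS;       /* wrap around, skip reserved PIDs */
            if (!test_and_set(pid)) {
                last = pid;
                return pid;
            }
        }
        return -1;                         /* no free PID left */
    }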
4. Stack of processes
A process has two stacks: a user-mode stack and a kernel-mode stack. The user-mode stack lives in the user address space, while the kernel-mode stack lives in the kernel address space.
When a process moves from user mode (executing the user's own code) to kernel mode (executing kernel code) because of an interrupt or a system call, the stack it uses is switched from the user stack to the kernel stack.
Switching from the user stack to the kernel stack: after entering kernel mode, the user-stack address is saved on the kernel stack, and the stack pointer register is set to the kernel-stack address.
Switching from the kernel stack back to the user stack: the user-stack address saved on the kernel stack is restored into the stack pointer register.
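For reference, in the 2.6-era kernels discussed here the kernel-mode stack shares one small memory block with the thread_info structure, roughly as in the following definition (a sketch from memory, not copied from a specific kernel version):

    /* roughly include/linux/sched.h in 2.6-era kernels */
    union thread_union {
        struct thread_info thread_info;                  /* at the low addresses  */
        unsigned long stack[THREAD_SIZE / sizeof(long)]; /* stack grows downwards */
    };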
5. Differences between processes and threads under Linux
1) A process is the basic unit of resource allocation; a thread is the basic unit of CPU scheduling.
2) A process has its own address space; a thread has its own stack and local variables but no separate address space (the process's address space is shared by all threads within that process).
6. Copy-on-write mechanism (COW)
To conserve physical memory, when fork() is called to create a new process, the new process initially shares the same physical memory pages as the original process (calling clone() to create a thread goes further and shares the virtual address space as well); only when one of the processes performs a write does the system allocate a separate physical page for it. This is the copy-on-write mechanism.
In more detail: when process A creates child process B with the fork() system call, B is essentially a copy of A and therefore maps the same physical pages. To save memory and speed up process creation, fork() lets B share A's physical pages in read-only mode and at the same time marks A's access to those pages read-only as well. When either A or B later writes to one of the shared pages, a page-fault exception is raised, and the CPU executes the kernel's exception handler do_wp_page() to resolve it: do_wp_page() cancels the sharing of the physical page that caused the write fault and copies a new physical page for the writing process. When the exception handler returns, the CPU re-executes the write instruction that faulted, and the process continues.
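A small user-space demonstration of the effect of copy-on-write: after fork(), parent and child initially map the same page, but as soon as the child writes to the variable it gets its own copy, so the parent's value is unchanged (a sketch of the observable behavior; the shared physical pages themselves are not visible from user space):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int shared_value = 42;      /* sits in a page shared read-only after fork() */

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            shared_value = 100;  /* write triggers the COW fault; child gets its own page */
            printf("child : shared_value = %d\n", shared_value);   /* 100 */
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("parent: shared_value = %d\n", shared_value);       /* still 42 */
        return 0;
    }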
7. Creation of process No. 0 and process No. 1
When the kernel starts it "manually" creates process No. 0, the swapper process, which is a kernel-mode process; its page table swapper_pg_dir and its kernel-mode stack are set up during kernel boot. The process is defined as follows:

    struct task_struct init_task = INIT_TASK(init_task);

The various resource objects of init_task are initialized by the corresponding INIT_xxx macros. At the end of start_kernel(), rest_init() calls kernel_thread() to create the kernel_init kernel thread using the swapper process as a "template"; that thread later creates the init process by executing the /sbin/init file, handing the rest of the boot process over to user mode. The swapper process then executes cpu_idle() to yield the CPU; afterwards, whenever there is no ready process to schedule, the swapper process is dispatched and runs cpu_idle() again, which calls tick_nohz_stop_sched_tick() to enter the tickless state.
8. Process switching
1) Active switching
- The current process initiates an I/O operation that may block; it sets itself to a wait state, joins the wait queue of the resource concerned, and calls schedule() to yield the CPU.
- The process actively exits through the exit system call.
2) Passive switching
- Time Slice expires
- I/O interrupts wake up a higher priority process in an I/O waiting queue
Because these two situations usually occur inside the clock-interrupt handler or some other I/O interrupt handler, and a process cannot block in interrupt context, the interrupt handler normally only requests a reschedule by setting the need_resched flag; the actual scheduling is deferred to the interrupt-return path.
9. How processes communicate with each other under Linux
Pipe: a pipe is a half-duplex communication mechanism in which data flows in only one direction; it can only be used between related processes (usually a parent-child relationship). A pipe sketch follows this list.
Named pipe (FIFO): a named pipe is also half-duplex, but it allows communication between unrelated processes.
Semaphore: a semaphore is a counter that can be used to control access to a shared resource by multiple processes. It is often used as a locking mechanism to prevent one process from accessing a shared resource while another process is using it. It is therefore primarily a means of synchronization between processes and between threads within the same process.
Message queue: a message queue is a list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the disadvantages of signals carrying little information, of pipes carrying only unformatted byte streams, and of limited buffer sizes.
Signal: a signal is a relatively sophisticated form of communication used to notify the receiving process that some event has occurred.
Shared memory: shared memory is a region of memory created by one process and mapped so that other processes can access it. Shared memory is the fastest IPC mechanism and was designed to compensate for the lower efficiency of the other inter-process communication mechanisms. It is often used together with other mechanisms, such as semaphores, to achieve synchronization as well as communication between processes.
Sockets: sockets are also an inter-process communication mechanism; unlike the others, they can be used for communication between processes on different hosts.
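A minimal sketch of the pipe mechanism mentioned first in the list above: the parent writes into the pipe and the child reads from it, with the unused ends closed so data flows in one direction only.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char buf[32];

        if (pipe(fds) == -1)         /* fds[0]: read end, fds[1]: write end */
            return 1;

        if (fork() == 0) {
            close(fds[1]);           /* child only reads */
            ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
            buf[n > 0 ? n : 0] = '\0';
            printf("child read: %s\n", buf);
            return 0;
        }
        close(fds[0]);               /* parent only writes */
        write(fds[1], "hello", strlen("hello"));
        close(fds[1]);
        wait(NULL);
        return 0;
    }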
10. Linux process scheduling mechanism
1) What is scheduling
Selecting the most appropriate process to execute from among the ready processes.
2) What knowledge points should be mastered in learning scheduling
- Scheduling policy
- Scheduling time
- Scheduling steps
3) Scheduling policy
SCHED_NORMAL: ordinary processes
SCHED_FIFO: first-in, first-out real-time processes
SCHED_RR: round-robin (time-slice rotation) real-time processes
4) Scheduler Class
It is divided into CFS scheduling class and real-time scheduling class.
- The CFS scheduling class is for ordinary processes; its approach is to abandon fixed time slices entirely and instead allocate each process a weighted proportion of the processor's time.
- The real-time scheduling class covers SCHED_FIFO and SCHED_RR.
SCHED_FIFO implements a simple first-in, first-out scheduling algorithm: it uses no time slices, a task can run continuously, and only a higher-priority SCHED_FIFO or SCHED_RR task can preempt a SCHED_FIFO task. If two or more SCHED_FIFO processes have the same priority, they execute in turn, but each still gives up the processor only when it is willing to do so.
SCHED_RR is much the same as SCHED_FIFO, except that a SCHED_RR process can no longer run once it has exhausted the time slice allotted to it in advance. A user-space sketch of selecting these policies follows.
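A hedged user-space sketch of selecting these policies with the standard sched_setscheduler() API (the priority value 50 is arbitrary, and real-time policies normally require root privileges):

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param param = { .sched_priority = 50 };

        /* Ask the kernel to run this process under the SCHED_FIFO policy. */
        if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
            perror("sched_setscheduler");
            return 1;
        }
        printf("now running with policy %d\n", sched_getscheduler(0));
        return 0;
    }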
5) Timing of scheduling
Direct call to schedule() in the kernel: when a process needs to wait for a resource and must temporarily stop running, it sets its own state to a wait state and actively requests scheduling, yielding the CPU. For example:

    current->state = TASK_INTERRUPTIBLE;
    schedule();
User preemption: when the kernel is about to return to user space, if the need_resched flag is set, schedule() is called and user preemption occurs.
Kernel preemption: as long as rescheduling is safe, the kernel may preempt the task currently executing at any time.
6) Scheduling steps
- Clean up the current running process
- Select the next process to run
- Set the run environment for a new process
- Process Context Switch
Linux Process Management issues
1. Why does a call to the fork() function return twice?
Because in do_fork()->copy_process()->copy_thread() the child's saved user-mode stack pointer is set to the same value as the parent's, both parent and child return from kernel mode to user mode at the same user-space address, namely the instruction right after the fork() call. The single fork() call therefore appears to return twice: once in the parent (with the child's PID as the return value) and once in the child (with a return value of 0).
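A small user-space sketch that makes the two returns visible: the same fork() call returns 0 in the child and the child's PID in the parent, and both continue from the same point in the program.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t ret = fork();      /* one call, two returns */

        if (ret == 0)
            printf("child : fork() returned %d, my pid is %d\n",
                   (int)ret, (int)getpid());
        else
            printf("parent: fork() returned %d (the child's pid)\n", (int)ret);

        if (ret != 0)
            wait(NULL);
        return 0;
    }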
2. Why does task_struct contain both mm and active_mm mm_struct members?
Because a kernel thread has no user-mode address space, its mm is set to NULL. However, the page-directory address is stored in the mm structure, and when switching from another process to such a kernel thread the scheduler still needs a page table to use, so active_mm was added: a kernel thread whose mm is NULL borrows the mm_struct of the previous process, i.e. its active_mm points to that process's mm structure, and process switching can then uniformly use active_mm. Does it matter that the borrowed page table belongs to some other process? No, because a kernel thread only accesses the kernel address space. (See the sketch after this answer.)
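A simplified sketch of the relevant logic in the scheduler's context_switch(), condensed from memory of 2.6-era kernels (details vary by version):

    /* Simplified: prev is the outgoing task, next the incoming one. */
    struct mm_struct *mm = next->mm;
    struct mm_struct *oldmm = prev->active_mm;

    if (!mm) {                          /* next is a kernel thread: no user mm   */
        next->active_mm = oldmm;        /* borrow the previous task's mm         */
        atomic_inc(&oldmm->mm_count);   /* keep the borrowed mm alive            */
        enter_lazy_tlb(oldmm, next);    /* no page-table switch is needed        */
    } else {
        switch_mm(oldmm, mm, next);     /* real user process: switch page tables */
    }

    if (!prev->mm) {                    /* prev was a kernel thread              */
        prev->active_mm = NULL;         /* drop the borrowed reference           */
        rq->prev_mm = oldmm;            /* mmdrop() happens after the switch     */
    }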
3. Consider two statements: 1) task_struct's mm member describes the 3 GB user-mode virtual address space; 2) a kernel thread can access the kernel address space by borrowing the page table in the mm of the last user process that ran. If both hold, can the mm member of task_struct also describe the 1 GB kernel address space? If not, why does statement 2 hold?
The mm member of task_struct does not describe the 1 GB kernel address space; rather, the mm member holds the page-directory information (pgd_t), and because all processes share the same 1 GB kernel-mode address space, the kernel portion of any user process's page table can be used to access the kernel address space, which is why the borrowing in statement 2 works.
4. Why do all processes share the 1 GB kernel-mode address space?
Because fork() copies the current process's task_struct and creates a new mm structure for the new process, and the page-directory entries covering the current process's 3 GB~4 GB kernel-mode virtual addresses are copied into the new process's page directory, every process ends up sharing the same 1 GB kernel-mode address space. For the user-mode virtual address ranges, by contrast, the new process's page-table entries are set read-only, so that when either process writes to them do_page_fault() allocates a new physical page and establishes a new mapping, implementing the COW mechanism.
5. The parent process requires a child process to send it a signal when the child exits; does it also require a child thread to send a signal when the thread exits? Why?
The parent process does not require a child thread to send a signal when it exits, because a child thread shares most of the parent's resources; there is no information the parent has to collect from it, so no signal is needed. This can be seen in do_fork()->copy_process():

    p->exit_signal = (clone_flags & CLONE_THREAD) ? -1 : (clone_flags & CSIGNAL);

and in do_exit()->exit_notify():

    if (tsk->exit_signal != -1 && thread_group_empty(tsk)) {
        int signal = tsk->parent == tsk->real_parent ? tsk->exit_signal : SIGCHLD;
        do_notify_parent(tsk, signal);
    } else if (tsk->ptrace) {
        do_notify_parent(tsk, SIGCHLD);
    }
6. Why does the child process become a zombie process if the parent does not call wait to reap it when the child exits?
The analysis is as follows. The kernel source contains the following code in do_exit()->exit_notify():

    state = EXIT_ZOMBIE;
    if (tsk->exit_signal == -1 &&
        (likely(tsk->ptrace == 0) ||
         unlikely(tsk->parent->signal->flags & SIGNAL_GROUP_EXIT)))
        state = EXIT_DEAD;
    tsk->exit_state = state;

This shows that if the exiting task is one that sends a signal to its parent on exit, its state is set to EXIT_ZOMBIE; otherwise it is set to EXIT_DEAD. A child process always sends a signal to its parent when it exits, so its state becomes EXIT_ZOMBIE. If the parent then calls wait to wait for the child to end, do_wait()->wait_task_zombie() sets the child's state to EXIT_DEAD, releases the child's kernel-stack resources, and finally releases its task_struct via put_task_struct(). Otherwise, the child remains a zombie process.