Linux Performance Optimization 1: Process-related Basics


1. Process-related knowledge points
1.1. What is a process?
A process can be seen as a running copy of a program: a process is an instance of a program in execution. A process can use any resource that the Linux kernel manages in order to complete its task.
1.2. How the process is managed
All processes running on a Linux system are managed through the task_struct structure, also known as the "process descriptor".
1.3. Process Descriptor Properties
A process descriptor contains the information a single process needs in order to run, such as the process identity, the process's attributes, and the resources allocated to the process.
1.4. Creating and ending child processes
When a process creates a new process, the creating process (the parent) issues a fork() system call. The parent then obtains a process descriptor for the newly created process (the child) and sets a new process ID, copying the values of its own process descriptor into the child's. The parent's entire address space is not copied at this point; the two processes share the same address space. The exec() system call then loads the new program into the child's address space. Because the two processes share the same address space, writing the new program's data causes page faults, and only then does the kernel assign new physical pages to the child. This deferred copying is called copy-on-write. Typically the child executes its own program rather than performing the same work as its parent. When its program finishes, the child is terminated by an exit() system call, which frees most of the process's data structures and sends a signal to notify the parent. At this point the process is called a zombie process. The child is not completely removed until the parent learns of its termination through the wait() system call, removes all of the child's remaining data structures, and releases its process descriptor.
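The fork/exit/wait lifecycle above can be sketched in Python (the os module wraps the same POSIX system calls; `run_child_lifecycle` is a hypothetical helper name, and exec() is only shown as a comment so the example stays self-contained):

```python
import os

def run_child_lifecycle():
    """Fork a child, let it exit, then reap it with wait() so no zombie is left."""
    pid = os.fork()             # parent receives the child's PID; the child receives 0
    if pid == 0:
        # Child: this is where exec() would normally load a new program, e.g.
        # os.execvp("ls", ["ls", "-l"]) -- until a write, pages are shared copy-on-write.
        os._exit(7)             # exit() frees most structures and signals the parent
    # Parent: between the child's exit() and our wait(), the child is a zombie.
    reaped, status = os.waitpid(pid, 0)
    return reaped == pid, os.WEXITSTATUS(status)

if __name__ == "__main__":
    print(run_child_lifecycle())
```

This only runs on POSIX systems (Linux, macOS), since Windows has no fork().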
1.5. Status of the process
· TASK_RUNNING (running state)
In this state, the process is either running on a CPU or waiting in the run queue.
· TASK_STOPPED (stopped state)
In this state, the process has been suspended by a signal such as SIGINT or SIGSTOP, and it waits for a resuming signal such as SIGCONT.
· TASK_INTERRUPTIBLE (interruptible sleep state)
In this state, the process is suspended and waits for a condition to be satisfied; it can be woken by a signal. For example, a process waiting for keyboard input.
· TASK_UNINTERRUPTIBLE (uninterruptible sleep state)
A process in this state does not respond to signals sent to it. For example, a process waiting for a disk I/O operation to complete.
· TASK_ZOMBIE (zombie state)
After a process exits through the exit() system call, its parent process should learn that it has terminated. In this state, the process waits for its parent to release all of its remaining data structures. A zombie process cannot terminate itself; it appears in the Z state, and the kill command cannot remove it because it is already considered dead. To get rid of it, you can kill its parent process. However, if the zombie's parent is the init process, the system must be restarted to remove it.
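The states above can be observed directly: on Linux, the `State:` field in /proc/&lt;pid&gt;/status shows the one-letter code. The sketch below (Linux-only; `proc_state` and `make_zombie` are hypothetical helper names) deliberately creates a short-lived zombie and reads its Z state before reaping it:

```python
import os
import time

def proc_state(pid):
    """Return the one-letter state code (R, S, D, T, Z) from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("State:"):
                return line.split()[1]

def make_zombie():
    """Fork a child that exits at once; observe its Z state, then reap it."""
    pid = os.fork()
    if pid == 0:
        os._exit(0)             # child terminates immediately
    time.sleep(0.2)             # give the child time to exit; it is now a zombie
    state = proc_state(pid)     # expected: "Z"
    os.waitpid(pid, 0)          # the parent's wait() removes the zombie
    return state
```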
1.6. Basic status of the process
A process changes state continually while it runs. Typically, a process has the following three basic states.
· Running (Running state)
When a process has obtained the CPU and its program is executing on the CPU, its state is called the running state.
· Ready (State of readiness)
When a process has been allocated all the necessary resources except the CPU, so that it could execute immediately if given the CPU, it is said to be in the ready state.
· Blocked (blocked state)
A process that is executing but cannot continue because it is waiting for some event is in the blocked state. Causes of blocking include I/O waits, buffer requests, and waiting for signals.

1.7. Memory segments of the process
Processes perform their work in their own memory address space. How that space is used depends on what the process is currently doing: different processes have different workloads and handle data of very different sizes. The memory area of a process consists of the following segments:
• Text Segment
This area is used to store executable code.
• Data segment
The data segment consists of three regions: data, which stores initialized data such as static variables; BSS, which stores uninitialized data, which is initialized to zero; and heap, the area from which malloc() dynamically allocates memory as needed.
• Stack Segment
This area stores local variables, function parameters, and function return addresses.
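On Linux, these segments can be seen in a process's memory map: /proc/&lt;pid&gt;/maps lists each mapping, and named regions such as [heap] and [stack] appear in the last column. A minimal sketch (Linux-only; `named_regions` is a hypothetical helper name):

```python
def named_regions(pid="self"):
    """Collect the named mappings (e.g. [heap], [stack]) from /proc/<pid>/maps."""
    names = set()
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 6:        # anonymous mappings have no name column
                names.add(fields[5])
    return names
```

The text and data segments show up as mappings of the executable file itself rather than as bracketed names.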
1.8. Priority and nice values for the process
The process priority is a number that determines the order in which the CPU handles processes; it is divided into static (real-time) priority and dynamic (non-real-time) priority. A process with a higher priority has a greater chance of being allowed to run on a processor. The highest static (real-time) priority, 99, corresponds to system priority 0, and the lowest static priority, 0, corresponds to system priority 99. The system does not change static (real-time) priorities dynamically. For dynamic (non-real-time) priorities, the kernel uses an algorithm based on the process's behavior and characteristics to adjust the priority up or down by up to 5. A process can change its static priority indirectly through its nice value. Linux supports nice values from 19 (lowest priority) to -20 (highest priority), with a default of 0. The smaller the nice value, the sooner the process is given the CPU.
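A process can adjust its own nice value with the nice() system call (wrapped by os.nice in Python). The sketch below (`lower_priority` is a hypothetical helper name) raises the nice value, which lowers the scheduling priority; note that an unprivileged process can only increase its nice value, never decrease it:

```python
import os

def lower_priority(increment=5):
    """Raise our nice value by `increment` (a higher nice value means lower priority)."""
    before = os.getpriority(os.PRIO_PROCESS, 0)   # 0 means "the calling process"
    os.nice(increment)                            # unprivileged: increase only
    after = os.getpriority(os.PRIO_PROCESS, 0)
    return before, after
```

From the shell, the same effect is achieved with the nice and renice commands.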
1.9. What is context switching
While the processor executes, information about the running process is stored in the processor's registers and caches; the set of data loaded into the registers for the executing process is called the context. During a switch, the context of the running process is saved, and then the context of the next process to run is restored to the registers. The process descriptor and the kernel-mode stack area are used to store the context. This switching is called a context switch. Too many context switches are undesirable, because each time the processor must flush its registers and caches to make room for the new process, which causes performance problems.
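Linux counts context switches per process in /proc/&lt;pid&gt;/status: voluntary switches happen when the process blocks (for example, sleeping or waiting for I/O), and nonvoluntary ones when the scheduler preempts it. A small sketch (Linux-only; `context_switches` and `switches_increase` are hypothetical helper names):

```python
import time

def context_switches(pid="self"):
    """Return (voluntary, nonvoluntary) context-switch counts from /proc/<pid>/status."""
    vol = nonvol = 0
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("voluntary_ctxt_switches:"):
                vol = int(line.split()[1])
            elif line.startswith("nonvoluntary_ctxt_switches:"):
                nonvol = int(line.split()[1])
    return vol, nonvol

def switches_increase():
    """Sleeping yields the CPU, so the voluntary counter should grow."""
    v1, _ = context_switches()
    time.sleep(0.05)            # blocking causes a voluntary context switch
    v2, _ = context_switches()
    return v2 > v1
```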
1.10. What is an interrupt
Interrupt handling is one of the highest-priority tasks. Interrupts are usually generated by I/O devices such as network interface cards, keyboards, and disk controllers. An interrupt is how such an event is signalled to the Linux kernel: it tells the kernel to interrupt process execution and perform interrupt handling as quickly as possible, because some devices need a fast response. When an interrupt signal reaches the kernel, the kernel must switch from the currently executing process to handle the interrupt, which means an interrupt causes a context switch. Interrupts fall into two categories: hard interrupts, which are generated by hardware devices and require a fast response, and soft interrupts, which handle work that can be deferred. In a multi-processor environment, interrupts are handled by each processor; binding an interrupt to a single processor can improve system performance.
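The per-CPU distribution of hard interrupts can be inspected in /proc/interrupts, where each row is an IRQ and each column a CPU. A minimal parsing sketch (Linux-only; `interrupt_counts` is a hypothetical helper name) sums each IRQ's counts across all CPUs:

```python
def interrupt_counts():
    """Parse /proc/interrupts into {irq_name: total count across all CPUs}."""
    totals = {}
    with open("/proc/interrupts") as f:
        n_cpus = len(f.readline().split())          # header row: CPU0 CPU1 ...
        for line in f:
            fields = line.split()
            irq = fields[0].rstrip(":")
            counts = [int(x) for x in fields[1:1 + n_cpus] if x.isdigit()]
            totals[irq] = sum(counts)
    return totals
```

Binding an interrupt to one processor is done by writing a CPU mask to /proc/irq/&lt;n&gt;/smp_affinity.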
1.11. What is a thread
A thread is an execution unit created within a process that runs in parallel with the other threads of the same process. Threads share the same resources, such as memory, address space, and open files, so they can access the same application data. Threads are also referred to as lightweight processes. Because they share resources, no two threads may change a shared resource at the same time, so mutual exclusion, locking, and serialization are mechanisms that the user application must implement.
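Because all threads see the same memory, updates to shared data need a mutex, as in this sketch (`shared_counter` is a hypothetical helper name; without the lock, concurrent increments could be lost):

```python
import threading

def shared_counter(n_threads=4, n_increments=10000):
    """Threads share the process's memory, so a shared counter needs a mutex."""
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(n_increments):
            with lock:                  # mutual exclusion around the critical section
                counter += 1

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                        # wait for every thread to finish
    return counter
```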
These notes were written after reading the book "Linux Performance Optimization Master" and adding my own understanding of its knowledge points. Any similarity to other summaries is purely coincidental. If you hold rights to this material, please notify me and I will delete it.
