Introduction to Linux Kernel Engineering -- Process


The earliest operating systems had no concept of a process: each was effectively a single-process system, though nobody at the time saw it that way. What could be more natural than one board running one code pipeline? But as business logic grew more complex, people increasingly wanted one board to do several things at once, so they looked for ways to simulate multiple pieces of code executing at the same time. "Simulate", because CPUs were genuinely single-core then, and having one CPU execute several instruction streams simultaneously is beyond its physical limits. The only option was to slice the CPU's time into segments and create an illusion of parallelism. Even now that physical multicore exists, time slicing is still used to create more apparent parallelism than there are CPUs.

Meeting this demand brought both benefits and costs. The benefit is obvious: many real applications simply could not exist without it. And since the demand had to be met, the side effects, however large, had to be overcome; demand is, after all, what keeps a technology alive. The two biggest side effects are resource contention and the scheduling of execution entities. If you want a CPU that can only run one instruction stream to simulate running many, you must design higher-level concepts and then allocate the CPU to them by time; the CPU itself becomes a resource. Many such higher-level concepts have been designed: the process, the thread, the workqueue, the tasklet, the softirq. These are the code-pipeline concepts still in use in Linux. In the current kernel, processes, threads, workqueues, tasklets, softirqs and so on all take part in the scheduling algorithm, because whatever is not scheduled cannot be executed by the CPU.
These code-flow concepts serve different purposes: softirqs and tasklets are generally used for interrupt handling, workqueues are generally used by drivers, and processes and threads are generally used for user space. In the actual implementation, a softirq, tasklet or workqueue can be wrapped in a kernel thread, so the scheduler only needs to understand one structure, the thread, which keeps the algorithm's logic simple. But that is merely how it happens to be implemented; one could equally make the scheduler understand several different code-flow concepts and treat each differently. When we write user-space code we see only processes and threads. That is because user space is an interface product of the kernel: what a user-space programmer knows of the world is exactly what the kernel wants him to know, much as a citizen knows a country through its institutions. User space sees processes and threads, and a good programmer can clearly state the difference between them. In kernel space, however, a thread and a process are barely different; the distinction visible from user space is largely manufactured. It is like a film set in the Han dynasty: what is real, just outside the frame, is a modern camera and a director.

Since the concept of the process won out, we study the winner. To describe the modern process we must describe the costs that had to be paid for it: process scheduling, resource contention, and how the process illusion itself is manufactured. There is one more topic that is not so much a cost as an integral part of the process concept, and it deserves separate discussion: inter-process communication.

Overview of process scheduling

Linux is a multi-process environment: not only can user space have many processes, the kernel can have kernel processes of its own. Threads in the Linux kernel are no different from processes, so the words thread and process are used interchangeably here. The scheduler assigns CPU resources to specific processes according to specific rules; a process then occupies the CPU and uses it to request or operate on hardware and other resources. Several questions follow:

For the Scheduler:

    • When the scheduler runs, how does it decide which process gets the CPU next?
    • How does it keep any one process from starving?
    • How does it identify interactive processes and respond to them faster?
    • A single CPU has only one pipeline, but can more than one process be scheduled at once to use the physical resources of multiple CPUs?
    • How is a scheduled process made to release the CPU? Does it yield voluntarily, or is there a reclaim mechanism?

For a process that wants to be dispatched:

    • How does it influence its own probability of being scheduled?
    • How can it receive a signal while waiting to be scheduled?
    • How does it keep the resources it holds from being used by other processes while it is not running? In an SMP environment, how is it guaranteed that no two processes use the same resource at the same time?

Scheduling policy

Scheduling systems come in two kinds: time-sharing and real-time. Linux itself is not a real-time system, but in keeping with its inclusive design it also implements the real-time interfaces.

For the kernel as a whole there are four scheduling policies: SCHED_NORMAL, SCHED_FIFO, SCHED_RR and SCHED_BATCH. There are two further standard policies that Linux does not implement: SCHED_IDLE and SCHED_DEADLINE. SCHED_NORMAL is the default policy, the time-sharing one we use most often.

A SCHED_IDLE process would execute only when no non-SCHED_IDLE process is runnable. This level is typically meant for background, time-insensitive operations that must not affect the user, such as disk grooming. But the Linux kernel does not implement it.

SCHED_NORMAL is completely fair scheduling, optimized for those two concerns: fairness and user interactivity. It is the one we use in the general case, and it adjusts priorities dynamically on the user's behalf.

Whether real-time or normal, priority is expressed as a number. All normal processes have static priority 0; the scheduler distinguishes between them using dynamic priority. Real-time priorities run from 1 to 99, which means any real-time process outranks every normal process.

SCHED_RR uses time slices: even the highest-priority process releases the CPU when its slice is exhausted. Under SCHED_FIFO, the process with the highest priority never releases the CPU unless it yields voluntarily or blocks (for example, waiting for I/O completion). Both are preempted when a higher-priority process appears.

If, as said above, SCHED_IDLE is not implemented, how does Linux run background disk grooming and similar work? The answer is the functionally similar SCHED_BATCH policy. A SCHED_BATCH process does not stop completely while normal programs run, but it gives way enough to preserve the execution of normal programs and the responsiveness of interactive ones. It also suits batch jobs such as compiling with gcc.

Configuring the process scheduling policy

You can set the scheduling policy through the API the kernel provides, or from the command line with chrt. You can also cap the CPU time of real-time processes: if a real-time process has a bug, a highest-priority process that never releases the CPU will hang the system. Parameters such as kernel.sched_rt_period_us can be set via sysctl to bound the maximum CPU usage of real-time processes.
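As a sketch of the API side of this, the snippet below queries the static priority ranges and attempts a policy switch via `sched_setscheduler()` (Linux `sched(7)` interfaces; the helper function names are mine, and the switch is expected to fail without root/CAP_SYS_NICE):

```c
#include <sched.h>
#include <string.h>

/* Query the static priority range of a scheduling policy.
 * On Linux, SCHED_FIFO and SCHED_RR span 1..99; SCHED_OTHER is 0. */
int policy_prio_min(int policy) { return sched_get_priority_min(policy); }
int policy_prio_max(int policy) { return sched_get_priority_max(policy); }

/* Attempt to switch the calling process to SCHED_FIFO at `prio`.
 * Returns 0 on success; without CAP_SYS_NICE this fails with EPERM. */
int try_fifo(int prio) {
    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = prio;
    return sched_setscheduler(0, SCHED_FIFO, &sp);
}
```

From the shell, the equivalent of `try_fifo(10)` for a new command is `chrt -f 10 <command>`.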

Combining cgroups with process scheduling, you can also apportion CPU resources per cgroup. This too is done through the cgroup filesystem.

Kernel infrastructure that takes part in scheduling

Much of the kernel's work is carried out by infrastructure mechanisms such as workqueues, tasklets and softirqs. Each is designed to accomplish particular kinds of task, and since the work must actually execute, it must be scheduled; and the units the scheduler dispatches are kernel threads. So while these mechanisms present themselves to the user as call-style interfaces, their work is executed by dedicated kernel daemon threads.

Soft interrupts, tasklets and workqueues

Linux interrupt handling is split into a top half and a bottom half. The top half runs with interrupts disabled and does no more than produce work for the bottom half; the bottom half runs with interrupts enabled and can be scheduled for execution. The reason for the split is that interrupts-disabled time must be kept short, or the system will miss events. The soft interrupts raised by the top half are added to the queue of the kernel daemon thread ksoftirqd, which then arranges for the pending soft interrupts to run. Tasklets are similar to soft interrupts, except that on an SMP system a softirq can run on multiple CPUs at once and therefore must be reentrant, while a given tasklet runs on only one CPU at a time and need not be. Users choose tasklet or softirq depending on whether the handler may safely re-enter.

Notably, softirqs and tasklets cannot sleep, so they must not use semaphores or other blocking calls: they are executed by one kernel thread (ksoftirqd), and if it blocks, the system cannot respond to other soft interrupts. The workqueue, by contrast, is itself offered to the user as a usable unit, and a workqueue is backed by a kernel thread. A kernel module can create its own workqueue and add its tasks to it, or add tasks to a workqueue the kernel already provides. In other words, a workqueue is a container: modules queue work into it, and when the workqueue's thread is scheduled it performs the queued sub-tasks. It can be said to run in process context.

Resource lock

The kernel's resource locks include: the spinlock, the semaphore, the mutex, the read/write lock (rwlock), the sequence lock (seqlock), RCU, and the futex.

These locks exist to solve different classes of problem:

• Multiple CPUs concurrently accessing the same resource from soft-interrupt context. Because a soft interrupt cannot sleep, contention there cannot use the sleeping locks; the only option is to busy-wait. This is the spinlock.

• Ordinary processes competing for a resource that only one (or a few) of them may hold at a time, whether reading or writing. This is the mutex and the semaphore (a semaphore whose count is 1 behaves as a mutex).

• Mutual exclusion that is rarely contended, where you do not want to enter the kernel on every operation. This is the futex.

• The same resource where reads and writes should be treated differently. This is the read/write lock, the seqlock, and RCU.
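To make the semaphore-with-count-1 point concrete, here is a user-space sketch with POSIX semaphores standing in for the kernel's (the demo function and its return convention are mine):

```c
#include <semaphore.h>
#include <errno.h>

/* A counting semaphore initialized to 1 behaves as a mutex: only one
 * holder at a time. sem_trywait() fails with EAGAIN when the "mutex"
 * is already held. Returns 1 if the behavior matches. */
int sem_as_mutex_demo(void) {
    sem_t m;
    if (sem_init(&m, 0, 1) != 0) return -1;  /* count = 1: binary semaphore */
    if (sem_trywait(&m) != 0) return -2;     /* first acquire: succeeds     */
    int second = sem_trywait(&m);            /* second acquire: must fail   */
    int err = (second == -1) ? errno : 0;
    sem_post(&m);                            /* release                     */
    int third = sem_trywait(&m);             /* can be taken again          */
    sem_post(&m);
    sem_destroy(&m);
    return (second == -1 && err == EAGAIN && third == 0) ? 1 : 0;
}
```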

Different locks serve different purposes and scenarios. Linux applies only part of the body of ideas about resource locking; operating-system theory is a discipline of its own, with many more approaches to the problem.

Resource locking is at bottom a matter of synchronization and mutual exclusion, and as the list above shows, most of it deals with concurrent writes. If the compare-and-write step is guaranteed atomic, a thread can often do without a lock altogether. Intel implements instructions for exactly this, such as CMPXCHG8B, which perform the comparison and the write as a single atomic operation so that no concurrent write conflict can occur.
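The compare-and-write idea can be shown with C11 atomics, whose compare-exchange maps onto instructions of the CMPXCHG family (the helper names here are mine):

```c
#include <stdatomic.h>

/* Lock-free increment built on compare-and-swap: retry until the
 * exchange observes an unchanged value and commits the new one. */
void cas_increment(atomic_int *v) {
    int old = atomic_load(v);
    while (!atomic_compare_exchange_weak(v, &old, old + 1)) {
        /* on failure, `old` is refreshed with the current value; retry */
    }
}

int cas_demo(void) {
    atomic_int v;
    atomic_init(&v, 5);
    cas_increment(&v);
    cas_increment(&v);
    return atomic_load(&v);   /* 7 */
}
```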

In the same spirit, Linux provides two sets of atomic operations, one for integers and one for bits. Used well, atomics avoid the need for a lock in most scenarios. A spinlock looks expensive, since while it is held another CPU may sit idling in a busy-wait, but when the locked region is very small the lightweight spinlock costs far less than a semaphore. Spinlocks are therefore used not only in soft interrupts but wherever a very short stretch of code must be locked.
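A minimal user-space spinlock in the same spirit (a sketch built on C11 atomics, not the kernel's implementation):

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { atomic_flag locked; } spin_t;

/* Busy-wait until the flag is clear, then own it. */
void spin_lock(spin_t *s) {
    while (atomic_flag_test_and_set_explicit(&s->locked, memory_order_acquire))
        ;   /* spin: the "idling wait" described in the text */
}

void spin_unlock(spin_t *s) {
    atomic_flag_clear_explicit(&s->locked, memory_order_release);
}

bool spin_trylock(spin_t *s) {
    return !atomic_flag_test_and_set_explicit(&s->locked, memory_order_acquire);
}

int spin_demo(void) {
    spin_t s = { ATOMIC_FLAG_INIT };
    spin_lock(&s);
    bool reentry = spin_trylock(&s);   /* false: already held */
    spin_unlock(&s);
    bool again = spin_trylock(&s);     /* true: free again    */
    spin_unlock(&s);
    return (!reentry && again) ? 1 : 0;
}
```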

Besides the spinlock there is another lock that involves waiting: the sequence lock. Strictly speaking it is not busy-waiting; it rests on a clever and very simple idea. A reader reads the lock's sequence value before and after reading the data; if the value has not changed, no write happened during the read and the data is good; otherwise the reader rereads. A writer changes the sequence value around its write. The effect is comparable to a spinlock, except that writers are never blocked, and a reader simply retries until it obtains a value untouched by any write.
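The sequence-value protocol can be sketched in user space (simplified: a single writer, and no memory-ordering subtleties beyond the atomic counter; the type and function names are mine):

```c
#include <stdatomic.h>

/* Seqlock sketch: the writer bumps the sequence to an odd value before
 * writing and to an even value after; a reader retries whenever the
 * sequence was odd or changed across its read. */
typedef struct { atomic_uint seq; int data; } seq_t;

void seq_write(seq_t *s, int v) {
    atomic_fetch_add(&s->seq, 1);   /* odd: write in progress */
    s->data = v;
    atomic_fetch_add(&s->seq, 1);   /* even: stable again     */
}

int seq_read(seq_t *s) {
    unsigned before, after;
    int v;
    do {
        before = atomic_load(&s->seq);
        v = s->data;
        after = atomic_load(&s->seq);
    } while ((before & 1) || before != after);  /* retry on torn read */
    return v;
}

int seq_demo(void) {
    seq_t s;
    atomic_init(&s.seq, 0);
    s.data = 0;
    seq_write(&s, 42);
    return seq_read(&s);   /* 42 */
}
```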

When a large block of logic must be locked, the heavyweight semaphore is needed. In general, though, big locks should be avoided, and in practice they usually can be, through fine-grained design.

RCU goes further and does not block writes at all. The seqlock is already an improved read/write lock, but it still admits only one writer at a time. Under RCU a write is not blocked either: instead of writing to the same place, the writer writes a fresh copy of the data, while readers continue to read the old copy. The cost is extra memory; the gain is that neither reads nor writes block.
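A greatly simplified user-space illustration of the copy-then-publish idea (real RCU also waits out a reader grace period before freeing the old copy, which is omitted here; all names are mine):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* The writer never modifies the live object: it builds a copy and
 * atomically swings the shared pointer. Readers dereference the
 * pointer and see either the old or the new object, never a
 * half-written one. */
typedef struct { int a, b; } config_t;

static _Atomic(config_t *) live;   /* starts NULL */

config_t *rcu_read(void) { return atomic_load(&live); }

/* Publish a new version; returns the old one for the caller to free
 * (in real RCU, only after a grace period). */
config_t *rcu_update(int a, int b) {
    config_t *fresh = malloc(sizeof *fresh);
    fresh->a = a;
    fresh->b = b;
    return atomic_exchange(&live, fresh);
}

int rcu_demo(void) {
    free(rcu_update(1, 2));             /* first publish; old was NULL */
    config_t *r = rcu_read();
    int ok = (r->a == 1 && r->b == 2);
    free(rcu_update(3, 4));             /* readers switch atomically   */
    r = rcu_read();
    return (ok && r->a == 3 && r->b == 4) ? 1 : 0;
}
```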

There is also a lock, the futex, used only by user-space processes; it can replace essentially all of user space's other locks, because it is fast and its behavior fits the need. The insight behind the futex is this: a semaphore or similar lock is a variable inside the kernel, so every query means entering the kernel and coming back out. The futex instead places the lock variable in memory mapped into the user process's address space, so each process can inspect the value directly in its own space, without entering the kernel, to learn whether anyone holds it. Reading is easy for everyone; for writes, since several processes may operate on the variable, Linux provides an API that enters the kernel to perform the update under lock. The final acquisition may still fall into the kernel, but the check can be completed without entering it, and in most cases the check finds the resource uncontended (special application scenarios aside).
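The kernel half of this can be observed directly: FUTEX_WAIT refuses to sleep when the futex word no longer holds the expected value, which is what lets the user-space fast path skip the kernel entirely (Linux-specific; raw `syscall(2)` is used since glibc exposes no futex wrapper):

```c
#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <errno.h>

/* FUTEX_WAIT(addr, expected) only sleeps if *addr == expected;
 * otherwise it returns immediately with EAGAIN. Returns 1 if that
 * immediate-return behavior is observed. */
int futex_wait_mismatch(void) {
    int word = 1;   /* the lock word, in plain user memory */
    long r = syscall(SYS_futex, &word, FUTEX_WAIT,
                     0 /* expected value */, NULL, NULL, 0);
    return (r == -1 && errno == EAGAIN) ? 1 : 0;
}
```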

Semaphores have a further problem: if multiple CPUs take the read side, the semaphore's state bounces between each CPU's cache, being constantly refreshed, and efficiency drops. The kernel's answer is a new semaphore type: the percpu-rw-semaphore.

Mutual exclusion and synchronization

The concepts of mutual exclusion and synchronization must be kept apart. Mutual exclusion means only one process at a time may access the resource, with no notion of ordering; synchronization adds an ordering among the accessing processes: "only after you finish is it my turn". Mutual exclusion says merely "while you are not finished, I cannot start". Semaphores are synchronization in concept, because a process that fails to get the resource sleeps and is woken in turn. The other kernel locks are mutual exclusion (spinlocks, seqlocks), because they block, or they sidestep the issue by always being available (RCU).

SMP Lock and Preemptive lock

A resource can be contended in two ways: concurrent access from multiple CPUs on an SMP system, and preemptive access on a single CPU. Most code, when developed, uses the same locks for both. But the two cases have different characteristics, and in many single-CPU cases a preemption lock is a much lighter tool.

preempt_enable(), preempt_disable(), preempt_enable_no_resched(), preempt_count(), preempt_check_resched(): with these functions you can complete the locking needed on a single CPU, with no other kind of lock required.

Priority lock

The futex is a good choice for user-space locking, but user processes have different priorities and the plain futex ignores them all; and while the semaphore can express synchronization, this lock cannot order waiters by priority. Sometimes you want the lock to honor process priority, and that is the function the PI-futex provides: priority inheritance. It is implemented on top of the futex, with the added step of consulting process priorities when deciding who acquires the lock next. Enabling this feature noticeably reduces efficiency.

SMP handling of spinlocks

When many processes spin-wait on one spinlock, a spinner can tell that things are too busy: it sees that the owner of the spinlock has changed, yet ownership has still not come around to itself. At that point it should sleep rather than continue to spin.

There are also the local/global lock pairs: lg_local_lock and lg_global_lock.

Multi-process (thread)

The Linux kernel draws no distinction between threads and processes: if you want a thread that is a separately schedulable unit, it must correspond to a process in the kernel. Normally the resources one process can access are invisible to every other process, yet user-space multithreaded programming requires sharing. The Linux kernel solves this with a mechanism that lets a process specify, at creation time, which resources it will share with other processes; this is how the multithreaded environment is simulated. Newer kernels can not only share resources but also cancel sharing afterwards with the unshare system call, which means the kernel lets a user-level thread break away from its process and stand alone, right down at the bottom.
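The share-at-creation mechanism is the flags argument of `clone()`. A sketch (Linux-specific; CLONE_VM shares the address space, which is exactly what makes the child behave as a "thread" here):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>

/* With CLONE_VM the child runs in the parent's address space, so its
 * write to `shared` is visible to the parent; without it, the child
 * would get a fork-style copy and the parent would still see 0. */
static int shared = 0;

static int child_fn(void *arg) {
    (void)arg;
    shared = 42;   /* same address space as the parent */
    return 0;
}

int clone_vm_demo(void) {
    size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    if (!stack) return -1;
    /* The stack grows down on most architectures: pass its top. */
    pid_t pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
    if (pid == -1) { free(stack); return -1; }
    waitpid(pid, NULL, 0);
    free(stack);
    return shared;
}
```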

Process resource limits

There is a large class of requirements for restricting the resources available to a process. CPU, memory, files, behavior, and even system calls can all be limited.
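One classic member of this class is the rlimit interface. The sketch below lowers the soft limit on open file descriptors, reads it back, and restores the original (the probe helper and its return convention are mine):

```c
#include <sys/resource.h>

/* Lower the soft RLIMIT_NOFILE, read it back, then restore the
 * original limits. Returns the value observed while lowered, or -1
 * on error. Lowering a soft limit needs no privilege. */
long probe_nofile_soft(rlim_t n) {
    struct rlimit orig, rl;
    if (getrlimit(RLIMIT_NOFILE, &orig) != 0) return -1;
    rl = orig;
    if (n > rl.rlim_max) n = rl.rlim_max;   /* soft may not exceed hard */
    rl.rlim_cur = n;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) return -1;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) return -1;
    long seen = (long)rl.rlim_cur;
    setrlimit(RLIMIT_NOFILE, &orig);        /* restore */
    return seen;
}
```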

System call limits: seccomp filter

The seccomp filter feature restricts which system calls are visible to a process.
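Seccomp's oldest and simplest form, strict mode, can be demonstrated without the filter machinery: it allows only read/write/exit/sigreturn and kills the task with SIGKILL on anything else. Sketched below in a forked child so the parent survives to observe the outcome (the demo function is mine; some sandboxed environments refuse PR_SET_SECCOMP, which the -1 path reports):

```c
#include <fcntl.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <unistd.h>
#include <linux/seccomp.h>

/* Returns 1 if the child was SIGKILLed for a forbidden syscall,
 * -1 if strict seccomp could not be enabled here, 0 otherwise. */
int seccomp_strict_demo(void) {
    pid_t pid = fork();
    if (pid == 0) {
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
            _exit(2);                   /* seccomp unavailable here   */
        open("/dev/null", O_RDONLY);    /* forbidden -> SIGKILL       */
        _exit(0);                       /* not reached                */
    }
    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL) return 1;
    if (WIFEXITED(status) && WEXITSTATUS(status) == 2) return -1;
    return 0;
}
```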

Manufacturing the process phenomenon

We know the process is a manufactured concept. How, then, does Linux manufacture it? Scheduling and resource contention were addressed above; but what exactly is the thing being scheduled? The discussion is arranged this way because everyone already has some notion of a process, so what follows aims more at deepening that notion than at introducing it from scratch.

Suppose we were writing a flight scheduler for an airport. For a series of reasons (runway availability, weather, orders from above) we dispatch different aircraft to take off from different runways. We are scheduling aircraft; how do we represent an aircraft in the program? Necessarily as a struct (in C++, perhaps a class). It is then easy to see what the process-scheduling algorithm schedules: also a structure. That structure is task_struct, a very large one. When the scheduling algorithm runs, its output is a task_struct (the current macro), and once the algorithm finishes, the CPU executes code at the location described by that output structure.
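To make the analogy concrete, here is a toy "task structure plus pick-next" in C (purely illustrative: the real task_struct has hundreds of fields, and the real algorithm is far richer; the least-virtual-runtime rule is loosely CFS-flavoured):

```c
#include <stddef.h>

/* What the scheduler hands out is a pointer to a task structure. */
struct task {
    int pid;
    unsigned long vruntime;   /* CPU time consumed so far */
    int runnable;
};

/* Pick the runnable task that has consumed the least virtual runtime. */
struct task *pick_next(struct task *rq, size_t n) {
    struct task *next = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!rq[i].runnable)
            continue;
        if (next == NULL || rq[i].vruntime < next->vruntime)
            next = &rq[i];
    }
    return next;   /* plays the role of `current` after the switch */
}

int pick_demo(void) {
    struct task rq[] = {
        { 1, 50, 1 },   /* runnable, has run a lot     */
        { 2, 10, 1 },   /* runnable, has run little    */
        { 3,  5, 0 },   /* lowest vruntime, but asleep */
    };
    struct task *t = pick_next(rq, 3);
    return t ? t->pid : -1;   /* 2 */
}
```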

Any modern program needs the concept of a stack, whose greatest use is in function calls and returns (it is, in truth, a by-product of how functions are achieved). A stack has a size, an organization, a current position, and well-defined push and pop operations. Have you ever asked who designed and implemented these properties and methods? The answer, naturally, is the kernel. With no process concept, only one stack is needed: the one the kernel code runs on. With the process concept, a separate stack must be prepared for each process, and only the kernel itself can do that work.

Is realizing the process concept just the work of designing and maintaining stacks? Hardly. How do you locate each task_struct efficiently? By number, naturally, hence the concept of the PID. When the scheduler switches a process off the CPU, what happens to the values it had in the registers? The only workable design is save-and-restore, hence the concept of process context. How should a process, as an entity, relate to other processes? Hence the process family tree. How is a process created? How does it end? Each question brings new concepts, and the cost of all of it is borne by the kernel.

To be exact, an introduction to processes in the kernel is really about how the kernel pays the series of costs that the process concept introduces. The concept of the process itself is the same on every operating system, because it is only a conceptual model that exists in theory.
