Linux kernel design and implementation details


4.3.4 The Scheduling Policy in Action

Imagine a system with two runnable processes: a text editor and a video encoder. The text editor is clearly I/O-bound, because it spends nearly all of its time waiting for the user's keystrokes (no matter how fast the user types, the input can never keep up with the processor). Yet when a key is pressed, the user expects the system to respond instantly. The video encoder, by contrast, is processor-bound. Apart from reading the raw data stream from disk at the start and writing the encoded result at the end, it spends all of its time encoding the raw data, happily consuming 100% of the processor. It has no strict timing requirements: the user can hardly tell, and scarcely cares, whether it starts running immediately or half a second later. Of course, the sooner it finishes the better, but how long it takes to get scheduled is not a primary concern.

In such a scenario, the ideal scheduler gives the text editor more attention than the video encoder, because the editor is interactive. For the text editor, we have two goals. First, we want it to have plenty of processor time available; not because it needs much (it does not), but because we want the processor to be available the instant it is needed. Second, we want the text editor to preempt the video encoder the moment it wakes up (that is, when the user presses a key). This ensures the editor delivers good interactive performance and responds promptly to user input. Most operating systems achieve these goals by assigning the text editor a higher priority and a larger timeslice than the video encoder, and sophisticated systems can even detect that the editor is interactive and perform those adjustments automatically. Linux pursues the same goals, but with a different approach: rather than assigning the text editor a particular priority and timeslice, it guarantees the editor a specific proportion of the processor. If the text editor and the video encoder are the only two running processes and share the same nice value, each is guaranteed 50% of the processor; they split the processor's time between them. But because the text editor spends most of its time blocked waiting for input, it will certainly use far less than 50%. Consequently the video encoder is free to consume more than its 50% share, allowing it to finish the encoding job more quickly.

The key question is what happens when the text editor wakes up. Our primary goal is to ensure it runs as soon as user input arrives. In the scenario above, when the editor wakes, CFS notices that it is entitled to 50% of the processor but has actually used far less; specifically, CFS sees that the editor has run for much less time than the video encoder. To honor its promise that all processes share the processor fairly, CFS immediately preempts the video encoder and runs the text editor. The editor runs, quickly processes the user's keystroke, and goes back to sleep to await the next one. Because the editor never consumes its promised 50% of the processor, this situation persists: CFS never hesitates to let the text editor run the moment it is needed, leaving the video encoder to run the rest of the time.

4.4.2 Process Scheduling in Unix Systems

First, mapping nice values to timeslices requires tying each nice value to an absolute amount of processor time, and doing so leads to suboptimal switching behavior. For example, suppose we give a process with the default nice value (0) a timeslice of 100 ms, and a process with the maximum nice value (+20, the lowest priority) a timeslice of 5 ms. Assume both processes are runnable. The default-priority process then receives 20/21 of the processor's time (100 ms out of every 105 ms), while the low-priority process receives 1/21 (5 ms out of 105 ms). We could have chosen any numbers for this example, but these make the point most clearly, so we use them. Now consider what happens if we instead run two of the low-priority processes. We would expect each to receive half the processor time, and indeed each does; but each runs for only 5 ms at a time (half of each 10 ms period). That is, where the previous example incurred one context switch every 105 ms, we now perform two context switches every 10 ms. Conversely, two normal-priority processes also each receive 50% of the processor, but in 100 ms chunks. Clearly, neither of these timeslice allotments is ideal: each is an artifact of mapping nice values to fixed timeslices. In practice, a process with a high nice value (low priority) is most often a background, computationally intensive task, while normal-priority processes tend to be foreground user tasks; so this timeslice allotment is exactly backward from what is wanted.

The second problem concerns relative nice values, and it likewise stems from the nice-to-timeslice mapping. Say we have two processes whose nice values differ by one: the first has a nice value of 0, the second a nice value of 1. These map to timeslices of 100 ms and 95 ms respectively (the O(1) scheduler did exactly this). Their timeslices are nearly identical; the difference is negligible. But if the processes instead have nice values of 18 and 19, they map to timeslices of 10 ms and 5 ms: now the former receives twice as much processor time as the latter! Yet nice values are normally used relatively (the nice() system call increments or decrements the current value rather than setting an absolute one), which means the effect of "nicing a process down by one" depends enormously on its starting nice value.

The third problem is that a nice-to-timeslice mapping requires the ability to assign an absolute timeslice, and that absolute timeslice must be measurable in units the kernel can account for. In most operating systems, this means the timeslice must be an integer multiple of the timer tick. This introduces several problems. First, the minimum timeslice has a floor of one timer tick period, which might be 10 ms or 1 ms depending on the configuration. Second, the system timer limits the difference between two timeslices: successive nice values may map to timeslices that differ by as much as 10 ms or as little as 1 ms. Finally, timeslices change whenever the timer tick frequency changes.

The fourth and final problem concerns the wakeup boosting that priority-based schedulers use to optimize interactive tasks. In such a system, the scheduler may give freshly woken processes a priority boost and run them immediately, even if their timeslice is exhausted. While this does improve interactive performance in many cases, it also opens a backdoor: certain sleep/wakeup patterns can game the scheduler, allowing a given process to break the fairness principle and obtain more than its share of processor time at the expense of the other processes in the system.
