Linux Scheduler - Deadline Scheduler


I. Overview

A real-time system is a computing system that must respond to an event within a defined time frame. In a real-time system, producing the correct result depends not only on the logical correctness of the computation, but also on the time at which the result is delivered. In other words, when the system receives a request and acts in response, correctness has two sides: the logical result must be right and, just as importantly, it must be produced before a deadline. If the system fails to respond within the deadline, the system has failed, even if the logical result is correct. In a multitasking operating system such as Linux, a real-time scheduler coordinates the access of real-time tasks to the CPU, ensuring that every real-time task in the system completes before its deadline.

If we abstract a real-time task, it can be described by three elements: a period, a runtime, and a deadline. The deadline scheduler takes advantage of this abstraction: it lets the user state each task's exact requirements, so the system can make the best possible scheduling decisions and guarantee the real-time tasks even on a heavily loaded system.

II. The Real-Time Schedulers in Linux

What is the difference between a real-time task and an ordinary (non-real-time) task? A real-time task has a deadline; once the deadline is missed, it can no longer produce a correct result. An ordinary task has no such constraint. To meet the scheduling needs of real-time tasks, Linux provides two real-time schedulers: the POSIX realtime scheduler (hereafter the RT scheduler) and the deadline scheduler (hereafter the DL scheduler).

The RT scheduler offers two scheduling policies: FIFO (first-in, first-out) and RR (round-robin). Under both policies the RT scheduler dispatches according to the task's real-time priority (the rt_priority member of the Linux process descriptor): the task with the highest priority gets the CPU first. In real-time theory this kind of scheduler is classified as a fixed-priority scheduler, since each RT task is assigned a fixed priority. FIFO and RR behave identically when priorities differ; the difference between them only appears between tasks of the same priority. With FIFO, the task that becomes runnable first acquires the CPU and keeps it until it blocks (for example, goes to sleep). With RR, tasks of the same priority share the processor in rotation: once an RR task starts running, it keeps running (unless it blocks) until its time slice expires. When the time slice is used up, the scheduler moves the task to the tail of its run list (note that only tasks of the same priority share a list; different priorities live on different lists) and picks the next task from the head of that list to execute.

Unlike the RT scheduler, the DL scheduler schedules according to each task's deadline (as the name suggests). At every scheduling point, the DL scheduler selects the task whose deadline is closest to the current point in time and dispatches it. A scheduler always works from the task's configured parameters; for the RT scheduler, the user configures the task's scheduling policy (FIFO or RR) and its fixed real-time priority. For example:

chrt -f 10 Video_processing_tool

With the above command, the Video_processing_tool task is placed under the RT scheduler's management with real-time priority 10 and the FIFO policy (the -f option).
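The same thing can also be done from inside the program through the POSIX scheduling API. The following is only a minimal sketch (not part of the original discussion) that puts the calling process under the FIFO policy with priority 10 using sched_setscheduler():

#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Fixed real-time priority 10, as in the chrt example above. */
    struct sched_param param = { .sched_priority = 10 };

    /* Put the calling process under the RT scheduler's FIFO policy.
     * Like chrt, this requires root or CAP_SYS_NICE. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }

    /* ... the real-time work of the tool would run here ... */
    return 0;
}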

For the DL scheduler, the user sets three parameters: the period, the runtime, and the deadline. The period follows from the working pattern of the real-time task. For example, a video processing task whose main job is to handle 60 frames of video per second must process one frame every 1/60 s, i.e. about every 16.7 ms, so the period of the task is roughly 16.7 ms.

A real-time task always has a fixed piece of "work" to do in each period; for the video task, the job is to process one frame of video data. The runtime is the CPU execution time needed to complete that work, i.e. the amount of CPU time required within one period. We must not be optimistic when setting the runtime parameter: it has to be based on the worst-case execution time (WCET). In video processing, for example, frames are not all alike (the correlation between frames differs, and even within one frame the correlation between pixels differs), so some frames take longer to process and some take less. If the longest frame takes 5 ms to process, the runtime should be set to 5 ms.

Finally, the deadline parameter. Within each period of a real-time task, the deadline defines the point by which the result of the work must be delivered. Taking the video task again: within one frame period of about 16.7 ms, if the task has to hand the processed frame to the next module within the first 10 ms of the period, then the deadline parameter is 10 ms. To meet this requirement, the frame obviously has to be fully processed within the first 10 ms of the period; in other words, the 5 ms of runtime must be granted within the first 10 ms of each period.
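Putting the three parameters of the video task together (this is just a restatement of the values above):

runtime <= deadline <= period, i.e. 5 ms <= 10 ms <= 16.7 ms

The kernel enforces exactly this ordering (sched_runtime <= sched_deadline <= sched_period) when a deadline task is configured.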

With the chrt command we can set the deadline scheduling parameters. The video task above, for example, could be configured as follows:

chrt -d --sched-runtime 5000000 --sched-deadline 10000000 \
     --sched-period 16666666 0 Video_processing_tool

Where the "-D" parameter describes the scheduling policy set is deadline, "--sched-runtime 5000000" is set to run time parameter 5ms, "--sched-deadline 10000000" is set deadline to 10ms, "--sched-period 16666666" is the set period parameter. The "0" in the command line is a priority placeholder, and the DL scheduler does not use a priority parameter.

With these settings, the DL scheduler guarantees the task 5 ms of CPU time in every ~16.7 ms period, and guarantees that those 5 ms are available before the 10 ms deadline within the period, so that the task can finish processing and deliver the result to the next task or software module in time.
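Besides chrt, a program can apply the same parameters to itself through the sched_setattr() system call. The sketch below is only an illustration using the video task's values (all times in nanoseconds). glibc historically provides no wrapper for this call, so it goes through syscall(), and struct sched_attr is declared locally as in the sched_setattr(2) man page; newer glibc or kernel headers may already provide both, in which case the local declarations can be dropped.

#define _GNU_SOURCE
#include <linux/sched.h>    /* SCHED_DEADLINE */
#include <sys/syscall.h>
#include <unistd.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Declared here because older toolchains do not expose it. */
struct sched_attr {
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;    /* nanoseconds */
    uint64_t sched_deadline;   /* nanoseconds */
    uint64_t sched_period;     /* nanoseconds */
};

static int sched_setattr(pid_t pid, const struct sched_attr *attr, unsigned int flags)
{
    return syscall(SYS_sched_setattr, pid, attr, flags);
}

int main(void)
{
    struct sched_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.size           = sizeof(attr);
    attr.sched_policy   = SCHED_DEADLINE;
    attr.sched_runtime  =  5 * 1000 * 1000;    /*  5 ms of CPU time per period */
    attr.sched_deadline = 10 * 1000 * 1000;    /* result due within 10 ms      */
    attr.sched_period   = 16666666;            /* one 60 fps frame, ~16.7 ms   */

    if (sched_setattr(0, &attr, 0) == -1) {
        perror("sched_setattr");    /* EBUSY means admission control refused the task */
        return 1;
    }

    /* ... process one video frame per period here ... */
    return 0;
}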

The deadline parameters look complex, but they are actually simple: as long as you know how the task behaves, you can derive its scheduling parameters and set them. In other words, a deadline task's scheduling parameters depend only on the task itself and not on the rest of the system. RT tasks are different: choosing appropriate RT priorities requires looking at the whole system, so that every RT task in it can still be scheduled and finish before its deadline.

Since a deadline task explicitly tells the scheduler what CPU resources it needs, the DL scheduler can determine, when a new deadline task is created and enters the system, whether the CPU can accommodate it. If the system is lightly loaded (few DL tasks), the task is admitted into scheduling; if the system already has many DL tasks and admitting the new one would push CPU utilization above 100%, the DL scheduler refuses it. Once a DL task has been accepted, the DL scheduler can guarantee that it runs correctly according to its scheduling parameters.
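The admission test described here boils down to summing bandwidths. The sketch below is a simplified model of the idea, not the kernel's actual code; the 0.95 cap mirrors the common default where some CPU time is reserved for non-deadline tasks (tunable via /proc/sys/kernel/sched_rt_runtime_us and sched_rt_period_us), and the audio task is purely hypothetical.

#include <stdio.h>

struct dl_task {
    double runtime_ms;    /* worst-case execution time per period */
    double period_ms;
};

/* Admit the task set only if its total utilization stays within the cap
 * (1.0 would be 100% of one CPU). */
static int dl_admit(const struct dl_task *tasks, int n, double cap)
{
    double util = 0.0;
    for (int i = 0; i < n; i++)
        util += tasks[i].runtime_ms / tasks[i].period_ms;
    return util <= cap;
}

int main(void)
{
    struct dl_task set[] = {
        { 5.0, 16.666 },    /* the video task: 5 ms every ~16.7 ms (about 30%) */
        { 2.0, 20.0   },    /* a hypothetical audio task: 2 ms every 20 ms     */
    };

    printf("admitted: %s\n", dl_admit(set, 2, 0.95) ? "yes" : "no");
    return 0;
}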

To appreciate the benefits of the DL scheduler further, we need to step back and look at the bigger picture of real-time scheduling. The next section therefore covers some real-time scheduling theory.

III. Real-Time Scheduling Overview

How is the performance of a real-time scheduler evaluated in scheduling theory? The usual approach is to construct a set of real-time tasks (hereafter a task set), let the scheduler schedule it, and check whether every task in the set is scheduled perfectly, that is, whether the timing requirement (deadline) of every real-time task is met. To respond to a request within a bounded time, a real-time task must complete certain actions by definite points in time. To reason about this, we abstract real-time tasks into a task model that describes the deterministic timing of those actions.

Every real-time task consists of n repeating "jobs". If the jobs of an RT task always arrive at a fixed time interval, the task is called periodic; for example, an audio processor that compresses one frame of audio data every 20 ms. A task can also be sporadic. A sporadic task is similar to a periodic one, but its timing requirement is looser: only a minimum inter-arrival interval is defined. If that minimum interval is 20 ms, a job may arrive 20 ms or 30 ms after the previous one, but never less than 20 ms. Finally there are aperiodic tasks, which follow no fixed pattern at all.

The previous paragraph summarized the arrival patterns of real-time tasks; now let us look at how deadlines are classified. A real-time task can have one of three kinds of deadline. The first is the implicit deadline: no deadline is defined explicitly, and its value equals the period. Such tasks have relatively loose timing requirements; the runtime's worth of CPU time merely has to be allocated somewhere within the period. The second is the constrained deadline, where the deadline is less than (or equal to) the period; these tasks have tighter timing requirements and must receive their CPU time within a window that ends before the end of the period. The last is the arbitrary deadline, where the deadline has no particular relationship to the period.

Based on this abstract task model, real-time researchers developed a way to evaluate scheduling algorithms: given a task set (containing the various task types described above), let the scheduler under test schedule it and assess its scheduling ability. The results show that a scheduler using the Earliest Deadline First (EDF) algorithm is optimal on a single-processor system; the implication is that if the EDF scheduler cannot schedule a task set, no other scheduler can. When scheduling periodic and sporadic tasks on a single-processor system with deadlines less than or equal to the period (constrained deadlines), deadline-based schedulers perform well and are optimal. In fact, for periodic or sporadic tasks whose deadline equals the period (implicit deadlines), as long as the task set does not use more than 100% of the CPU time, the EDF scheduler can schedule it and satisfy every RT task's deadline requirement. The Linux DL scheduler implements the EDF algorithm.
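For implicit-deadline periodic tasks on a single processor, the "100% of CPU time" condition above can be written as a simple utilization test (the classic Liu and Layland result for EDF):

U = runtime1/period1 + runtime2/period2 + ... + runtimeN/periodN <= 1

If U <= 1, EDF meets every deadline; if U > 1, no scheduler can.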

Let's look at a concrete example. Assume there are three periodic tasks in the system with the following parameters (deadline equals period):

Task Runtime (WCET) Period
T1 1 4
T2 2 6
T3 3 8

The combined CPU utilization of these three tasks does not reach 100%: CPU utilization = 1/4 + 2/6 + 3/8 = 23/24.

For such a real-time task set, the EDF scheduler behaves as follows: all three RT tasks are scheduled smoothly, and every job meets its deadline.

What happens if a fixed-priority scheduler (such as FIFO in the Linux kernel) is used instead? In fact, no matter how the priorities of the three RT tasks are adjusted, their deadline requirements cannot all be satisfied; there will always be some job of some task that completes after its deadline.
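This can be checked with a small simulation. The following sketch (not from the original article) runs EDF over the 24 ms hyperperiod of the three tasks in 1 ms steps and counts deadline misses; with the EDF selection rule it reports none, and swapping in a fixed-priority selection rule reproduces the misses described above.

#include <stdio.h>

#define NTASKS 3
#define HYPERPERIOD 24    /* LCM of the periods 4, 6 and 8 */

int main(void)
{
    const int runtime[NTASKS] = { 1, 2, 3 };
    const int period[NTASKS]  = { 4, 6, 8 };    /* deadline == period */

    int remaining[NTASKS] = { 0 };    /* work left in the current job         */
    int deadline[NTASKS]  = { 0 };    /* absolute deadline of the current job */
    int misses = 0;

    for (int t = 0; t < HYPERPERIOD; t++) {
        /* Release a new job of every task whose period starts now. */
        for (int i = 0; i < NTASKS; i++) {
            if (t % period[i] == 0) {
                remaining[i] = runtime[i];
                deadline[i]  = t + period[i];
            }
        }

        /* Any unfinished job whose deadline has passed is a miss. */
        for (int i = 0; i < NTASKS; i++) {
            if (remaining[i] > 0 && t >= deadline[i]) {
                printf("t=%2d: T%d missed its deadline\n", t, i + 1);
                misses++;
            }
        }

        /* EDF rule: run the ready job with the earliest absolute deadline. */
        int pick = -1;
        for (int i = 0; i < NTASKS; i++)
            if (remaining[i] > 0 && (pick < 0 || deadline[i] < deadline[pick]))
                pick = i;

        if (pick >= 0) {
            remaining[pick]--;
            printf("t=%2d: run T%d\n", t, pick + 1);
        } else {
            printf("t=%2d: idle\n", t);
        }
    }

    printf("deadline misses over one hyperperiod: %d\n", misses);
    return 0;
}

One time slot (t = 23) is left idle over the hyperperiod, which matches the 23/24 utilization computed above.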

The biggest benefit of deadline-based scheduling is that once you know the scheduling parameters of each task in a real-time task set, you can tell whether the whole set can meet its deadlines without analyzing how the tasks interact. On a single-processor system, deadline-based scheduling also tends to produce relatively few context switches. In addition, under the constraint that every task meets its deadline, a deadline-based algorithm can schedule more tasks than a fixed-priority algorithm. Of course, the scheduler based on the deadline parameter (hereafter the deadline scheduler) also has some drawbacks.

Although the deadline scheduler ensures that every RT task completes before its deadline, it cannot guarantee a minimum response time for any particular task. With a scheduler based on fixed priorities (hereafter the priority scheduler), the highest-priority task always has the shortest response latency. The EDF algorithm is also more complex than priority scheduling: the runtime complexity of a priority scheduler can be O(1) (as in the Linux RT scheduler), whereas the deadline scheduler's is O(log n) (as in the Linux DL scheduler). On the other hand, the priority scheduler requires choosing the most suitable priority for each task, and computing the optimal priority assignment may have to be done offline, with a complexity as high as O(n!).

If the system becomes overloaded for some reason, for example because new tasks were added or a WCET was estimated incorrectly, deadline scheduling can suffer a domino effect: when one task has a problem, it is not the only task affected; the problem spreads to other tasks in the system. Consider this scenario: because a task runs longer than its runtime parameter specifies, its job completes after its deadline and delays the jobs of other tasks, causing the other tasks in the system to miss their deadlines as well.

With fixed-priority scheduling algorithms, by contrast, when one task misbehaves, only the tasks with lower priority than it are affected. (Incidentally: in Linux, the DL scheduler implements CBS, the Constant Bandwidth Server, which solves the domino effect; this will be covered in detail in the next article.)

On a single-core system the scheduler only has to decide the order in which tasks execute; on a multi-core system, besides ordering the tasks, it also has to decide how to distribute them across CPUs. In other words, on a multicore system the scheduler also needs to decide which CPU each task runs on. In general, schedulers can be divided into the following categories:

(1) Global: a single scheduler manages all CPUs in the system, and tasks can migrate freely between CPUs.

(2) Clustered: the CPUs in the system are divided into disjoint clusters, and a scheduler is responsible for dispatching tasks onto the CPUs of its cluster.

(3) Partitioned: each scheduler manages a single CPU of its own; there are as many scheduler instances as there are CPUs in the system.

(4) Arbitrary: each task may run on an arbitrary set of CPUs.

For a partitioned deadline scheduler, scheduling on a multi-core system decomposes strictly into independent single-core deadline scheduling problems, so on each CPU the deadline scheduler keeps the optimality of the single-core case. Global, clustered, and arbitrary deadline schedulers on multicore systems, however, are not optimal. For example, on a system with M processors, if M real-time tasks whose runtime equals their period need to be scheduled, the scheduler handles it easily: each CPU runs one task. To make this concrete, assume there are four "big jobs" whose runtime and period are both 1000 ms; a system with four processors can run these four big jobs, and in this scenario the CPU utilization is 400%:

4 * 1000/1000 = 4

The resulting schedule is straightforward: each CPU runs one big job.

Under such a heavy load the scheduler copes fine: every big job meets its deadline. When the system load is lighter, we would intuitively expect the scheduler to cope at least as well. So let us construct a light load: the scheduler now faces four "small jobs" and one "big job". Each small job has a runtime of 1 ms and a period of 999 ms; the big job is as above. In this scenario the system's CPU utilization is only about 100.4%:

4 * (1/999) + 1000/1000 ≈ 1.004

1.004 is far less than 4, so intuitively the scheduler should easily handle this "four small, one big" scenario. In reality, however, the EDF scheduler, which is optimal on a single core, runs into trouble on a multicore system (here, as a global EDF scheduler). The reason is this: if all the tasks are released at the same time, the four small jobs (whose deadlines are earlier) are dispatched on the four CPUs first; the big job can only start executing after a small job has finished, and by then it can no longer meet its deadline. This is known as the Dhall effect (Dhall's effect).
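Concretely, with all five tasks released at t = 0 (times in milliseconds):

t = 0: the four small jobs (deadline t = 999) occupy the four CPUs
t = 1: the small jobs finish; the big job (deadline t = 1000) finally starts
t = 1001: the big job finishes, 1 ms after its deadline

In this idealized model, a partitioned arrangement that dedicates one CPU to the big job and spreads the small jobs over the other three would have met every deadline; the problem is specific to the global approach.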

Assigning a set of tasks to a set of processors is in fact an NP-hard problem (essentially a bin-packing problem), and because of various anomaly scenarios it is hard to say that one scheduling algorithm is superior to all the others. With this background knowledge in place, we can go on to analyze the details of the DL scheduler in the Linux kernel and see how it avoids these potential problems while exercising its powerful scheduling capabilities. That will be the topic of the next article.
