Linux Process scheduling policy


There are three main scheduling policies in the Linux kernel:

1. SCHED_OTHER — the time-sharing scheduling policy
2. SCHED_FIFO — a real-time scheduling policy (first-come, first-served)
3. SCHED_RR — a real-time scheduling policy (round-robin time slices)

Real-time processes are scheduled first, and a real-time process's scheduling weight is determined by its real-time priority.

A time-sharing process's scheduling is determined by its nice and counter values: the smaller the nice value and the larger the counter, the greater the probability of being scheduled. In other words, the process that has used the least CPU is scheduled first.

Because the CPU is scheduled at a 10 ms granularity, any run whose execution time exceeds 10 ms may deviate from the expected timing result.

LinuxThreads supports only the PTHREAD_SCOPE_SYSTEM contention scope, and its default scheduling policy is SCHED_OTHER.

A user thread's scheduling policy can also be changed to SCHED_FIFO or SCHED_RR, both of which support priorities 1-99, while SCHED_OTHER supports only priority 0.

SCHED_OTHER processes are ordinary processes, while the latter two policies are for real-time processes (most processes in a system are ordinary; real-time processes are rare). SCHED_FIFO and SCHED_RR processes have higher priority than all SCHED_OTHER processes, so as long as any of them is runnable, no SCHED_OTHER process will execute until the real-time processes have finished.


Differences between SCHED_RR and SCHED_FIFO:

When a process using the SCHED_RR policy exhausts its time slice, the system assigns it a new time slice and places it at the end of the ready queue. Placing it at the end of the queue ensures that all RR tasks with the same priority are scheduled fairly.

A SCHED_FIFO process, once it occupies the CPU, keeps running until a higher-priority task becomes ready or it voluntarily gives up the CPU.

If a real-time process of the same priority (and therefore the same computed scheduling weight) is ready, under FIFO it must wait for the running process to yield voluntarily before it can run; under RR, each task of that priority gets to execute for a period of time in turn.

Similarities:

Both RR and FIFO are used only for real-time tasks.

Both are created with a priority greater than 0 (1-99).

Both follow a preemptive priority scheduling algorithm.

A real-time task that becomes ready immediately preempts any non-real-time task.

When all tasks use the Linux time-sharing scheduling policy (SCHED_OTHER):

1. The task is created with the time-sharing scheduling policy and a priority nice value (-20 to 19).

2. The execution time on the CPU (the counter) is determined by each task's nice value.

3. If the task is not waiting for any resource, it is added to the ready queue.

4. The scheduler traverses the tasks in the ready queue and, by computing each task's dynamic priority weight (counter + 20 - nice), selects the task with the largest result to run. When its time slice is exhausted (counter drops to 0) or it voluntarily gives up the CPU, the task is placed at the end of the ready queue (time slice exhausted) or in a wait queue (gave up the CPU to wait for a resource).

5. The scheduler then repeats the calculation above and returns to step 4.

6. When the scheduler finds that the computed weight of every ready task is no greater than 0, it repeats step 2 (recomputing each task's time slice from its nice value).

When all tasks use the SCHED_FIFO scheduling policy:

1. The task is created with the SCHED_FIFO policy and a real-time priority rt_priority (1-99).

2. If the task is not waiting for any resource, it is added to the ready queue.

3. The scheduler traverses the ready queue, computes each task's scheduling weight (1000 + rt_priority), and selects the task with the highest weight to use the CPU. A FIFO task occupies the CPU until a higher-priority task becomes ready (a task of merely equal priority will not preempt it).

4. When the scheduler finds that a higher-priority task has arrived (the higher-priority task may have been woken by an interrupt, by a timer, by the currently running task, and so on), it immediately saves the current CPU registers onto the current task's stack, reloads the registers from the higher-priority task's stack, and the higher-priority task begins to run. Repeat step 3.

5. If the current task voluntarily gives up the CPU because it is waiting for a resource, it is removed from the ready queue and added to the wait queue; repeat step 3.

When all tasks use the SCHED_RR scheduling policy:

1. The task is created with the SCHED_RR policy, a real-time priority, and a nice value (the nice value is converted into the length of the task's time slice).

2. If the task is not waiting for any resource, it is added to the ready queue.

3. The scheduler traverses the ready queue, computes each task's scheduling weight (1000 + rt_priority) from its real-time priority, and selects the task with the highest weight to use the CPU.

4. If an RR task in the ready queue has a time slice of 0, its time slice is reset according to its nice value and the task is placed at the end of the ready queue. Repeat step 3.

5. If the current task leaves the CPU to wait for a resource, it joins the wait queue. Repeat step 3.

When time-sharing scheduling, round-robin (RR) scheduling, and FIFO scheduling coexist in the system:

1. Processes scheduled with RR and FIFO are real-time processes; processes scheduled with the time-sharing policy are non-real-time processes.

2. When a real-time process becomes ready, if the CPU is currently running a non-real-time process, the real-time process immediately preempts it.

3. RR processes and FIFO processes both use the real-time priority as the scheduling weight; RR is an extension of FIFO. Under FIFO, if two processes have the same priority, which one executes is determined by its position in the queue, which leads to some unfairness (if the priorities are the same, why should one of them keep running?). If the scheduling policy of the two equal-priority tasks is instead set to RR, the two tasks execute in turn, guaranteeing fairness.

Ingo Molnar's real-time patch

In order to be merged into the mainline kernel, Ingo Molnar's real-time patch adopts a very flexible strategy that supports four preemption modes:

1. No forced preemption (server): equivalent to a standard kernel without the preemption option enabled; mainly suitable for server environments such as scientific computing.

2. Voluntary kernel preemption (desktop): enables voluntary preemption but still cannot preempt the kernel; it reduces preemption latency by adding preemption points, so it suits environments that need better responsiveness, such as desktops. This responsiveness comes at the expense of some throughput.

3. Preemptible kernel (low-latency desktop): includes both voluntary preemption and the kernel preemption option, giving good response latency; to a certain extent it already reaches soft real-time performance. It is mainly used for desktops and some embedded systems, but its throughput is lower than in mode 2.

4. Complete preemption (real-time): enables all real-time functionality and fully meets soft real-time requirements; suitable for real-time systems with latency requirements of 100 microseconds or lower. Real-time behavior is achieved at the expense of system throughput: the better the real-time response, the lower the throughput.

