[Linux] Process Scheduling Overview

Source: Internet
Author: User


1. The Run Queue

The basic data structure of the scheduler is the run queue (runqueue). A run queue is a linked list of runnable processes on a given processor; each processor has exactly one, and each runnable process belongs to exactly one run queue. The run queue also holds the scheduling information for its processor, which makes it the most important scheduling data structure on each processor.

To avoid deadlocks, code that must lock more than one run queue always acquires the locks in the same order: ascending order of run-queue address.
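The address-ordering rule can be sketched in userspace C. This is a simplified illustration, not the kernel's actual `double_rq_lock()`: the struct is stripped down, and the function returns the queue it locked first so the ordering is observable.

```c
#include <pthread.h>

/* Hypothetical, simplified run queue: just a lock and a task count. */
struct runqueue {
    pthread_mutex_t lock;
    int nr_running;
};

/* Lock two run queues in ascending address order -- the rule the text
 * describes for avoiding ABBA deadlocks. Returns the queue locked
 * first so the ordering can be observed. */
static struct runqueue *double_rq_lock(struct runqueue *a, struct runqueue *b)
{
    struct runqueue *first  = (a <= b) ? a : b;
    struct runqueue *second = (a <= b) ? b : a;

    pthread_mutex_lock(&first->lock);
    if (second != first)            /* same queue: take the lock only once */
        pthread_mutex_lock(&second->lock);
    return first;
}

static void double_rq_unlock(struct runqueue *a, struct runqueue *b)
{
    pthread_mutex_unlock(&a->lock);
    if (a != b)
        pthread_mutex_unlock(&b->lock);
}
```

Because both callers lock the lower-addressed queue first no matter which argument order they use, two processors locking the same pair of queues can never deadlock against each other.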

Note: a deadlock occurs when two or more processes, competing for resources or waiting on each other's messages, each hold something the others need; without outside intervention none of them can proceed. The system is then said to be deadlocked, and the mutually waiting processes are called deadlocked processes.

In the old scheduler, the run queue was a doubly linked circular list. Whenever a scheduling event occurred, the kernel recomputed the running weight of every process on the queue and picked the one with the highest weight as the next process to run. This had two drawbacks:

1) At every scheduling event, the running weight of every process on the run queue is recomputed. The complexity is O(n), so scheduling cost grows with kernel load.

2) The run queue manages real-time and non-real-time (normal) processes together. The weight calculation mixes many process attributes: whether the process is real-time, its real-time priority, and whether it is a user process or a kernel thread. This is flexible, but hard to understand and maintain.
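The O(n) selection described above can be sketched as a full scan. The struct fields and the weight formula below are illustrative stand-ins for the old scheduler's goodness() calculation, not the kernel's actual code:

```c
#include <stddef.h>

/* Hypothetical simplified task: counter is the remaining time slice,
 * priority feeds into the weight (names are illustrative). */
struct task {
    int counter;
    int priority;
};

/* Weight of one task; 0 means "no time slice left". */
static int task_weight(const struct task *t)
{
    return t->counter ? t->counter + t->priority : 0;
}

/* Scan the whole queue and pick the highest-weight task: O(n),
 * so the cost of every scheduling decision grows with load. */
static int pick_next_on(const struct task *q, size_t n)
{
    int best = -1, best_w = 0;

    for (size_t i = 0; i < n; i++) {
        int w = task_weight(&q[i]);
        if (w > best_w) {
            best_w = w;
            best = (int)i;
        }
    }
    return best;    /* index of chosen task, or -1 if none runnable */
}
```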

 

 

 

2. Priority Arrays

Each run queue contains two priority arrays of type struct prio_array: one active and one expired. The priority array is the data structure that makes O(1) scheduling possible. Each priority array gives every priority level on the processor its own queue, and each queue holds the runnable processes at that priority.

Each priority array also contains an array of struct list_head queues, one linked list per priority level; each list holds all processes runnable at that priority on the processor's queue. A per-priority bitmap lets the scheduler find the highest non-empty list with a constant-time find-first-set operation.
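A minimal sketch of the idea, with illustrative names and a byte-per-level "bitmap" in place of the kernel's packed bit array and linked lists: the cost of finding the highest runnable priority is bounded by the fixed number of priority levels, not by the number of tasks.

```c
#include <string.h>

#define MAX_PRIO 140    /* same number of levels as the 2.6 O(1) scheduler */

/* Hypothetical simplified prio_array: one flag per priority level
 * standing in for the kernel's bitmap plus struct list_head queues. */
struct prio_array {
    unsigned char bitmap[MAX_PRIO];
    int nr_active;
};

static void pa_enqueue(struct prio_array *pa, int prio)
{
    pa->bitmap[prio] = 1;   /* the real kernel also links the task in */
    pa->nr_active++;
}

/* Find the first set priority level. The loop bound is the constant
 * MAX_PRIO, so lookup cost does not grow with the number of tasks. */
static int pa_first_set(const struct prio_array *pa)
{
    for (int p = 0; p < MAX_PRIO; p++)
        if (pa->bitmap[p])
            return p;
    return -1;              /* no runnable task at any priority */
}
```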

 

3. Recalculating the Time Slice

In the old scheduler, when every process had used up its time slice (the amount of CPU time allotted to it), the kernel recomputed new time slices for all processes in one explicit pass.

The new Linux scheduler does away with that recomputation loop. Instead, it maintains two priority arrays per processor: the active array and the expired array. Processes on the active array's queues still have time slice left; processes on the expired array's queues have used theirs up. When a process exhausts its time slice it is moved to the expired array, but before it is moved, a new time slice is computed for it. When the active array runs empty, the two arrays are simply swapped.
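The swap at the heart of this scheme fits in a few lines. This is a sketch with simplified, illustrative types, not the kernel's actual structures:

```c
/* Hypothetical minimal stand-ins for the active/expired pair. */
struct prio_arr { int nr_active; };

struct rq_sketch {
    struct prio_arr arrays[2];
    struct prio_arr *active, *expired;
};

/* When the active array runs empty, nothing is recomputed: the two
 * pointers are swapped, so the expired array -- whose tasks received
 * fresh time slices as they expired -- becomes the new active array
 * in O(1). */
static void swap_arrays_if_empty(struct rq_sketch *rq)
{
    if (rq->active->nr_active == 0) {
        struct prio_arr *tmp = rq->active;
        rq->active  = rq->expired;
        rq->expired = tmp;
    }
}
```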

 

4. schedule()

The next process to run is selected, and switched to, by the schedule() function. Kernel code calls it directly when it needs to sleep, and it is also invoked whenever a process is to be preempted. schedule() runs independently on each processor.

 

5. Calculating Priority and Time Slice

The nice value is called the static priority because, once set by the user, it does not change on its own. The dynamic priority is computed as a function of the static priority and the process's interactivity. The effective_prio() function returns a process's dynamic priority: it starts from the nice value and adds a bonus or penalty, between -5 and +5, based on the process's interactivity.

The kernel infers whether a process is I/O-bound or processor-bound, and the clearest clue is how long the process sleeps. A process that spends most of its time sleeping is I/O-bound; a process that spends more time running than sleeping is processor-bound.
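The sleep-time heuristic can be sketched as follows. The formula and the function name are illustrative assumptions in the spirit of effective_prio(), not the kernel's actual arithmetic; only the shape matters: a mostly-sleeping task earns the full -5 bonus, a mostly-running task the full +5 penalty.

```c
static int clamp_int(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Hypothetical effective_prio()-style calculation: scale the bonus
 * from the fraction of time spent sleeping (an I/O-bound hint) and
 * clamp the result to the normal priority range 0..139. */
static int effective_prio_sketch(int static_prio, int sleep_ms, int run_ms)
{
    int total = sleep_ms + run_ms;
    int bonus = 0;

    if (total > 0)
        /* sleep fraction 0..1 maps onto a penalty/bonus of +5..-5 */
        bonus = 5 - (10 * sleep_ms) / total;

    return clamp_int(static_prio + bonus, 0, 139);
}
```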

Recomputing the time slice, by contrast, is simple: it depends only on the static priority. When a process is created, the new child and its parent split the parent's remaining time slice evenly. This allocation is fair and prevents a user from obtaining unlimited time slice by continually creating new processes. The task_timeslice() function returns a new time slice for a given task; the calculation only needs to scale the static priority proportionally into the allowed time-slice range.
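Both rules are simple enough to sketch. The time-slice bounds below are illustrative, not the kernel's exact constants; the point is the linear scaling (higher priority, i.e. a lower number, yields a longer slice) and the fork-time split, under which parent and child together never hold more slice than the parent had.

```c
/* Illustrative bounds, not the kernel's actual values. */
#define MIN_TIMESLICE   5    /* ms, for the lowest-priority task  */
#define MAX_TIMESLICE 200    /* ms, for the highest-priority task */

/* Hypothetical task_timeslice()-style scaling over the normal
 * static-priority range 100..139 (nice -20..19). */
static int timeslice_sketch(int static_prio)
{
    /* prio 100 (highest) -> MAX_TIMESLICE, prio 139 (lowest) -> MIN */
    return MAX_TIMESLICE -
           (MAX_TIMESLICE - MIN_TIMESLICE) * (static_prio - 100) / 39;
}

/* On fork, the child takes half of the parent's remaining slice, so
 * forking cannot manufacture extra CPU time. */
static void split_timeslice(int *parent, int *child)
{
    *child   = *parent / 2;
    *parent -= *child;
}
```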

The scheduler also supports interactive processes through another mechanism: if a process is sufficiently interactive, then when its time slice runs out it is reinserted into the active array instead of being moved to the expired array.

 

6. Sleep and Wakeup

A sleeping (blocked) process is in a special non-runnable state. The process marks itself as sleeping, removes itself from the run queue, puts itself on a wait queue, and then calls schedule() to select another process to run. Waking up is the reverse: the process is set runnable and moved from the wait queue back onto the run queue.

Two states are associated with sleeping: TASK_INTERRUPTIBLE and TASK_UNINTERRUPTIBLE. Sleeping is handled through wait queues. A wait queue is a simple linked list of processes waiting for some event; the kernel represents one with wait_queue_head_t. Wait queues can be created statically with DECLARE_WAITQUEUE() or dynamically with init_waitqueue_head(). Wakeup is performed by the wake_up() function, which wakes every process on the given wait queue. Note that spurious wakeups exist: a process is sometimes woken even though the condition it was waiting for has not come true, so the wait must be done in a loop that rechecks the condition.
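The recheck loop is the essential part of the pattern. The sketch below is pure userspace illustration (none of these names are kernel APIs): the wake callback stands in for "sleep until woken", and it deliberately delivers two spurious wakeups before the condition actually becomes true.

```c
/* Hypothetical waiter: the condition being waited for, plus a count
 * of how many wakeups were delivered. */
struct waiter {
    int condition;
    int wakeups;
};

/* Stand-in wakeup source: only the third wakeup makes the condition
 * true, so the first two are spurious. */
static void flaky_wake(struct waiter *w)
{
    if (++w->wakeups == 3)
        w->condition = 1;
}

/* The canonical pattern: after every wakeup, re-test the condition
 * before proceeding. A single "if" instead of this "while" would let
 * a spurious wakeup slip through. */
static void wait_event_sketch(struct waiter *w, void (*wake)(struct waiter *))
{
    while (!w->condition)
        wake(w);
}
```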

 

7. The Load Balancer

The load balancer is implemented by the load_balance() function in kernel/sched.c. It is invoked in two ways. schedule() calls it whenever the current run queue is empty. It is also called by a timer: every 1 millisecond when the system is idle, and every 200 milliseconds otherwise. When load_balance() runs, the current processor's run queue must be locked and interrupts disabled to prevent concurrent access to the run queue.
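The balancing decision can be sketched as pulling tasks from a busier queue until the two are roughly even. Everything here is an illustrative assumption: the 25% imbalance threshold, the bare counters in place of real run queues, and the function name are not taken from the kernel.

```c
/* Hypothetical load_balance()-style pull: migrate tasks from the
 * busiest queue to the local one while the imbalance is large enough
 * (more than 25% here, an illustrative threshold) to justify the
 * cost of migration. Counts stand in for the real run queues. */
static int pull_tasks(int *busiest, int *local)
{
    int pulled = 0;

    while (*busiest > *local + 1 && *busiest * 100 > *local * 125) {
        (*busiest)--;   /* dequeue from the busy processor's queue */
        (*local)++;     /* enqueue on the local processor's queue  */
        pulled++;
    }
    return pulled;
}
```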

 

