QNX operating system priority and scheduling policy


I. Priority and Task Preemption

Neutrino uses a priority-driven, preemptive design. Priority-driven means that every thread is assigned a priority and obtains the CPU according to the priority-based scheduling policy: if a low-priority thread and a high-priority thread are both ready, the high-priority thread runs. Preemptive means that if a low-priority thread is running and a higher-priority thread becomes ready to run, the higher-priority thread immediately takes over the CPU.

Thread priorities range from 1 (lowest) to 255 (highest). An unprivileged thread can use priorities from 1 to 63 (the default range); a root thread, or one granted the corresponding ability through the procmgr_ability() interface, can set priorities above 63. The system also has an idle thread (in the process manager) with the lowest priority, 0, which is always ready.
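As a rough illustration (my sketch, not from the original article), a process that will drop root privileges might request the ability to keep using priorities above the unprivileged range before doing so. The ability name PROCMGR_AID_PRIORITY and the flag combination below follow the QNX procmgr_ability() documentation as I recall it; treat them as assumptions to verify against your QNX release.

#include <sys/procmgr.h>   /* procmgr_ability() and PROCMGR_* constants */
#include <stdio.h>

int request_high_priority(void)
{
    /* Ask procnto to let the calling process (pid 0 = self) keep using
       priorities above the unprivileged maximum even after it stops
       running as root. PROCMGR_AID_EOL terminates the argument list. */
    int rc = procmgr_ability(0,
                 PROCMGR_ADN_NONROOT | PROCMGR_AOP_ALLOW | PROCMGR_AID_PRIORITY,
                 PROCMGR_AID_EOL);
    if (rc != 0) {
        fprintf(stderr, "procmgr_ability failed: %d\n", rc);
    }
    return rc;
}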

By default, a child thread inherits its priority from the parent thread. Each thread actually has two priorities: a real priority and an effective priority, and the scheduler always uses the effective priority. A thread can change its own priority, but its effective priority may differ from its real priority because of the scheduling policy in use or priority inheritance. Under normal circumstances the effective priority equals the real priority.

An interrupt handler runs at a higher priority than any thread, but it is not scheduled like a thread. When an interrupt occurs:

1. The currently running thread loses the CPU and is suspended while the interrupt is handled (setting SMP considerations aside).

2. The hardware transfers control to the kernel.

3. The kernel calls the interrupt handler.
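To make this concrete, here is a minimal user-level sketch of handling an interrupt on QNX Neutrino using the event-based variant, InterruptAttachEvent() plus InterruptWait(), rather than a directly attached handler. The interrupt number MY_IRQ is a placeholder, and the flags and privilege call should be checked against the QNX documentation for your release.

#include <sys/neutrino.h>   /* ThreadCtl(), InterruptAttachEvent(), InterruptWait() */
#include <sys/siginfo.h>    /* struct sigevent */
#include <stdio.h>

#define MY_IRQ 5            /* hypothetical interrupt vector for the device */

void interrupt_loop(void)
{
    struct sigevent event;
    int id;

    /* I/O privileges are required before attaching to an interrupt. */
    ThreadCtl(_NTO_TCTL_IO, 0);

    /* Ask the kernel to wake this thread whenever MY_IRQ fires,
       instead of calling a handler function directly. */
    event.sigev_notify = SIGEV_INTR;
    id = InterruptAttachEvent(MY_IRQ, &event, _NTO_INTR_FLAGS_TRK_MSK);
    if (id == -1) {
        perror("InterruptAttachEvent");
        return;
    }

    for (;;) {
        InterruptWait(0, NULL);      /* block until the interrupt occurs */
        /* ... service the device here ... */
        InterruptUnmask(MY_IRQ, id); /* re-enable the interrupt level */
    }
}

With InterruptAttachEvent(), the kernel masks the interrupt when it delivers the event, which is why InterruptUnmask() is called after the device has been serviced.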


1.1 Task Status Analysis

To fully understand how the scheduler works, you must first understand the states a task passes through while it runs and how the ready queue is organized.

Possible reasons for a task to change from the running state to a blocked state are:

1. The thread sleeps voluntarily.

2. The thread is waiting for a message from another thread.

3. The thread is waiting for a mutex.

When designing an application, make sure that a thread that is waiting for something to happen actually blocks, so that the CPU is not kept spinning; this also ensures that lower-priority tasks get a chance to run, as sketched below.
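As a rough illustration of blocking instead of busy-waiting (my example, not from the original article), a thread can sleep on a condition variable until another thread signals that the event has happened. The names used here are hypothetical.

#include <pthread.h>
#include <stdbool.h>

/* Hypothetical shared state protected by a mutex. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  data_ready = PTHREAD_COND_INITIALIZER;
static bool            ready = false;

/* Consumer: blocks (releasing the CPU) until 'ready' becomes true. */
void wait_for_data(void)
{
    pthread_mutex_lock(&lock);
    while (!ready) {
        pthread_cond_wait(&data_ready, &lock);  /* thread is condvar-blocked here */
    }
    ready = false;
    pthread_mutex_unlock(&lock);
}

/* Producer: makes the data available and wakes the waiting thread. */
void publish_data(void)
{
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_signal(&data_ready);
    pthread_mutex_unlock(&lock);
}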

These waiting conditions are collectively called blocked states. They include reply-blocked, message-blocked (waiting to send or receive), mutex-blocked, interrupt-blocked, and sleep-blocked.

A thread that is able to use the CPU but is not currently executing, because another task holds the processor, is called ready; the thread that is actually executing is called running.

1.2 Ready Queue Analysis

The ready queue is a simplified kernel data structure: an array of queues indexed by priority, where each queue holds the ready threads of that priority.
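The following sketch (my own simplification, not the actual kernel code) shows the shape of such a structure: one FIFO queue per priority level, and a scan from the highest priority downward to find the next thread to run.

#include <stddef.h>

#define NUM_PRIORITIES 256   /* priorities 0..255 */

struct thread {
    int            priority;
    struct thread *next;     /* link within its ready queue */
    /* ... other per-thread state ... */
};

/* One FIFO queue of ready threads per priority level. */
struct ready_queue {
    struct thread *head;
    struct thread *tail;
};

static struct ready_queue ready[NUM_PRIORITIES];

/* Pick the next thread to run: the head of the highest-priority
   non-empty queue. */
struct thread *pick_next(void)
{
    for (int prio = NUM_PRIORITIES - 1; prio >= 0; prio--) {
        if (ready[prio].head != NULL) {
            return ready[prio].head;
        }
    }
    return NULL;   /* unreachable if the idle thread always sits in queue 0 */
}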


As shown in the figure (not reproduced here), threads B through F are ready, thread A is running, and threads G through Z are blocked. Threads A, B, and C have the highest priority, so they share the processor according to the scheduling policy.

Only one thread per processor can actually be running at a time. The kernel uses an array, with one entry per processor, to track the currently running threads.

Each thread is assigned a priority, and the scheduler selects the highest-priority ready thread to run next.

1.3 Suspending a Running Thread

Whenever the kernel is entered, because of a kernel call, an exception, or a hardware interrupt, the currently running thread is temporarily suspended. A scheduling decision is made whenever the execution state of any thread changes, and threads are scheduled globally across all processes.

Normally the suspended thread simply resumes, but the scheduler performs a context switch from one thread to another whenever the running thread:

1. is blocked,

2. is preempted, or

3. voluntarily gives up the CPU.

1.3.1 When a Thread Is Blocked

A thread blocks when it has to wait for some event (an IPC reply, a mutex, and so on). The blocked thread is removed from the running array, and the highest-priority ready thread runs. When the blocked thread later becomes unblocked, it is placed at the end of the ready queue for its priority.

1.3.2 When a Thread Is Preempted

When a higher-priority thread becomes ready, the currently running thread is preempted. The preempted thread is placed at the beginning of the ready queue for its priority, and the higher-priority thread runs.

Because it goes back to the head of its queue, a preempted thread does not lose its place relative to other threads of the same priority.

1.3.3 When a Thread Voluntarily Yields the CPU

A running thread can voluntarily give up the processor by calling sched_yield(); it is then placed at the end of the ready queue for its priority, and the highest-priority ready thread runs. If no other thread of the same or higher priority is ready, the yielding thread simply continues.
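A minimal sketch of a cooperative worker that yields between work items (my illustration; the work function is hypothetical):

#include <sched.h>    /* sched_yield() */

extern int do_one_unit_of_work(void);   /* hypothetical; returns 0 when done */

void cooperative_worker(void)
{
    while (do_one_unit_of_work() != 0) {
        /* Give other ready threads of the same priority a chance to run;
           if none are ready, this thread continues immediately. */
        sched_yield();
    }
}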

II. Scheduling Policies

To suit a variety of applications, Neutrino provides the following three scheduling policies:

1. FIFO scheduling

2. Round-robin scheduling

3. Adaptive scheduling

Any thread in the system can use any of these policies. The scheduling policy applies per thread, not to all threads and processes on a node. Remember that the policy matters only when two or more ready threads have the same priority.

A thread can set the scheduling parameters and policy that new threads are created with by calling pthread_attr_setschedparam() and pthread_attr_setschedpolicy() on the attribute object passed to pthread_create(); the settings take effect at thread creation, as in the sketch below.
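A minimal sketch (my example, with hypothetical policy and priority values) of creating a thread with explicit scheduling settings. Note that pthread_attr_setinheritsched() must select PTHREAD_EXPLICIT_SCHED, otherwise the new thread simply inherits the creator's settings as described above.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
    /* ... thread body ... */
    return NULL;
}

int create_rr_thread(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    /* Use the attribute's scheduling settings instead of inheriting them. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_RR);   /* round-robin */
    param.sched_priority = 20;                      /* hypothetical priority */
    pthread_attr_setschedparam(&attr, &param);

    int rc = pthread_create(&tid, &attr, worker, NULL);
    if (rc != 0) {
        fprintf(stderr, "pthread_create: %s\n", strerror(rc));
    }
    pthread_attr_destroy(&attr);
    return rc;
}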

Although a child thread inherits the priority of its parent, a thread can call pthread_setschedparam() to ask the kernel to change its scheduling policy and parameters, or pthread_setschedprio() to change only its priority. A thread can call pthread_getschedparam() to obtain its current policy and parameters, passing the thread ID returned by pthread_self(). For example:

struct sched_param param;
int policy, retcode;

/* Get the scheduling parameters. */
retcode = pthread_getschedparam( pthread_self(), &policy, &param);
if (retcode != EOK) {
    printf ("pthread_getschedparam: %s.\n", strerror (retcode));
    return EXIT_FAILURE;
}

printf ("The assigned priority is %d, and the current priority is %d.\n",
        param.sched_priority, param.sched_curpriority);

/* Increase the priority. */
param.sched_priority++;
retcode = pthread_setschedparam( pthread_self(), policy, &param);
if (retcode != EOK) {
    printf ("pthread_setschedparam: %s.\n", strerror (retcode));
    return EXIT_FAILURE;
}
When you obtain the scheduling parameters, the sched_priority member gives the thread's assigned priority, while sched_curpriority gives the priority the thread is currently running at (which may differ because of priority inheritance). When you set the parameters, sched_priority is the value that is applied.
2.1 FIFO Scheduling Policy
In the FIFO scheduling policy, a running thread continues to run until it:
1. voluntarily gives up the CPU, or
2. is preempted by a higher-priority thread.
2.2 Round-Robin Scheduling Policy
In addition to the FIFO conditions:
1. If a running thread consumes its timeslice, it is preempted and moved to the end of the ready queue for its priority. A sketch of querying the timeslice follows.
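As a small illustration (mine, not from the article), the round-robin timeslice can be queried with the POSIX sched_rr_get_interval() call; on QNX the quantum is commonly described as four times the clock period, but treat that figure as something to verify rather than a given.

#include <sched.h>
#include <stdio.h>
#include <time.h>

void print_timeslice(void)
{
    struct timespec slice;

    /* Passing 0 means "the calling process"; on success the structure
       holds the round-robin quantum. */
    if (sched_rr_get_interval(0, &slice) == 0) {
        printf("round-robin timeslice: %ld.%09ld s\n",
               (long)slice.tv_sec, slice.tv_nsec);
    } else {
        perror("sched_rr_get_interval");
    }
}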
2.3 Adaptive Scheduling Policy
1. If a running thread consumes its timeslice, its priority is reduced by one; this decay happens only once, no matter how many timeslices the thread consumes without blocking.
