Concept of a Real-Time Operating System

A real-time system is one in which the correctness of a result depends not only on the logical correctness of the computation but also on the time at which the result is produced; a timing violation has serious consequences. Such a system must process and respond to external events and data quickly.

There are two types of real-time systems: soft real-time systems and hard real-time systems.

1. In a soft real-time system, the goal is to run each task as fast as possible; there is no hard deadline by which a task must be completed.

2. In a hard real-time system, each task must be executed on time.

Most real-time systems are a combination of the two. Most real-time systems are also embedded systems.

This means that the computer is built into the product, and the user does not see it as a computer.

1. Foreground/Background Systems

The application program is an infinite loop that calls the appropriate functions to perform the desired operations; this part can be viewed as the background (Background).

The interrupt service routines handle asynchronous events; this part can be viewed as the foreground (Foreground).

The background is also called task level; the foreground is also called interrupt level.

Time-critical operations must be handled by the interrupt service routines to guarantee that they are dealt with in time.

Because information provided by an interrupt service routine is not processed until the background loop reaches the point where it handles that information, the timeliness of the system's response is worse than it could be.
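
As a minimal sketch of this structure (the UART names and helper functions below are assumptions made for illustration, not part of the original text), a foreground/background program might look like this:

#include <stdint.h>

volatile uint8_t byte_ready = 0;       /* flag set by the foreground (ISR)        */
volatile uint8_t received_byte = 0;    /* data passed from the ISR to the loop    */

/* Foreground: the interrupt service routine handles the asynchronous event. */
void uart_rx_isr(void)
{
    received_byte = 0 /* read the UART data register here (hardware specific) */;
    byte_ready = 1;                    /* the background loop will see this later */
}

/* Background: an infinite loop that calls the appropriate functions in turn. */
int main(void)
{
    for (;;) {
        if (byte_ready) {              /* handled only when the loop reaches this point */
            byte_ready = 0;
            /* process received_byte here */
        }
        /* poll_sensors(); update_display(); ...other background work */
    }
}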

 

 

2. Non-preemptive Kernel

A non-preemptive kernel requires each task to voluntarily give up its ownership of the CPU. Non-preemptive scheduling is also called cooperative multitasking; the tasks cooperate to share a single CPU.

Asynchronous events are still handled by interrupt service routines. An ISR can move a higher-priority task from the pending state to the ready state.

However, when the ISR completes, control returns to the previously interrupted task; only when that task voluntarily gives up the CPU can the higher-priority task get the CPU.

The running task owns the CPU, so it does not have to worry about being preempted by another task.


Task-level response time is much better than in a foreground/background system, but it is still non-deterministic, because a higher-priority task must wait until the current task finishes; very few commercial kernels are non-preemptive.

 

3. Preemptive Kernel

When system response time is important, a preemptive kernel is used. With a preemptive kernel, the highest-priority task that is ready always gets control of the CPU.

When a running task makes a higher-priority task ready, the current task's use of the CPU is taken away,

or, in other words, the current task is suspended, and the higher-priority task immediately gets control of the CPU.

If an interrupt service routine makes a higher-priority task ready, then when the ISR completes, the interrupted task is suspended and the higher-priority task starts to run.

 

With a preemptive kernel, because the highest-priority ready task can run at any moment, task switches occur more often and competition for shared resources arises, so shared resources must be protected.

IV. Basic Concepts of Real-Time Operating Systems

1. Critical Section of Code

A critical section of code, also called a critical region, is code that must execute indivisibly.

Once this code starts to execute, it must not be interrupted. To protect a critical section, interrupts are typically disabled before entering it and re-enabled immediately after it has executed.
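
A minimal sketch of protecting a critical section this way is shown below; the ENTER_CRITICAL()/EXIT_CRITICAL() macros stand in for whatever interrupt disable/enable mechanism the processor and kernel actually provide, and are assumptions made for illustration.

#include <stdint.h>

/* Hypothetical port-specific macros; on a real target these expand to the
   processor's "disable interrupts" and "enable interrupts" operations. */
#define ENTER_CRITICAL()   /* disable interrupts */
#define EXIT_CRITICAL()    /* re-enable interrupts */

static volatile uint32_t shared_counter;

void increment_shared_counter(void)
{
    ENTER_CRITICAL();       /* no interrupt may occur from here on...        */
    shared_counter++;       /* ...so this read-modify-write cannot be broken */
    EXIT_CRITICAL();        /* interrupts are allowed again                  */
}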

2. Task

A task, also called a thread, is a simple program that, from its own point of view, has the CPU entirely to itself.

Designing a real-time application involves deciding how to split the problem into multiple tasks. Each task is one part of the whole application;

each task is assigned a priority and has its own set of CPU registers and its own stack space.

 

Typically, each task is an infinite loop.
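
A minimal sketch of such a task is shown below; wait_for_event() and do_work() are placeholder names invented here, since each real kernel (μC/OS-II, for example) has its own service names.

/* Placeholder declarations for a kernel service and the task's work. */
void wait_for_event(void);   /* e.g. pend on a semaphore, mailbox, or delay */
void do_work(void);          /* this task's share of the application        */

/* A task is typically an infinite loop: wait, work, wait again. */
void my_task(void *p_arg)
{
    (void)p_arg;             /* argument passed when the task is created */
    for (;;) {
        wait_for_event();    /* task is pending here, using no CPU time  */
        do_work();           /* task is ready/running here               */
    }
}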

 

Dormant (sleep) state: the task resides in memory but has not yet been made available to the multitasking kernel for scheduling.

Ready state: the task is ready to run, but it cannot run yet because its priority is lower than that of the task currently running.

Running state: the task has control of the CPU and is currently executing.

Pending (waiting) state: the task is waiting for an event to occur (for example, completion of an I/O operation on a peripheral, a shared resource changing from unavailable to available, the arrival of a timing pulse, or a timeout signal that ends the current wait).

Interrupted state: an interrupt has occurred and the CPU is servicing it; the task that was running is temporarily unable to run and is said to be interrupted.
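
Purely as an illustration, the five states described above could be represented in a kernel like this (the enumerator names are assumptions, not taken from any particular kernel):

/* The five task states described above. */
typedef enum {
    TASK_DORMANT,      /* in memory, but not yet given to the kernel       */
    TASK_READY,        /* able to run, waiting for the CPU                 */
    TASK_RUNNING,      /* currently has control of the CPU                 */
    TASK_PENDING,      /* waiting for an event (I/O, semaphore, timeout)   */
    TASK_INTERRUPTED   /* preempted while an interrupt is being serviced   */
} task_state_t;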

 

3. Task Switching

When the multitasking kernel decides to run a different task, it saves the current state (context) of the running task, that is, the contents of the CPU registers.

The context is saved in the task's own storage area, that is, on the task's stack.

Once this save is complete, the kernel loads the context of the next task from that task's stack into the CPU registers and resumes that task. This process is called task switching (context switching).

Task switching adds overhead to the application: the more internal registers the CPU has, the heavier the overhead, because the time a task switch takes depends largely on how many registers must be saved to and restored from the stack.
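
A conceptual sketch of the bookkeeping involved is shown below. In a real kernel the register save and restore are short assembly routines; the structure and function names here are assumptions made for illustration only.

#include <stdint.h>

/* Simplified task control block kept by the kernel for each task. */
typedef struct {
    uint32_t *stack_ptr;       /* where this task's saved context lives */
    /* priority, state, and other bookkeeping would also go here */
} tcb_t;

static tcb_t *current_task;    /* the task that owns the CPU right now */

/* Conceptual outline of a task switch. */
void task_switch(tcb_t *next_task)
{
    /* 1. push the CPU registers onto current_task's stack (assembly)        */
    /* 2. record where that context was saved in current_task->stack_ptr     */
    current_task = next_task;
    /* 3. pop next_task's saved registers from next_task->stack_ptr and      */
    /*    resume execution of next_task (assembly)                           */
}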

4. Kernel

In a multitasking system, the kernel is responsible for managing the tasks, that is, allocating CPU time to each task and handling communication between tasks. The basic service provided by the kernel is task switching.

Using a real-time kernel greatly simplifies application design, because it allows the application to be divided into several tasks that the kernel then manages.

The kernel itself adds overhead to the application: its code increases ROM usage, and its internal data structures increase RAM usage.

More importantly, each task needs its own stack space, which can consume RAM very quickly. The kernel itself typically uses 2 to 5 percent of the CPU time.

Small single-chip microcontrollers generally cannot run a real-time kernel because their RAM is very limited.

By providing essential system services such as semaphore management, mailboxes, message queues, and time delays, a real-time kernel allows the CPU to be used more effectively.

5. Scheduling

The scheduler is one of the kernel's main responsibilities: it decides which task runs next.

Most real-time kernels use priority-based scheduling. Each task is assigned a priority according to its importance.

Priority-based scheduling means that control of the CPU is always given to the highest-priority task that is ready to run. Exactly when the highest-priority ready task gets control of the CPU, however,

depends on which of two kernel types is used: non-preemptive or preemptive.
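
A minimal sketch of priority-based selection follows; it assumes a simple task table and that a lower number means a higher priority (both are conventions chosen here for illustration, not taken from the original text).

#include <stddef.h>
#include <stdint.h>

#define MAX_TASKS 8

typedef struct {
    uint8_t priority;    /* lower value = higher priority (assumed convention) */
    uint8_t ready;       /* nonzero if the task is ready to run                */
} task_t;

static task_t task_table[MAX_TASKS];

/* Return the highest-priority ready task, or NULL if nothing is ready. */
task_t *pick_next_task(void)
{
    task_t *best = NULL;
    for (size_t i = 0; i < MAX_TASKS; i++) {
        if (task_table[i].ready &&
            (best == NULL || task_table[i].priority < best->priority)) {
            best = &task_table[i];
        }
    }
    return best;
}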

 

6. Reentrancy

A reentrant function can be used by more than one task without fear of data corruption. It can be interrupted at any time and resumed later without any loss of data.

A reentrant function uses only local variables, that is, variables kept in CPU registers or on the stack. If global variables are used, they must be protected.

Non-reentrant functions use resources shared between tasks, such as global variables.

Suppose a preemptive kernel is used and interrupts are enabled, and temp is defined as a global integer variable:

int temp;                /* global variable */

void swap(int *x, int *y)
{
    temp = *x;
    *x   = *y;
    *y   = temp;
}

 

The following can then happen: a low-priority task calls swap() and is preempted right after executing temp = *x; a higher-priority task then calls swap() with different arguments and changes temp; when the low-priority task resumes, *y is assigned the wrong value and its data is corrupted.

 

Any one of the following techniques makes swap() reentrant (a sketch of the first technique follows this list):

Declare temp as a local variable.

Disable interrupts before calling swap() and re-enable them after it returns.

Use a semaphore to prevent the function from being used by more than one task at a time.
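
A minimal sketch of the first technique: with temp declared locally, each task (and each interrupt) gets its own copy on its own stack, so the function is reentrant.

void swap(int *x, int *y)
{
    int temp;      /* local: lives on the calling task's stack */

    temp = *x;
    *x   = *y;
    *y   = temp;
}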

7. Task Priority

Each task has its priority. The more important the task is, the higher the priority should be given.

Static Priority

If the priorities of all tasks remain unchanged while the application runs, the priorities are said to be static. In a static-priority system, all tasks and their timing constraints are known at compile time.

Dynamic Priority

If task priorities can change while the application runs, the priorities are said to be dynamic. A real-time kernel should avoid priority inversion.

Time slice Scheduling

When two or more tasks have the same priority, the kernel lets one task run for a predetermined amount of time, called a quantum, and then switches to another task; this is called time-slice (round-robin) scheduling.

 

Priority inversion: a high-priority task is forced to wait while a lower-priority task holds a resource it needs, possibly for an unbounded time if medium-priority tasks keep preempting the lower-priority task.

Priority inheritance: to bound priority inversion, the kernel temporarily raises the priority of the task holding the resource to that of the highest-priority task waiting for it, until the resource is released.

8. Mutual Exclusion

Because all tasks live in a single address space, the easiest way for them to communicate is through shared data structures: global variables, pointers, buffers, linked lists, ring buffers, and so on.

Although sharing data simplifies the exchange of information between tasks, each task must have exclusive access to the shared data while it uses it, to avoid contention and data corruption; in other words, access to shared resources must satisfy mutual exclusion.

The most common methods are:

Disabling (turning off) interrupts

Using the test-and-set instruction

Disabling task switching

Using semaphores: counting semaphores or mutual-exclusion (mutex) semaphores

 

Semaphores can also be used to synchronize tasks with each other or with interrupt service routines.
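
A minimal sketch of mutual exclusion with a semaphore is shown below; sem_wait() and sem_post() are hypothetical names standing in for the kernel's real services (μC/OS-II, for instance, provides its own pend/post calls).

#include <string.h>

/* Hypothetical binary-semaphore API provided by the kernel. */
typedef struct { volatile int count; } sem_t;
void sem_wait(sem_t *s);    /* block until the semaphore is available */
void sem_post(sem_t *s);    /* release the semaphore                  */

static sem_t shared_buf_sem = { 1 };    /* 1 = resource currently available */
static char  shared_buf[64];            /* resource shared by several tasks */

void write_shared_buffer(const char *msg)
{
    sem_wait(&shared_buf_sem);                       /* gain exclusive access */
    strncpy(shared_buf, msg, sizeof(shared_buf) - 1);
    shared_buf[sizeof(shared_buf) - 1] = '\0';
    sem_post(&shared_buf_sem);                       /* release the resource  */
}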

9. Communication Between Tasks

Information is sometimes passed between tasks, or between an interrupt service routine and a task. There are two ways to do this: through global variables, or by sending messages.

Global variables:

A task can communicate with an interrupt service routine only through global variables. However, the task does not know when the global variable has been

modified by the ISR, unless the ISR signals the task with a semaphore or the task polls the variable periodically.

Message mailboxes:

To avoid this situation, you can consider using a mailbox or message queue.

Through a kernel service, a task or an ISR can deposit a message (that is, a pointer) into a mailbox; one or more tasks can receive the message through another kernel service.

 

Message Queue:

A message queue is actually an array of mailboxes.
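
As a minimal sketch of message passing, the fragment below uses a hypothetical one-slot mailbox API (mbox_post() and mbox_pend() are invented names; a real kernel such as μC/OS-II has its own equivalents).

/* Hypothetical one-slot mailbox API provided by the kernel. */
typedef struct { void *msg; } mbox_t;
void  mbox_post(mbox_t *m, void *msg);   /* deposit a pointer-sized message */
void *mbox_pend(mbox_t *m);              /* block until a message arrives   */

static mbox_t adc_mbox;                  /* mailbox shared by ISR and task  */
static int    adc_reading;               /* the data the message points to  */

/* Producer: the ISR posts a pointer to the latest reading. */
void adc_isr(void)
{
    adc_reading = 0 /* read the ADC data register here */;
    mbox_post(&adc_mbox, &adc_reading);
}

/* Consumer: a task pends on the mailbox and processes each message. */
void adc_task(void *p_arg)
{
    (void)p_arg;
    for (;;) {
        int *value = mbox_pend(&adc_mbox);
        /* use *value here */
        (void)value;
    }
}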

 

10. Interrupts

An interrupt is a hardware mechanism used to inform the CPU that an asynchronous event has occurred. When an interrupt is recognized, the CPU saves part (or all) of its context, that is, the values of some or all of its registers,

and jumps to a special subroutine called the interrupt service routine (ISR). The ISR processes the event, and when it has finished, the program returns:

In a foreground/background system, to the background program;

With a non-preemptive kernel, to the interrupted task;

With a preemptive kernel, to the highest-priority task that is ready to run (which may not be the task that was interrupted).
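
A minimal sketch of a typical ISR shell under a preemptive kernel is shown below; kernel_interrupt_enter() and kernel_interrupt_exit() are hypothetical names for the hooks that real kernels provide for this purpose.

/* Hypothetical kernel hooks; real kernels have their own equivalents. */
void kernel_interrupt_enter(void);   /* tell the kernel an ISR has started   */
void kernel_interrupt_exit(void);    /* may switch to a higher-priority task */
void handle_timer_event(void);       /* the actual event processing          */

void timer_isr(void)
{
    kernel_interrupt_enter();    /* kernel tracks the interrupt nesting level */
    handle_timer_event();        /* process the asynchronous event            */
    kernel_interrupt_exit();     /* if the ISR made a higher-priority task    */
                                 /* ready, the kernel resumes that task here  */
                                 /* instead of the interrupted one            */
}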

 

11. Storage

Total code size = application code + kernel code

Because each task runs independently, each task must be given its own stack space (RAM). When the application designer decides how much stack space to allocate to each task,

the allocation should be as close as possible to the actual requirement (which is sometimes quite difficult to determine).

The required stack size must account not only for the task's own needs (local variables, function calls, and so on) but also for the maximum depth of interrupt nesting (saved registers, local variables used in the interrupt service routines, and so on).

Depending on the target microprocessor and the kernel, the task stacks and the system stack may be separate.

The system stack is then used for all interrupt-level code; this has the advantage of greatly reducing the stack space each task requires.

Another desirable kernel feature is the ability to specify the stack size of each task individually (μC/OS-II allows this).

By contrast, some kernels require every task to have the same stack size. All kernels also require extra RAM for their own variables, data structures, queues, and so on.

 

Unless a particularly large amount of RAM is available, stack space must be allocated and used carefully. To reduce the RAM an application needs, be very careful with each task's stack usage, paying special attention to the following points:

    1. Local variables defined in functions and interrupt service routines, especially large arrays and data structures.
    2. The nesting depth of functions (subroutines).
    3. The nesting depth of interrupts.
    4. The stack space required by library functions.
    5. Function calls with a variable number of arguments.

To sum up, a multitasking system requires more code space (ROM) and more data space (RAM) than a foreground/background system.

The extra code space depends on the size of the kernel, and the extra RAM depends on the number of tasks in the system.

 

12. Advantages and Disadvantages of Using a Real-Time Kernel

A real-time kernel, also called a real-time operating system or RTOS, makes real-time applications easier to design and to extend:

new functions can be added without major changes. By dividing the application into several independent tasks, an RTOS greatly simplifies the design process.

When a preemptive kernel is used, all time-critical events are handled as quickly and as efficiently as possible. And through services

such as semaphores, mailboxes, queues, time delays, and timeouts, an RTOS allows system resources to be used more effectively.

 
