Comparison between real-time operating systems and general-purpose operating systems


An embedded real-time operating system is both an embedded operating system and a real-time operating system. As an embedded operating system, it has the characteristics common to embedded software, such as configurability (it can be tailored), low resource usage, and low power consumption. As a real-time operating system (this article discusses only hard real-time operating systems; "real-time operating system" below refers to hard real-time systems), it differs in important ways from general-purpose operating systems such as Windows, UNIX, and Linux. In what follows, we describe the main features of real-time operating systems by comparing the two.


In daily work and study, the operating systems we encounter most are general-purpose operating systems. General-purpose operating systems evolved from time-sharing operating systems; most of them support multiple users and multiple processes, managing many concurrent processes and allocating system resources among them. The basic design principle of a time-sharing operating system is to minimize the system's average response time and maximize its throughput, so as to serve as many user requests as possible per unit of time. In other words, a time-sharing operating system focuses on average performance rather than individual performance. For the system as a whole, what matters is the average response time of all tasks, not the response time of any single task; for a single task, what matters is the average response time over many executions, not the response time of one particular execution. Many of the strategies and techniques used in general-purpose operating systems reflect this design principle. For example, virtual memory management typically uses page-replacement algorithms such as LRU, so that most memory accesses are served directly from physical memory and only a small fraction require paging from disk; on average, memory access time is not much worse than without virtual memory, while the usable address space can be far larger than physical memory, so virtual memory is widely used in general-purpose operating systems. There are many similar examples, such as the indirect index-block mechanism used to locate file data in UNIX file systems; even cache hierarchies and dynamic branch prediction in CPU hardware design reflect the same principle. Clearly, this design principle of optimizing average, i.e. statistical, performance has had a profound influence.


A real-time operating system (RTOS) is an operating system that can accept and process external events or data quickly as they occur, produce results that can control a production process or respond to the processing system within a specified time, and coordinate all real-time tasks so that they run consistently. As noted above, a real-time operating system must satisfy not only the functional requirements of an application but, more importantly, its timing requirements. The real-time tasks that make up an application may have different timing requirements, and there may also be complex dependencies and synchronization relationships among them, such as constraints on execution order and mutually exclusive access to shared resources, which makes it very difficult to guarantee the timing behavior of the system. The most important design principle of a real-time operating system is therefore to use algorithms and policies that ensure the predictability of system behavior. Predictability means that, at any time and under any circumstances during system operation, the operating system's resource-allocation policies can distribute the resources that real-time tasks compete for (including CPU, memory, and network bandwidth) in such a way that the timing requirements of every real-time task are met. Unlike a general-purpose operating system, a real-time operating system is concerned not with average performance but with whether each real-time task meets its timing requirements in the worst case. That is, a real-time operating system focuses on individual performance, or more precisely on individual worst-case performance. For example, if a real-time operating system used standard virtual memory, the worst case for a real-time task would be that every memory access triggers a page fault; the task's accumulated running time would then be unpredictable in the worst case, and its timing could not be guaranteed. Hence the virtual memory techniques widely used in general-purpose operating systems are not suitable for direct use in real-time operating systems.


Because the basic design principles of real-time operating systems and general-purpose operating systems differ so much, the two differ greatly in the choice of many resource-scheduling policies and in how the operating system is implemented. These differences are mainly reflected in the following points:

(1) Task scheduling policy:

In a general-purpose operating system, task scheduling generally uses a priority-based preemptive policy; processes with the same priority are scheduled by round-robin time slicing. User processes can adjust their own priorities through system calls, and the operating system may also adjust the priorities of some processes as needed.
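For example, on a POSIX system such as Linux a process can adjust its own priority through system calls. The following is a minimal sketch; the priority values are arbitrary examples, and selecting a real-time policy such as SCHED_FIFO typically requires elevated privileges:

```c
/* Minimal sketch: a process adjusting its own priority on a POSIX system.
 * Priority values are illustrative; real-time policies usually need privileges. */
#include <stdio.h>
#include <sched.h>
#include <sys/resource.h>

int main(void)
{
    /* Lower this process's time-sharing priority (raise its nice value). */
    if (setpriority(PRIO_PROCESS, 0, 10) != 0)
        perror("setpriority");

    /* Alternatively, request a fixed-priority preemptive policy. */
    struct sched_param param = { .sched_priority = 50 };
    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0)
        perror("sched_setscheduler");

    return 0;
}
```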

The task scheduling policies most widely used in real-time operating systems today fall into two categories: static table-driven scheduling and fixed-priority preemptive scheduling.

In the static table-driven approach, a task schedule is generated before the system runs, either manually or with the help of tools, according to the timing requirements of each task. The schedule is similar to a train timetable: it specifies the start time and duration of each task. Once generated, it does not change; at run time the scheduler simply starts the corresponding task at the time specified in the table. The main advantages of the static table-driven approach are:

1. The schedule is generated before the system runs, so complex search algorithms can be used to find a good scheduling scheme;
2. The scheduler has very low overhead at run time;
3. The system has good predictability, which makes real-time verification convenient.

The main disadvantage of this approach is its inflexibility: whenever the requirements change, the entire schedule must be regenerated.
It is therefore used mainly in domains with very strict timing requirements, such as aerospace and military systems.
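As a rough illustration of the static table-driven idea, here is a minimal sketch of a cyclic-executive dispatcher. The table contents, slot length, and task functions are all hypothetical, and a real system would wait on a hardware timer tick rather than call usleep():

```c
/* Minimal sketch of a static table-driven (cyclic executive) dispatcher.
 * Table entries, tick length, and task functions are hypothetical. */
#include <stdio.h>
#include <unistd.h>

#define TICK_MS 10   /* length of one scheduling slot; value is illustrative */

typedef void (*task_fn)(void);

static void sample_task_a(void) { puts("task A"); }
static void sample_task_b(void) { puts("task B"); }

/* Computed offline, before the system runs: which task owns each slot. */
static const task_fn schedule_table[] = {
    sample_task_a,   /* slot 0 */
    sample_task_b,   /* slot 1 */
    sample_task_a,   /* slot 2 */
    NULL,            /* slot 3: idle */
};

int main(void)
{
    const int slots = (int)(sizeof schedule_table / sizeof schedule_table[0]);

    for (int cycle = 0; cycle < 3; cycle++) {      /* run a few major cycles */
        for (int slot = 0; slot < slots; slot++) {
            if (schedule_table[slot])
                schedule_table[slot]();            /* dispatch per the table */
            usleep(TICK_MS * 1000);                /* stand-in for a timer tick */
        }
    }
    return 0;
}
```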

Fixed-priority preemptive scheduling is similar to the priority-based scheduling used in general-purpose operating systems, except that each task's priority is fixed: it is assigned before the system runs by a priority-assignment policy such as rate-monotonic or deadline-monotonic scheduling. The advantages and disadvantages of this approach are roughly the opposite of those of the static table-driven approach. It has mainly been used in relatively simple, independent embedded systems, but as scheduling theory matures it is gradually being applied in domains with strict timing requirements as well. Most real-time operating systems on the market today use this scheduling method.
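To make rate-monotonic assignment concrete, the sketch below sorts a hypothetical task set by period and gives shorter-period tasks higher priorities; the task names, periods, and priority values are illustrative only:

```c
/* Minimal sketch of rate-monotonic priority assignment: the shorter a task's
 * period, the higher its fixed priority. The task set is hypothetical. */
#include <stdio.h>
#include <stdlib.h>

struct task {
    const char *name;
    int period_ms;   /* activation period */
    int priority;    /* assigned below: larger value = higher priority */
};

static int by_period(const void *a, const void *b)
{
    return ((const struct task *)a)->period_ms -
           ((const struct task *)b)->period_ms;
}

int main(void)
{
    struct task set[] = {
        { "sensor_poll",   5, 0 },
        { "control_loop", 20, 0 },
        { "logging",     100, 0 },
    };
    const int n = (int)(sizeof set / sizeof set[0]);

    qsort(set, n, sizeof set[0], by_period);
    for (int i = 0; i < n; i++) {
        set[i].priority = n - i;   /* shortest period gets the highest priority */
        printf("%-12s period=%3d ms  priority=%d\n",
               set[i].name, set[i].period_ms, set[i].priority);
    }
    return 0;
}
```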


(2) Memory management:

We discussed the virtual memory mechanism above. To eliminate the unpredictability that virtual memory introduces, real-time operating systems generally adopt one of the following two approaches:

1. Add a page-locking facility to the existing virtual memory mechanism, so that critical pages can be locked in memory and will never be swapped out. The advantage of this approach is that it retains the software-development benefits of virtual memory while improving the predictability of the system. The disadvantage is that mechanisms such as the TLB are still designed around average performance, so the system's predictability cannot be fully guaranteed;

2. Use static memory partitioning, assigning each real-time task a fixed memory region. The advantage is good predictability; the disadvantages are reduced flexibility, since the memory must be repartitioned whenever a task's memory requirements change, and the loss of the benefits of virtual memory.

Real-time operating systems on the market today generally adopt the first approach.
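On POSIX systems the page-locking approach corresponds to calls such as mlock() and mlockall(). A minimal sketch follows; locking the entire address space here is only an example, and may require elevated privileges or a raised RLIMIT_MEMLOCK limit:

```c
/* Minimal sketch of page locking on a POSIX system: lock the process's
 * pages in RAM so they are not paged out. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Lock all current and future pages of this process into memory. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }

    /* ... real-time work would run here, with no page faults on locked pages ... */

    munlockall();   /* release the locks when predictability is no longer needed */
    return 0;
}
```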

(3) Interrupt handling:

In a general-purpose operating system, most external interrupts are enabled, and interrupt handling is usually performed by device drivers. Because user processes in a general-purpose operating system have no timing requirements, while interrupt handlers interact directly with hardware devices that may have timing requirements, interrupt handlers are given a higher priority than any user process.

This interrupt-handling model, however, is not appropriate for a real-time operating system. First, external interrupts are inputs from the environment; their frequency depends on how fast the environment changes, not on the operating system. If the interrupt frequency is unpredictable, the time a real-time task spends blocked by interrupt handlers is unpredictable, so the task's timing cannot be guaranteed. If the interrupt frequency is predictable, then once an interrupt exceeds its predicted frequency (for example, because of spurious interrupt signals caused by a hardware fault, or because the prediction itself was wrong), the predictability of the whole system may be destroyed. Second, user processes in a real-time operating system generally do have timing requirements, so it is unreasonable for interrupt handlers to always have a higher priority than every user process.


An interrupt-handling approach better suited to real-time operating systems is to mask all interrupts except the clock interrupt and replace interrupt handling with periodic polling, performed either by kernel device drivers or by a user-mode device-support library. The main advantage of this approach is that it fully preserves the system's predictability; the main disadvantages are that the response to environmental changes may be slower than with interrupt handling, and that polling wastes some CPU time. Another feasible approach is to use interrupts for those external events whose requirements cannot be met by polling, while continuing to poll for everything else. In this case the interrupt handlers have priorities like any other task, and the scheduler schedules ready tasks and interrupt handlers uniformly by priority. This speeds up the response to external events and avoids the second problem described above, although the first problem remains.
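A minimal sketch of the polling approach is shown below. The device-status check is a hypothetical stand-in for reading a device register, and a real RTOS driver would block on the system tick rather than call usleep():

```c
/* Minimal sketch of periodic polling replacing an interrupt handler.
 * The device-status check is hypothetical. */
#include <stdio.h>
#include <unistd.h>

#define POLL_PERIOD_US 1000   /* polling period, chosen to suit the device */

/* Hypothetical stand-in for reading a device status register. */
static int device_has_data(void) { return 0; }
static void handle_device_data(void) { puts("data handled"); }

int main(void)
{
    for (int i = 0; i < 100; i++) {        /* bounded loop for the sketch */
        if (device_has_data())
            handle_device_data();          /* work an interrupt handler would do */
        usleep(POLL_PERIOD_US);            /* in an RTOS: wait for the next tick */
    }
    return 0;
}
```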


In addition, to keep the response time of the clock interrupt predictable, a real-time operating system should disable interrupts as rarely and as briefly as possible.

(4) Mutually exclusive access to shared resources:

General-purpose operating systems typically use semaphores to provide mutually exclusive access to shared resources.
In a real-time operating system, if static table-driven scheduling is used, mutual exclusion is already accounted for when the schedule is generated. If priority-based scheduling is used, the traditional semaphore mechanism can easily cause priority inversion at run time: a high-priority task that tries to access a shared resource through a semaphore finds the semaphore already held by a low-priority task, and that low-priority task may in turn be preempted by medium-priority tasks while it holds the resource. The high-priority task is thus blocked by many lower-priority tasks, and its timing is hard to guarantee. Real-time operating systems therefore usually extend the traditional semaphore mechanism with protocols such as the priority inheritance protocol, the priority ceiling protocol, and the stack resource policy, which effectively solve the priority inversion problem.
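On POSIX systems, for instance, priority inheritance can be requested when a mutex is created. A minimal sketch, with error handling abbreviated (support for the protocol depends on the platform):

```c
/* Minimal sketch: creating a POSIX mutex that uses the priority inheritance
 * protocol, so a low-priority holder temporarily inherits the priority of
 * any higher-priority task it blocks. */
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t lock;

    pthread_mutexattr_init(&attr);
    if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0)
        puts("priority inheritance not supported on this system");

    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);
    /* ... access the shared resource ... */
    pthread_mutex_unlock(&lock);

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}
```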

(5) Time overhead of system calls and internal operations:

Processes obtain operating system services through system calls, and the operating system performs internal management work through internal operations such as context switching. To ensure predictability, the time overhead of every system call and internal operation in a real-time operating system should be bounded, and the bound should be a specific, quantified value. General-purpose operating systems place no such limits on these overheads.

(6) Reentrancy of the system:

In a general-purpose operating system, kernel-mode system calls are often not reentrant. When a low-priority task is executing such a system call, a high-priority task that arrives during that time must wait until the call completes before it can get the CPU, which reduces the system's predictability. Kernel-mode system calls in a real-time operating system are therefore usually designed to be reentrant.
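As a simple illustration (not taken from the article), a function that keeps its working state in static data is not reentrant, whereas one that operates only on caller-supplied storage is:

```c
/* Illustration of reentrancy: the first function relies on static state and
 * is not reentrant; the second keeps all state in caller-provided storage. */
#include <stdio.h>
#include <stddef.h>

/* Not reentrant: concurrent or preempted callers overwrite each other's data. */
char *format_id_nonreentrant(unsigned id)
{
    static char buf[16];
    snprintf(buf, sizeof buf, "ID-%u", id);
    return buf;
}

/* Reentrant: each caller supplies its own buffer, so preempted or concurrent
 * calls cannot interfere with one another. */
int format_id_reentrant(unsigned id, char *buf, size_t len)
{
    return snprintf(buf, len, "ID-%u", id);
}
```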

(7) Auxiliary tools:

A real-time operating system also provides additional auxiliary tools, such as tools that estimate the worst-case execution time of real-time tasks and tools that verify the system's timing behavior, which help engineers validate the real-time properties of the system.

In addition, real-time operating systems impose some requirements on hardware design, for example:

DMA

DMA is a data-transfer mechanism that moves data between memory and peripheral devices without involving the CPU. One of the most common DMA implementations is called cycle stealing: the DMA controller competes with the CPU for control of the bus through the bus arbitration protocol and, once it has control, transfers data according to preset commands. Because cycle stealing imposes unpredictable extra blocking on user tasks, real-time operating systems often require that the hardware design either avoid DMA altogether or use a more predictable DMA scheme, such as time-sliced DMA.

Cache
The main purpose of a cache is to use a small amount of fast storage to bridge the performance gap between a fast CPU and relatively slow main memory. It greatly improves the average performance of the system and is therefore widely used in hardware design.

Real-time operating systems, however, are concerned not with average performance but with individual worst-case performance. Real-time verification must therefore consider the worst case of a task's execution, in which every memory access misses the cache. Accordingly, when auxiliary tools are used to estimate a task's worst-case execution time, all caches in the system should be temporarily disabled, and re-enabled only when the system actually runs. A more extreme practice is to omit caches from the hardware design altogether.

Note 1: An embedded system integrates the application, the operating system, and the computer hardware into a single unit; simply put, the system's application software is combined with its hardware, somewhat like the way firmware such as a BIOS runs directly on the hardware. Such systems are characterized by small code size, a high degree of automation, and fast response, and they are particularly suitable for applications requiring real-time, multi-task operation.

Note 2: A real-time multi-tasking operating system (real-time operating system) is classified by its operating characteristics. "Real time" refers to the actual time of a physical process. A real-time operating system is one that supports the operation of real-time systems: its primary task is to schedule all available resources to complete real-time control tasks, and only secondarily to improve the utilization of the computer system. An important characteristic of such a system is that it meets timing constraints and requirements.
