QNX System Architecture 6: Process Manager

Source: Internet
Author: User
Tags: posix

The process manager can create multiple POSIX processes, each of which can contain multiple POSIX threads.

The QNX Neutrino RTOS system process, procnto, comprises the microkernel together with the process management, memory management, and path management modules; the process management module is therefore not part of the microkernel itself.

- Process management: manages process creation and destruction, and process attributes such as the user ID (UID) and group ID (GID).
- Memory management: manages memory-protection features, shared libraries, and POSIX shared-memory primitives between processes.
- Path management: manages the pathname space.

User processes access the microkernel directly through kernel calls, and access procnto's management services by sending it messages. Note that a user process sends a message via the MsgSend*() kernel call.

Note that procnto calls the microkernel in exactly the same way as any other process does. The fact that the process management code and the microkernel share the same address space does not mean they have a special or private interface: all threads in the system use a single, consistent kernel interface, and a privilege switch occurs whenever a kernel call is invoked.

Process Management

procnto's first responsibility is to create new processes dynamically. These processes in turn depend on procnto's other features: memory management and path management.

Process management includes process creation and destruction, as well as management of process attributes such as the process ID, process group ID, and user ID.

Process Primitives


Process Loading

A process is loaded from a file system using one of the exec*(), posix_spawn(), or spawn() calls.

If the file system is stored on a block device, the code and data are loaded into main memory. By default, the memory pages containing the binary are loaded on demand, but you can change this behavior with procnto's -m option; for more information, see "Locking Memory" in this chapter.

If the file system is memory-mapped, such as a ROM/flash image, the code need not be loaded into RAM; it is executed in place on the storage medium. In this case memory is allocated in RAM only for the data and stack, while the code stays in ROM or flash.

Regardless of where the code is stored, if the same executable is loaded more than once, its code is shared.

Memory Management

Some real-time kernels provide memory protection only in the development environment. As memory-protection hardware becomes increasingly common on embedded processors, the cost of providing memory management has become trivial.

The greatest benefit of adding memory protection to embedded applications, especially in mission-critical systems, is improved system robustness.

With memory protection, if a process in a multitasking system attempts an illegal memory access, the MMU hardware notifies the OS, which can then abort the offending thread.

This prevents processes from misusing each other's address spaces: faulty code in one process cannot corrupt the memory of another process, or of the OS itself. This protection is especially valuable when integrating a real-time system, because it makes it possible to determine, after the fact, what went wrong.

During development, coding errors such as wild pointers and array overruns can cause one process or thread to corrupt the data space of another. If the overwritten memory isn't used again for a while, the error becomes much harder to track down and can consume hours of painstaking debugging, for example with an in-circuit emulator or logic analyzer, to find the culprit.

With the MMU enabled, the OS can catch an illegal memory access the instant it occurs and give the programmer immediate feedback, rather than letting the system crash mysteriously some time later. The OS can report the location of the instruction that made the illegal access, and even the debug symbols for that instruction.

Memory Management Units (MMUs)

A typical MMU operates by dividing physical memory into 4-KB pages. The processor hardware uses page tables stored in system memory to define the mapping from the CPU's virtual addresses to physical addresses.

When a thread executes, the page tables managed by the OS control how the logical addresses within the thread are mapped to the processor's physical memory.


Figure 32: Virtual address mapping (on an x86)

If a system has many threads and processes and a large address space, the number of page-table entries needed to describe these mappings becomes significant and cannot all be held in the processor. To maintain performance, the processor caches frequently used page-table entries in the TLB (translation look-aside buffer).

Since the TLB is a cache, TLB misses can occur, and the OS tries to avoid the resulting performance penalty.

The attributes of each page are defined in its page-table entry: a page can be read-only, read/write, and so on. Typically, an executable process marks its code pages read-only and its data and stack pages writable.

When the OS performs a context switch between processes (for example, suspending one process and resuming another), it directs the MMU to use a different set of page tables for the new process. If the OS is switching between two threads within the same process, no MMU update is needed, because the two threads share the same address space.

When the new process resumes execution, all of its address translations go through its own page tables. If a thread attempts to access an unmapped address, or accesses an address in violation of the page's permissions, the CPU raises a fault (handled much like a divide-by-zero error), which the OS implements as a special type of interrupt.

By examining the instruction address on the interrupt stack, the OS can determine which instruction triggered the fault and take further action.

Memory Protection at Run Time

Memory protection is useful not only during development; it also improves the reliability of deployed embedded systems. Many embedded systems use a hardware watchdog to detect runaway software or hardware, but that approach lacks the precision of an MMU.

A hardware watchdog is usually implemented as a retriggerable timer: if the system software fails to reset the timer regularly, the timer expires and triggers a processor reset. Typically, some system software component checks system integrity and strobes the timer to indicate that the system is healthy.

Although this approach can recover the system from a software or hardware failure, it leaves the system unavailable for a period of time while it reboots.

Software Watchdog

In a memory-protected system, when an intermittent software error occurs, the OS catches the event and can hand control to a user thread instead of to a memory-dump mechanism. That thread can decide how to recover from the failure, rather than forcing a crude reset of the whole system. A software watchdog can:

- Abort the process that triggered the memory fault and restart it, without shutting down the rest of the system.
- Abort the failed process and any related processes, initialize the hardware to a safe state, and then restart the failed process and its related processes.
- If the failure is critical, shut down the entire system and sound an audible alarm.

The key difference here is that we retain intelligent, programmable control over recovery, even when individual processes or threads of the control software fail for some reason. A hardware watchdog can still be kept as a last resort for recovering the system.

While carrying out the recovery strategy, the system can also collect information about the software failure. For example, if the embedded system contains or has access to mass storage, the software watchdog can generate chronologically sorted dump files, which can then be used for post-mortem diagnostics.

Embedded systems typically use this partial-restart approach to handle intermittent software failures without the user experiencing downtime, or even noticing these quickly recovered failures. Because dump files are available, software developers can detect and fix software problems without having to travel to the site. Compared with the hardware-watchdog approach, this method is clearly preferable. Post-mortem dump-file analysis is especially important for mission-critical embedded systems: whenever a critical system fails, every effort must be made to find the cause, so that the fault can be fixed and the fix applied to other systems.

Dump files contain the information the programmer needs to fix the problem; without them, the programmer knows little more than a user who reports that the system crashed.

Quality Control
Full-Protection Model

Our full-protection model relocates all the code in the image into a new virtual address space, enables the MMU hardware, and sets up the initial page-table mappings. This lets procnto start up in a correct, MMU-enabled environment. The process manager then takes over this environment and changes the page-table mappings as each process requires.

Private Virtual Memory

In full-protection mode, each process is given its own private virtual address space, typically 2 to 3.5 GB in size. Process-switching and message-passing performance costs are affected by the added complexity of switching addressing between two private address spaces.


Figure 33: Full-protection VM (on an x86)

The memory each process spends on page tables may increase by 4 KB to 8 KB. Note that this memory model supports the POSIX fork() call.

Variable Page Size

The virtual memory manager can use variable page sizes, provided the processor supports this feature.

Using a variable page size can improve performance: the system can use page sizes larger than 4 KB, so it needs fewer TLB entries and suffers fewer TLB misses.

If you want to turn off the variable page size feature, specify the -m~v option in procnto's buildfile; the -mv option enables variable page sizes.

Locking Memory

QNX supports POSIX memory locking, so a process can avoid paging delays by locking the corresponding memory pages: a locked page is kept in memory and is never swapped out.

Locking is divided into several levels:

Unlocked

Unlocked memory can be paged in and out. Memory is allocated when it's mapped, but the page-table entries aren't created then. The first access to such memory faults, and the thread waits in the WAITPAGE state while the memory manager initializes the memory and creates its page-table entry.

Locked

Locked memory may not be paged in or out. Page faults can still occur on access or reference, in order to maintain usage and modification statistics. Pages that you think are PROT_WRITE may actually still be PROT_READ, so that on the first write the kernel is alerted that a MAP_PRIVATE page must now be privatized.

To lock or unlock a portion of a thread's memory, call mlock() and munlock(); to lock or unlock all of a thread's memory, call mlockall() and munlockall(). The memory remains locked until the process unlocks it, exits, or calls an exec*() function. If the process calls fork(), a posix_spawn*() function, or a spawn*() function, the memory locks are released in the child process.

Multiple processes can lock the same memory region, and it remains locked until all of them have unlocked it. Memory locks do not stack: if a process locks the same region multiple times, a single unlock cancels all of that process's locks on it.

To lock all memory for all applications, specify the -ml option to procnto; all pages are then at least initialized.
Superlocked

No faulting is allowed at all: all memory is initialized, privatized, and has its permissions set when it's mapped. Superlocking covers the thread's entire address space.

To superlock all memory for all applications, specify the -mL option to procnto.

For MAP_LAZY mappings, memory of any of the above types isn't allocated until it's first referenced; once referenced, it follows the rules above. If you reference a never-touched MAP_LAZY area in a critical region (with interrupts disabled, or in an ISR), the responsibility is the programmer's.

Defragmenting Physical Memory

Most computer users are familiar with disk defragmentation: over time, a disk's free space becomes divided into many small pieces scattered among the used blocks. Physical memory allocation and release suffer a similar problem, and the system's physical memory gradually fragments over time. Eventually, even though the total amount of free physical memory may be large, a request for a contiguous block of physical memory can fail because of fragmentation.

Typically, it's device drivers using DMA that need contiguous physical memory. One solution is to make sure all such drivers initialize as early as possible (before fragmentation sets in) and acquire the memory they need up front. This is an awkward restriction, especially for an embedded system that may start a given driver only in response to a user action; starting every possible driver up front is inflexible and wastes resources.

QNX uses a set of allocation and reclamation algorithms that greatly reduce memory fragmentation. Nevertheless, no matter how clever the algorithms, certain patterns of behavior can still fragment memory.
