Operating System Learning Notes (vii)

Memory Management
Hierarchical structure of memory: For a general-purpose computer, the storage hierarchy has at least three levels: CPU registers, main memory, and secondary storage. Higher-end computers subdivide it into six levels: registers, caches, main memory, disk caches, disks, and removable storage media.
The higher the level in the hierarchy, the faster the access and the higher the cost.
Primary memory, also called memory or main storage, holds the programs and data of a process at run time and is therefore also called executable memory. The CPU's control unit can only fetch instructions and data from main memory, loading them into registers, or store data from registers back into main memory.
Registers: their access speed is fully matched to the CPU, but they are very expensive.
Cache: when the CPU needs a piece of data, it first queries the cache; if the data is there it is read from the cache, and only otherwise is it fetched from main memory.
Disk cache: because the I/O speed of current disks is far lower than the access speed of main memory, a portion of frequently used disk data and information is kept temporarily in the disk cache to reduce the number of disk accesses. The disk cache is backed by fixed disks; when its contents need to be run or accessed, they are transferred into main memory.

Loading and linking of programs
To run a program, the system must first create a process for it, and the first step in creating a process is to load the program and data into memory.
Turning a source program into a program that can execute in memory requires the following steps: compilation, linking, and loading.
The loading of a program can be done in three different ways:
(1) Absolute loading mode: if it is known in advance where the program will reside in memory, absolute addresses can be generated for it directly. This is only applicable to a single-program environment.
(2) Relocatable loading mode: the load module is placed at a suitable location in memory according to the current memory situation, and its addresses are transformed accordingly. Because this address transformation is completed at load time, it is called static relocation (a sketch follows this list).
(3) Dynamic run-time loading mode: addresses are not converted until the program actually executes (dynamic relocation).
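As an illustration of static relocation, here is a minimal C sketch, assuming a hypothetical load module whose addresses are stored relative to 0 (the layout and names are illustrative, not from these notes): the loader adds the chosen base address to every address exactly once, at load time.

```c
#include <stdio.h>

#define N_REFS 3   /* number of address references in the hypothetical module */

int main(void) {
    int relative_refs[N_REFS] = {0x0010, 0x0044, 0x00A0}; /* addresses relative to 0 */
    int load_base = 0x5000;    /* wherever the loader decided to place the module    */

    /* Static relocation: every address is fixed up exactly once, at load time. */
    for (int i = 0; i < N_REFS; i++) {
        int absolute = load_base + relative_refs[i];
        printf("relative 0x%04X -> absolute 0x%04X\n", relative_refs[i], absolute);
    }
    return 0;
}
```

Under dynamic run-time loading this fix-up would not happen at load time; instead the hardware would add the contents of a relocation register to each relative address at every access.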

There are three ways to link programs:
(1) Static linking:
Before the program runs, the object modules and the library functions they require are linked into a single load module, which is not taken apart again.
(2) Dynamic linking at load time:
Linking is performed while the program is being loaded: as each object module is loaded, a call to an external module causes the loader to find that external module and load it into memory as well.
(3) Dynamic linking at run time:
Linking of certain modules is deferred until the program runs; modules that are not used during execution are never brought into memory.


Contiguous allocation methods
Single contiguous allocation: memory is divided into two parts, the system area and the user area. The system area is reserved for the OS and is usually placed at the low end of memory; the user area is all the memory outside the system area and is given to the user. This is only suitable for a single-program operating environment.
Fixed partition allocation: the user space of memory is divided into several fixed-size partitions, each of which holds one job. With several partitions, several jobs can run concurrently.
Dynamic partition allocation: memory is allocated dynamically according to actual need. Data structures used for allocation: the free partition table and the free partition chain.
Partition allocation algorithms: first fit, next fit (circular first fit), best fit, worst fit, quick fit (a first-fit sketch follows this list).
Partition allocation operations: allocating memory and reclaiming memory.
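A minimal sketch of dynamic partition allocation with the first-fit algorithm, assuming a singly linked free partition chain ordered by address (the structure and names are illustrative, not a real allocator):

```c
#include <stdio.h>
#include <stdlib.h>

/* One node of the free partition chain. */
typedef struct Partition {
    size_t start;              /* start address of the free partition */
    size_t size;               /* size of the free partition          */
    struct Partition *next;
} Partition;

/* First fit: scan the chain from the head and take the first partition
 * that is large enough; split off the remainder if any is left over.   */
size_t first_fit_alloc(Partition **head, size_t request) {
    Partition **pp = head;
    while (*pp) {
        Partition *p = *pp;
        if (p->size >= request) {
            size_t addr = p->start;
            if (p->size == request) {      /* exact fit: unlink the node   */
                *pp = p->next;
                free(p);
            } else {                       /* split: keep the tail as free */
                p->start += request;
                p->size  -= request;
            }
            return addr;
        }
        pp = &p->next;
    }
    return (size_t)-1;                     /* no partition large enough    */
}

int main(void) {
    /* Two free partitions: [1000,1300) and [2000,2800). */
    Partition *b = malloc(sizeof *b); *b = (Partition){2000, 800, NULL};
    Partition *a = malloc(sizeof *a); *a = (Partition){1000, 300, b};
    printf("alloc 500 -> %zu\n", first_fit_alloc(&a, 500));  /* taken at 2000 */
    printf("alloc 200 -> %zu\n", first_fit_alloc(&a, 200));  /* taken at 1000 */
    return 0;
}
```

Next fit differs only in that the scan resumes from where the previous search stopped; best fit takes the smallest partition that satisfies the request, worst fit the largest.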
Buddy system: both fixed and dynamic partitioning have shortcomings, and the buddy system is a compromise between the two. The buddy system requires that every partition, whether allocated or free, has a size of 2^k, where k is an integer and 1 <= k <= m;
here 2^1 is the smallest partition size that can be allocated, and 2^m is the largest.
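A minimal sketch of the two buddy-system calculations, assuming blocks are identified by their offset from the start of the managed area (illustrative only): rounding a request up to the nearest power of two 2^k, and finding the buddy of a block of size 2^k, which differs from it only in bit k of its offset.

```c
#include <stdio.h>

/* Round a request up to the nearest power of two, 2^k. */
static size_t round_up_pow2(size_t n) {
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* The buddy of the block at offset addr with size 2^k is obtained by
 * flipping bit k of the offset: the two buddies together form one block
 * of size 2^(k+1) aligned on a 2^(k+1) boundary.                        */
static size_t buddy_of(size_t addr, unsigned k) {
    return addr ^ ((size_t)1 << k);
}

int main(void) {
    printf("request 3000 -> block of %zu\n", round_up_pow2(3000)); /* 4096 = 2^12 */
    printf("buddy of offset 0x2000 (size 2^12) is 0x%zX\n",
           buddy_of(0x2000, 12));                                   /* 0x3000 */
    return 0;
}
```

When a block is freed, the allocator checks whether its buddy is also free; if so the two are merged into one block of size 2^(k+1), and the check is repeated at the next level.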
Hash algorithm: exploiting the speed of hash lookup and the way free partitions are distributed across the free-space lists, a hash function is built and a hash table is constructed with the free partition size as the key.
Relocatable partition allocation: several small partitions are stitched together into one large partition so that a user program can be loaded. However, the addresses in the moved programs and data must be modified (transformed) after the move.
Swapping: a process, or data, that cannot run for the time being is moved from memory to external storage, freeing up enough space so that a process that is now ready to run, together with the program and data it needs, can be brought into memory.
Management of the swap space:
External storage is divided into a file area and a swap area; the former stores user files, the latter stores processes swapped out of memory.

Basic paging storage management
Contiguous allocation produces many memory fragments. Although compaction can stitch many fragments into one usable large space, it comes at a cost, so this approach instead allows a process to be split directly into many non-adjacent parts of memory.

Page and page table:
Paging storage management divides a process's logical address space into a number of equal-sized pieces, called pages, numbered from 0. The corresponding memory space is divided into blocks of the same size, called physical blocks
or page frames, which are numbered in the same way. A page table is the collection of mappings from page numbers to physical block numbers, for easy lookup by the system.
Address translation mechanism:
To convert a logical address in the user address space into an address in physical memory, an address translation mechanism is required. It comes in two forms: the basic address translation mechanism and the address translation mechanism with a fast table (TLB). A basic translation sketch follows.
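A minimal sketch of basic address translation in a paged system, assuming a 4 KB page size and a simple in-memory page table (page-fault and TLB handling are omitted; all names and values are illustrative):

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u          /* assumed page size: 4 KB        */
#define N_PAGES   8u             /* assumed size of the page table */

/* Page table: page number -> physical block (frame) number. */
static uint32_t page_table[N_PAGES] = {5, 2, 7, 1, 0, 3, 6, 4};

/* Split the logical address into (page number, offset), look the page
 * number up in the page table, then glue the frame number and offset
 * back together to form the physical address.                        */
uint32_t translate(uint32_t logical) {
    uint32_t page   = logical / PAGE_SIZE;
    uint32_t offset = logical % PAGE_SIZE;
    if (page >= N_PAGES) {
        fprintf(stderr, "address 0x%X out of range\n", logical);
        return 0;
    }
    uint32_t frame = page_table[page];
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    uint32_t logical = 2 * PAGE_SIZE + 100;            /* page 2, offset 100 */
    printf("logical 0x%X -> physical 0x%X\n", logical, translate(logical));
    return 0;
}
```

The version with a fast table differs only in that the (page, frame) pair is looked up in the TLB first; the page table in memory is consulted only on a TLB miss.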

Basic segmented storage management method: segmented management can meet the needs of users and programmers: easier programming, information sharing, information protection, dynamic growth, and dynamic linking.
Basic principles of segmented systems:
Segmentation:
The address space of a job is divided into segments, each of which defines a set of related information; a mapping table (segment table) from logical segments to physical memory is also required, as the sketch below illustrates.
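A minimal sketch of address translation in a segmented system, assuming a small per-process segment table: a logical address is a (segment number, offset) pair, and the offset is checked against the segment length before the base is added (names and values are illustrative):

```c
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t base;    /* start of the segment in physical memory */
    uint32_t limit;   /* segment length                          */
} SegEntry;

/* Assumed segment table: e.g. code, data, stack. */
static SegEntry seg_table[] = {
    {0x8000, 0x1000},   /* segment 0 */
    {0xA000, 0x0400},   /* segment 1 */
    {0xC000, 0x0800},   /* segment 2 */
};

/* Translate (segment, offset); an offset beyond the segment length is an
 * addressing error (bounds check), otherwise physical = base + offset.   */
int translate(uint32_t seg, uint32_t offset, uint32_t *physical) {
    if (seg >= sizeof seg_table / sizeof seg_table[0]) return -1;
    if (offset >= seg_table[seg].limit) return -1;     /* out of bounds */
    *physical = seg_table[seg].base + offset;
    return 0;
}

int main(void) {
    uint32_t phys;
    if (translate(1, 0x0100, &phys) == 0)
        printf("(1, 0x100) -> 0x%X\n", phys);           /* 0xA100 */
    if (translate(1, 0x0500, &phys) != 0)
        printf("(1, 0x500) -> out of bounds\n");
    return 0;
}
```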
Information sharing:
One outstanding advantage of segmented systems is the ease with which segments can be shared: several processes are allowed to share one or more segments, and the protection of segments is simple and easy.
Reentrant code, also known as pure code, is code that multiple processes may execute at the same time; reentrant code is not allowed to change while it runs. To achieve this, each process copies the parts that may change
into its private data area and modifies only its own data, never the shared code.

Basic concepts of virtual memory
The memory management methods described above require a job to be loaded into memory before it can run. Sometimes a job is too large to fit into memory and therefore cannot run, so the memory capacity needs to be expanded logically.

Introduction of virtual memory; characteristics of conventional storage management methods:
(1) One-time loading: all of a process's data is loaded into memory at once, regardless of whether it is used during the run, resulting in a great deal of wasted memory.
(2) Residency: once a job has been loaded into memory, it stays resident in memory.

Principle of locality: a program exhibits local regularity while it executes:
(1) In most cases instructions are executed sequentially
(2) Execution is confined to a few procedures for a period of time
(3) Programs contain many loop structures
(4) The processing of data structures in a program is confined to a limited range
Locality also comes in two forms, temporal locality and spatial locality (illustrated in the sketch after this list).
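A small illustration of both forms of locality, assuming a C row-major 2-D array: traversing it row by row touches neighbouring addresses (spatial locality), while the loop variables and the running sum are reused over and over (temporal locality).

```c
#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int a[ROWS][COLS];

int main(void) {
    long sum = 0;
    /* Row-major traversal: consecutive iterations touch consecutive
     * addresses, so most accesses hit pages (and cache lines) that were
     * brought in very recently - spatial locality. Swapping the two loops
     * (column-major) would touch a different page on almost every access
     * and behave much worse under paging.                                */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += a[i][j];
    printf("sum = %ld\n", sum);
    return 0;
}
```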

Virtual memory is implemented on top of discrete (non-contiguous) storage management, so every virtual memory system is implemented in one of the following ways:
(1) Request (demand) paging system
A demand-paged virtual memory system is formed by adding the request-paging function and the page-replacement function on top of a basic paging system. The system must therefore provide the necessary hardware and software support.
Hardware support: the page table mechanism for request paging, the page-fault interrupt mechanism, and the address translation mechanism.
Software support: software that implements request paging and page replacement (a rough fault-handling sketch follows).
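A rough sketch of what the request-paging software does on a page fault, under simplified assumptions (one process, a tiny page table, page contents read from a backing "disk" array, trivial FIFO victim choice); everything here is illustrative, not an actual OS implementation:

```c
#include <stdio.h>
#include <string.h>

#define N_PAGES   4
#define N_FRAMES  2
#define PAGE_SIZE 16

typedef struct { int present; int frame; } PTE;

static PTE  page_table[N_PAGES];
static char memory[N_FRAMES][PAGE_SIZE];   /* physical frames           */
static char disk[N_PAGES][PAGE_SIZE];      /* backing store (swap/file) */
static int  next_victim = 0;               /* trivial FIFO replacement  */

/* The page-fault handler: pick a frame, write back the old page, read the
 * missing page in from the backing store, and fix up the page table.     */
static void handle_page_fault(int page) {
    int frame = next_victim;
    next_victim = (next_victim + 1) % N_FRAMES;
    for (int p = 0; p < N_PAGES; p++)            /* evict whoever held it */
        if (page_table[p].present && page_table[p].frame == frame) {
            memcpy(disk[p], memory[frame], PAGE_SIZE);
            page_table[p].present = 0;
        }
    memcpy(memory[frame], disk[page], PAGE_SIZE);
    page_table[page].present = 1;
    page_table[page].frame   = frame;
    printf("page fault: page %d loaded into frame %d\n", page, frame);
}

/* Access one byte of a page, faulting it in first if it is not present. */
static char read_byte(int page, int offset) {
    if (!page_table[page].present)
        handle_page_fault(page);
    return memory[page_table[page].frame][offset];
}

int main(void) {
    strcpy(disk[2], "hello");
    printf("byte = %c\n", read_byte(2, 0));   /* faults, then prints 'h' */
    return 0;
}
```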
(2) Request (demand) segmentation system
It is formed by adding several fields to the segment table mechanism of pure segmentation. It also requires a missing-segment interrupt mechanism and an address translation mechanism.

Characteristics of virtual memory: virtual memory has three main features: multiplicity, swappability, and virtuality.
(1) Multiplicity: a job may be loaded into memory in multiple installments.
(2) Swappability: a job is allowed to be swapped in and out while it runs. Data and programs that are not needed for the moment are swapped out, and swapped back into memory when they are needed.
(3) Virtuality: memory is expanded logically, so that the memory the user sees is much larger than the actual memory capacity.

Request paging storage management method: memory allocation strategy and allocation algorithms. Three problems need to be solved: the minimum number of physical blocks, the physical block allocation policy, and the physical block allocation algorithm.
(1) Determining the minimum number of physical blocks
This is the smallest number of physical blocks required to guarantee that the process can run at all, and it is related to the hardware architecture of the computer. For example, if a single instruction may cause up to six page faults, at least six physical blocks must be allocated to each process.
(2) Physical block allocation policy
In a request paging system, two memory allocation policies can be used, fixed allocation and variable allocation, and two replacement policies, global replacement and local replacement. These combine into the following three workable strategies:
fixed allocation with local replacement, variable allocation with global replacement, variable allocation with local replacement.
(3) Physical block allocation algorithms
Equal allocation algorithm, proportional allocation algorithm, priority-based allocation algorithm (a proportional-allocation sketch follows below).
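A minimal sketch of the proportional allocation algorithm, assuming frames are divided among processes in proportion to their sizes, with every process guaranteed at least an assumed minimum number of blocks (values are illustrative):

```c
#include <stdio.h>

#define MIN_FRAMES 2    /* assumed minimum number of physical blocks per process */

/* Give process i roughly (size_i / total_size) * total_frames frames, but
 * never fewer than MIN_FRAMES. Integer rounding may leave a few frames
 * unassigned; a real system would hand out the remainder separately.      */
void proportional_allocation(const int size[], int n, int total_frames, int out[]) {
    long total_size = 0;
    for (int i = 0; i < n; i++)
        total_size += size[i];
    for (int i = 0; i < n; i++) {
        int share = (int)((long)total_frames * size[i] / total_size);
        out[i] = share < MIN_FRAMES ? MIN_FRAMES : share;
    }
}

int main(void) {
    int sizes[] = {40, 120, 240};   /* process sizes in pages */
    int frames[3];
    proportional_allocation(sizes, 3, 64, frames);
    for (int i = 0; i < 3; i++)
        printf("process %d gets %d frames\n", i, frames[i]);
    return 0;
}
```

The equal allocation algorithm would simply give each process total_frames / n frames; the priority-based algorithm weights the share by process priority instead of (or in addition to) size.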
Fetch (paging-in) policy:
To determine when pages should be brought into memory while the system is running, either a prepaging policy or a demand (request) paging policy can be used.
Prepaging policy: based on prediction, pages that are expected to be accessed in the near future are brought in ahead of time; however, only about 50% of prepaged pages turn out to be used.
Demand paging policy: when a process needs to access some program or data during its run and finds that the page is not in memory, it immediately issues a request and the OS brings the required page into memory. Most virtual memory systems today use this policy.

Determining where pages are brought in from
In a request paging system, external storage is divided into a file area and a swap area; the swap area uses contiguous allocation while the file area uses discrete allocation, so I/O from the swap area is faster.
When a page fault occurs, the required page is brought into memory in one of three situations:
(1) The system has enough swap space: all required pages are brought in from the swap area, to improve paging speed.
(2) The system lacks enough swap space: files that will not be modified are brought in directly from the file area, while the parts that may be modified are swapped out to the swap area and later brought back in from there.
(3) The UNIX approach: pages that have never run are brought in from the file area, while pages that have run before and were swapped out are brought in from the swap area.

Page replacement algorithms: algorithms that decide which page to evict from memory when memory is full.
(1) Optimal replacement algorithm:
The page chosen for eviction is one that will never be used again, or that will not be accessed for the longest (future) time.
(2) FIFO page replacement algorithm:
Always evicts the page that entered memory earliest.
(3) LRU (least recently used) replacement algorithm:
For each page, record the time t that has elapsed since it was last accessed; when a page must be evicted, choose the one with the largest t.
(4) Simple clock replacement algorithm:
Each page is given an access bit, and all pages in memory are linked into a circular queue through link pointers. When a page is accessed, its access bit is set to 1 (a sketch follows this list).
(5) Improved clock replacement algorithm:
On top of the simple clock algorithm, another factor is added: the replacement cost. The preferred victim is a page that is both unused (not recently accessed) and unmodified.
Other algorithms include the least frequently used (LFU) replacement algorithm and the page buffering algorithm.
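A minimal sketch of the simple clock (second-chance) replacement algorithm, assuming the resident pages are kept in a circular list with one access bit each (page numbers and bits are illustrative):

```c
#include <stdio.h>

#define N_FRAMES 4

static int resident[N_FRAMES]   = {10, 11, 12, 13}; /* page numbers in memory  */
static int access_bit[N_FRAMES] = {1, 0, 1, 0};     /* set to 1 on each access */
static int hand = 0;                                /* clock hand position     */

/* Sweep the circular list: a page with access bit 1 gets a second chance
 * (its bit is cleared), the first page found with access bit 0 is evicted. */
int clock_select_victim(void) {
    for (;;) {
        if (access_bit[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % N_FRAMES;
            return victim;
        }
        access_bit[hand] = 0;          /* give it a second chance */
        hand = (hand + 1) % N_FRAMES;
    }
}

int main(void) {
    int f = clock_select_victim();
    printf("evict page %d from frame %d\n", resident[f], f);  /* page 11 */
    return 0;
}
```

The improved clock algorithm also consults a modified bit and prefers pages that are neither referenced nor modified, so the evicted page need not be written back to disk.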

Request segmentation storage management method; sharing and protection of segments: segmented storage management makes it easy to share and protect segments. The following describes the shared segment table, the data structure that must be configured to achieve segment sharing, as well as the operations on shared segments.
Shared segment table: one shared segment table is configured in the system, and every shared segment occupies one entry in it. Each entry records the segment number, segment length, memory address, presence bit, and the status of each process sharing the segment:
(1) Shared process count (count)
(2) Access control field: different processes may have different access rights to the segment.
(3) Segment number: different processes may use different segment numbers for the same shared segment.
Allocation and reclamation of shared segments: Allocation: when allocating memory for a shared segment, the first process that requests the shared segment is allocated a physical region, the shared segment is loaded into that region, an entry is added to the shared segment table, and the relevant fields are filled in;
count is set to 1, and each time another process later requests access, count := count + 1.
Reclamation: when a process sharing the segment no longer needs it, the segment is released (including the related table entries) and count := count - 1 is executed; if count = 0, the system reclaims the physical memory of the shared segment (a count-bookkeeping sketch follows).
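A minimal sketch of the count bookkeeping for a shared segment table entry, assuming one entry per shared segment (the structure is illustrative; a real entry also records the per-process segment numbers and access rights):

```c
#include <stdio.h>

typedef struct {
    int  count;       /* number of processes currently sharing the segment */
    int  present;     /* presence bit: is the segment in memory?           */
    long base, length;
} SharedSegEntry;

/* First attach allocates and loads the segment and sets count to 1;
 * every later attach just does count := count + 1.                   */
void attach_shared_segment(SharedSegEntry *e) {
    if (e->count == 0) {
        e->present = 1;            /* allocate memory and load the segment */
        e->count   = 1;
    } else {
        e->count += 1;
    }
}

/* Detach does count := count - 1; when count reaches 0 the physical
 * memory of the segment is reclaimed.                                */
void detach_shared_segment(SharedSegEntry *e) {
    e->count -= 1;
    if (e->count == 0)
        e->present = 0;            /* reclaim the segment's memory */
}

int main(void) {
    SharedSegEntry seg = {0, 0, 0x9000, 0x400};
    attach_shared_segment(&seg);   /* process A: count = 1, segment loaded */
    attach_shared_segment(&seg);   /* process B: count = 2                 */
    detach_shared_segment(&seg);   /* A releases it: count = 1             */
    detach_shared_segment(&seg);   /* B releases it: count = 0, reclaimed  */
    printf("count = %d, present = %d\n", seg.count, seg.present);
    return 0;
}
```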
Segment protection: in a segmented system, because each segment is logically independent, information protection is relatively easy to implement. The following methods can be used to keep information safe:
(1) Bounds (cross-boundary) checking
(2) Access control checking: an "access control" field is set in each segment table entry to specify how the segment may be accessed, typically read-only, execute-only, or read/write.
(3) Ring protection mechanism
Lower-numbered rings have the higher privilege: the OS kernel sits in ring 0, important utilities and operating system services sit in the middle rings, and ordinary applications are placed in the outer rings. Programs access data and call services according to the following rules (a sketch follows this list):
(1) A program may access data residing in the same ring or in a less privileged (outer) ring.
(2) A program may call services residing in the same ring or in a more privileged (inner) ring.
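A minimal sketch of the two ring-protection rules, assuming smaller ring numbers mean higher privilege (ring 0 = kernel); the function names are illustrative:

```c
#include <stdio.h>
#include <stdbool.h>

/* Rule 1: a program may access data in the same ring or a less
 * privileged (higher-numbered) ring.                             */
bool may_access_data(int caller_ring, int data_ring) {
    return data_ring >= caller_ring;
}

/* Rule 2: a program may call services in the same ring or a more
 * privileged (lower-numbered) ring.                               */
bool may_call_service(int caller_ring, int service_ring) {
    return service_ring <= caller_ring;
}

int main(void) {
    /* An application running in ring 3 ... */
    printf("read app data (ring 3): %d\n", may_access_data(3, 3));        /* allowed   */
    printf("read kernel data (ring 0): %d\n", may_access_data(3, 0));     /* forbidden */
    printf("call kernel service (ring 0): %d\n", may_call_service(3, 0)); /* allowed   */
    return 0;
}
```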


