Memory management of the operating system

Source: Internet
Author: User
Tags: binding, hashing, CPU usage

In a multiprogramming system, several processes must be read from disk into memory at the same time, so memory has to be managed in a way that lets each process execute correctly.
A typical cycle fetches an instruction from memory, decodes it, reads operands from memory, and writes the result back to memory. The memory unit sees only a stream of addresses.
A process occupies a block of memory spanning a contiguous set of addresses; a base register and a limit register restrict the range of addresses the process may access.
The only storage the CPU can access directly is main memory and its own registers, so any data an instruction refers to must first be brought into memory. This is where swapping and dynamic loading come in.
Cache: accessing a CPU register is much faster than accessing memory, so we cannot afford to go to memory on every reference; a cache sits between the two. A memory access may stall the CPU.
The restriction on the addresses a process may access is enforced by hardware: every access is compared against the base and limit registers.
Only the OS, via privileged instructions, can change the values of the limit and base registers.
The disk keeps an input queue holding processes that are waiting to be brought into memory.
In source code we usually refer to addresses through variables; the compiler binds a variable to a relocatable address, and the loader then binds that relocatable address to an absolute address.
Address binding can happen at three different times:
1. Compile time: if the process's memory location is known at compile time, absolute addresses can be generated directly.
2. Load time: the compiler generates relocatable code, and absolute addresses are bound when the program is loaded.
3. Execution time: if a process may be moved from one block of memory to another while it runs, binding must be delayed until execution time.
The address generated by the CPU is the logical address; the address seen by the memory unit is the physical address.
The set of all addresses generated by a program is its logical address space.
At run time, virtual addresses are mapped to physical addresses by the MMU: the value in the base (relocation) register is added to the logical address coming from the CPU to produce the physical address. The user program never sees physical addresses.
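A minimal sketch of this relocation-register MMU, with made-up register values for illustration: the logical address is checked against the limit, then the base is added.

```python
def mmu_translate(logical: int, base: int, limit: int) -> int:
    """Relocation-register MMU: trap if the logical address is outside
    [0, limit), otherwise add the base register to relocate it."""
    if not 0 <= logical < limit:
        raise MemoryError("trap: address out of range")
    return logical + base

# A process relocated to base 14000 with limit 1000:
assert mmu_translate(346, base=14000, limit=1000) == 14346
```

The user code only ever works with 346; the hardware produces 14346, which is why the OS can move the process (changing the base) without the program noticing.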

Dynamic loading: if a program had to be loaded into memory in its entirety, program size would be limited by the size of physical memory. With dynamic loading, a large program is split into multiple routines, and a routine is loaded only when it is needed; if one routine calls another that is not yet in memory, it is loaded at that moment. This improves memory-space utilization.

Dynamic linking: the program stores only a small stub that refers to the shared language library, which avoids keeping duplicate copies of the library; if version numbers differ, multiple versions of the library can coexist in memory.
--------
Swapping: if the CPU is to execute a process that is no longer in memory but resides in the backing store, the process must be swapped in (and another one swapped out).
When swapping a process back in, its location matters:
1. If addresses were bound to physical memory at load time, the process must be swapped back into the same location.
2. If binding happens at execution time, it can be swapped into any location.
Swap time consists of transfer time plus disk-head seek time.
When choosing a process to swap out, note that a process waiting in an I/O queue must not be selected.
Hence a modified form of swapping: swap only when memory is scarce, and stop swapping once the load drops below a certain point.

In contiguous allocation, the simplest way to manage memory, each process is given a single contiguous block of memory.
Memory mapping then requires the hardware to compare the CPU's logical address against the limit register first, and then add the base register.
Rarely used parts of the operating system (transient OS code) can be swapped out to the backing store, freeing that space for user processes.

The simplest allocation scheme divides memory into several fixed-size partitions, each holding exactly one process.
With variable partitions, the OS keeps a table recording which memory is free and which is occupied.
A free area of memory is called a hole.
When a process from the input queue needs memory, there are several strategies for choosing a hole:
1. Best fit: scan all holes and choose the smallest one that is large enough.
2. First fit: choose the first hole that is large enough.
3. Worst fit: choose the largest hole (the idea being that the leftover piece stays large enough to remain useful).
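The three strategies above can be sketched in a few lines; the hole sizes here are invented for illustration.

```python
def choose_hole(holes, request, strategy):
    """Pick the index of a free block ("hole") for `request` bytes.
    holes: list of free-block sizes. Returns None if nothing fits."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # earliest hole that fits
    if strategy == "best":
        return min(candidates)[1]                      # smallest hole that fits
    if strategy == "worst":
        return max(candidates)[1]                      # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]
assert choose_hole(holes, 212, "first") == 1  # 500 is the first that fits
assert choose_hole(holes, 212, "best") == 3   # 300 leaves the least waste
assert choose_hole(holes, 212, "worst") == 4  # 600 leaves the biggest remainder
```

Best fit leaves an 88-byte sliver behind in the 300-byte hole: exactly the external fragmentation discussed below.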
First fit and best fit both suffer from external fragmentation.
External fragmentation: after a process is placed into a hole, a sliver of space is left over, but it is too small to hold any process.
Internal fragmentation: when memory is handed out in fixed-size blocks, e.g. a process needs 3K and the block is 4K, the whole 4K goes to the process, and the unused 4K - 3K = 1K inside the block is internal fragmentation.
External fragmentation can be cured by compaction, squeezing the processes together; but if addresses were bound at load time, processes cannot be moved, so compaction is impossible.
Another approach is to allow a process's memory to be allocated noncontiguously.
50-percent rule: statistically, with first fit, for every N allocated blocks about another 0.5N are lost to fragmentation, i.e. roughly one third of memory may be unusable.

Paging: the CPU's logical address space is divided into pages, and physical memory is divided into frames of the same size; the backing store is divided into blocks of that size as well. The page number indexes the page table to find the corresponding frame, and the page offset is added to locate the byte in memory.
Splitting the logical address into page number and offset: if the logical address space has size 2^m (m address bits) and a page has size 2^n, then the low n bits are the page offset,
and the high m - n bits are the page number.
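The split is just a shift and a mask; here is a sketch assuming a 4 KB page (n = 12), a common but by no means universal page size.

```python
PAGE_SIZE = 4096                            # 2^12 bytes
OFFSET_BITS = PAGE_SIZE.bit_length() - 1    # n = 12

def split_address(logical: int):
    """Split a logical address into (page number, page offset)."""
    page = logical >> OFFSET_BITS           # high m - n bits
    offset = logical & (PAGE_SIZE - 1)      # low n bits
    return page, offset

assert split_address(0x12345) == (0x12, 0x345)
```

With hex addresses and a 4 KB page the split is visible by eye: the last three hex digits are the offset.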
Paging causes internal fragmentation: frame size is fixed, and allocation is always a whole number of frames.
A process consists of several pages, each occupying one frame, so the process's memory need not be contiguous; each process has its own page table, and the OS keeps a copy of every process's page table.
Frame table: records all frames and whether each is free or allocated.
Originally the page table was kept in a set of dedicated registers, which are fast. As page tables grew, the table had to move into memory, and the PCB keeps only a page-table base register (PTBR) pointing to the start of the table. Fetching the frame number from the table and adding the page offset then requires a second access, so every data reference costs two memory accesses.
Because the scheme above needs two memory accesses, latency is high, so a small hardware cache, the TLB, keeps a portion of the page table; all TLB entries are searched in parallel, which is fast. On a miss, the ordinary page-table access is performed and the entry is added to the TLB (exploiting locality). Structurally, a TLB entry's key is the page number and its value is the frame number.
Some TLBs store an address-space identifier (ASID) in each entry; the ASID uniquely identifies a process, and a lookup must match both the ASID and the page number.
Without ASIDs, the TLB can hold entries for only one process's page table, so it must be flushed on every context switch.
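A toy model of the hit/miss behaviour (the page-table contents and the crude FIFO-style eviction are illustrative assumptions, not how real hardware works):

```python
class TLB:
    """Toy TLB: a small page-number -> frame-number cache sitting in
    front of the full page table, which lives "in memory"."""
    def __init__(self, page_table, capacity=16):
        self.page_table = page_table
        self.cache = {}                 # the fast associative part
        self.capacity = capacity
        self.hits = self.lookups = 0

    def translate(self, page):
        self.lookups += 1
        if page in self.cache:          # TLB hit: no extra memory access
            self.hits += 1
            return self.cache[page]
        frame = self.page_table[page]   # miss: extra access to the page table
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # evict the oldest entry
        self.cache[page] = frame        # install for next time (locality)
        return frame

tlb = TLB({0: 5, 1: 6, 2: 1})
for p in (0, 0, 1, 0):
    tlb.translate(p)
assert (tlb.hits, tlb.lookups) == (2, 4)   # hit ratio 50%
```

Flushing on a context switch (the no-ASID case) would simply be `tlb.cache.clear()`, after which every entry must be refetched.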

TLB hit ratio: the probability that a translation is found in the TLB.

Each page can be given protection bits: read-only, read-write, executable, and so on.
The page table is also protected by a valid-invalid bit (v or i) in each entry, indicating whether the page number is legal: if a program uses only pages 1-5, page 6 is marked i. Alternatively, a page-table length register (PTLR) can bound the size of the page table.
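Both checks can be sketched together; the table contents below are made up, and a real MMU performs this in hardware, not software.

```python
def protected_lookup(page_table, page, ptlr):
    """page_table[i] == (frame, valid). Trap on any illegal page:
    either beyond the PTLR bound or marked invalid ('i')."""
    if page >= ptlr:
        raise MemoryError("trap: page number beyond PTLR")
    frame, valid = page_table[page]
    if not valid:
        raise MemoryError("trap: invalid (i) page-table entry")
    return frame

table = [(9, True), (4, True), (2, False)]   # page 2 is marked 'i'
assert protected_lookup(table, 0, ptlr=len(table)) == 9
```

Accessing page 2 (invalid bit) or page 3 (beyond the PTLR) both raise the trap, which the OS would turn into an addressing error for the process.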
Paging makes it easy to share common (reentrant) code, provided that code is never modified. Why is sharing possible? Each process needs only its own page table, and different page tables can map to the same frames in memory, so a single copy of the program code suffices; the data, however, must be private to each process and cannot be shared.
There are several ways to organize page tables:
1. Hierarchical (multi-level) page table. A forward-mapped page table translates from the outermost page table inward, one level at a time, like a multilevel index.
Motivation: as address spaces grow, page tables become too large to store contiguously, so the page table itself is paged, giving a multilevel page table.
2. Hashed page table. The page number of the logical address is hashed to locate a bucket in a hash table; each bucket holds a linked list whose elements contain the virtual page number, the frame number, and a next pointer.
Clustered page tables, a variant suited to 64-bit address spaces, let each hash-table entry store mappings for several pages instead of one.
3. Inverted page table. Normally every process has its own page table, and together these occupy a lot of memory, so the whole system keeps a single inverted page table. Each entry is <pid, page_number>, and a virtual address is <pid, page_number, offset>. The drawback is the long search time, but it can be combined with the techniques above: consult the TLB first, then a hash table, then the inverted page table. A further drawback is that an inverted page table cannot easily support shared memory.
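The inverted lookup can be modelled as a scan over a frame-indexed list; the three entries below are invented for illustration, and real systems hash precisely to avoid this linear search.

```python
def inverted_lookup(inverted_table, pid, page):
    """inverted_table[frame] == (pid, page): one entry per physical
    frame for the whole system. Returns the frame number, found by
    a linear scan -- the structural cost of inverting the table."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame
    raise KeyError("page fault: no frame holds this (pid, page)")

table = [(1, 0), (2, 3), (1, 7)]   # frame i holds the page of (pid, page)
assert inverted_lookup(table, 1, 7) == 2
```

Note why sharing is hard here: each frame entry records exactly one (pid, page) pair, so two processes cannot both map the same frame.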

Segmentation is built around the user's view of memory: a program is seen as a collection of segments with no particular order, and the user specifies an address as a <segment number, segment offset> pair.
Much like a page table, a segment table implements the address mapping; each entry holds a base and a limit.
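Segment translation is the base/limit check again, just selected by segment number; the segment table below is a made-up example.

```python
def segment_translate(segment_table, seg, offset):
    """segment_table[seg] == (base, limit). Trap if the offset
    exceeds the segment's limit, else relocate by its base."""
    base, limit = segment_table[seg]
    if not 0 <= offset < limit:
        raise MemoryError("trap: segment offset out of range")
    return base + offset

segs = {0: (1400, 1000), 1: (6300, 400)}  # seg -> (base, limit)
assert segment_translate(segs, 1, 53) == 6353
```

Unlike paging, each segment has its own length, so the limit check is per segment rather than a fixed page size.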
