Operating System Learning Notes: Memory Management


Memory is at the center of modern computer operation (not the CPU, as you might expect). Memory consists of a large array of words or bytes, each with its own address.

The CPU fetches instructions from memory according to the program counter. The only storage the CPU can access directly is its registers and main memory. Normally a program is stored on disk and is brought into memory when it is executed. So how does the CPU find instructions and data?

In general, there are several points at which instructions and data can be bound to memory addresses:

1) At compile time: if the process's resident address in memory is known in advance, the compiler generates absolute code.

2) At load time: the compiler generates relocatable code, and binding is deferred until the program is loaded.

3) At execution time: binding is delayed until run time, so the process can be moved in memory during execution. This requires hardware support (the MMU), and it is the method most operating systems use.

I. Background

1. Logical Address and Physical Address

The address generated by the CPU is called the logical address; the address of the memory unit it actually refers to (that is, the address loaded into the memory-address register) is called the physical address. The mapping from logical addresses to physical addresses is done by a hardware device, the memory management unit (MMU). User programs generate only logical addresses, which must be mapped to physical addresses before use; a user program never sees a real physical address.
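As a rough illustration only (not any particular system's MMU), here is a minimal C sketch of the simplest mapping hardware: a relocation (base) register plus a limit register. Every logical address is checked against the limit and then offset by the base; all names and numbers are made up for the example.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical MMU state: one relocation (base) register and one limit register. */
typedef struct {
    uint32_t base;   /* start of the process's region in physical memory */
    uint32_t limit;  /* size of the process's logical address space       */
} mmu_t;

/* Translate a logical address to a physical one; returns false on a protection fault. */
static bool translate(const mmu_t *mmu, uint32_t logical, uint32_t *physical) {
    if (logical >= mmu->limit)
        return false;               /* address outside the process's space */
    *physical = mmu->base + logical;
    return true;
}

int main(void) {
    mmu_t mmu = { .base = 140000, .limit = 12000 };   /* example values */
    uint32_t phys;
    if (translate(&mmu, 3500, &phys))
        printf("logical 3500 -> physical %u\n", phys);        /* 143500 */
    if (!translate(&mmu, 20000, &phys))
        printf("logical 20000 -> trap: addressing error\n");
    return 0;
}
```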


2. Dynamic Loading and Dynamic Linking (DLLs)

If a process's entire program and data had to be in physical memory, the size of a process would be limited by the size of physical memory. With dynamic loading, a routine is not loaded until it is called. Dynamic loading requires no special support from the operating system; it is the programmer's responsibility.

Alternatively, you can use dynamically linked libraries (DLLs). Unlike dynamic loading, dynamic linking typically requires help from the operating system, because a DLL can be shared by multiple processes, and only the operating system can check whether the needed routine already resides in another process's memory space.

DLLs also come in versions; different versions of a library may be loaded into memory at the same time.


II. Swapping

A process must be in memory to execute, but a process can be temporarily swapped out of memory to a backing store and brought back into memory when it needs to run again.

Swapping can follow various policies, for example priority-based swapping.


III. Contiguous Memory Allocation

Memory is typically divided into two partitions: one for the resident operating system and one for user processes.

It is often necessary to keep several processes in memory at once, so we must consider how to allocate memory to the processes waiting in the input queue. With contiguous memory allocation, each process occupies a single contiguous region of memory.

One of the simplest methods is to divide memory into several fixed-size partitions, each holding exactly one process. This method is no longer used.

An improved version of fixed partitioning is variable partitioning: the operating system keeps a table recording which parts of memory are in use. Initially all memory is available to user processes as one large block of free memory, called a hole. When a new process needs memory, the system searches for a hole large enough for it; if one is found, the leftover part of the hole remains available for later allocations.

The algorithms for finding a suitable hole include:

1) First fit: allocate the first hole that is big enough and stop searching.

2) Best fit: scan the whole list and allocate the smallest hole that is big enough.

3) Worst fit: scan the whole list and allocate the largest hole.
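To make the three strategies concrete, here is a minimal C sketch comparing them over a hand-made list of hole sizes; the hole sizes, the request, and the function names are all invented for illustration.

```c
#include <stdio.h>

/* Free holes, in table order; sizes are arbitrary example values (in KB). */
static int holes[] = { 100, 500, 200, 300, 600 };
static const int NHOLES = 5;

/* Each function returns the index of the chosen hole, or -1 if none fits. */
static int first_fit(int request) {
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request) return i;        /* stop at the first hole that fits */
    return -1;
}

static int best_fit(int request) {
    int best = -1;
    for (int i = 0; i < NHOLES; i++)              /* full scan: smallest hole that fits */
        if (holes[i] >= request && (best < 0 || holes[i] < holes[best])) best = i;
    return best;
}

static int worst_fit(int request) {
    int worst = -1;
    for (int i = 0; i < NHOLES; i++)              /* full scan: largest hole that fits */
        if (holes[i] >= request && (worst < 0 || holes[i] > holes[worst])) worst = i;
    return worst;
}

int main(void) {
    int request = 212;
    printf("first fit -> hole %d\n", first_fit(request));  /* index 1 (500 KB) */
    printf("best fit  -> hole %d\n", best_fit(request));   /* index 3 (300 KB) */
    printf("worst fit -> hole %d\n", worst_fit(request));  /* index 4 (600 KB) */
    return 0;
}
```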

Of the three algorithms, worst fit performs the worst, and first fit is generally a little faster than best fit. Both first fit and best fit suffer from external fragmentation (partitioning produces external fragmentation, whereas paging produces internal fragmentation). One remedy is compaction (by analogy, SQL Server also has fragmentation problems, which are addressed by rebuilding or shrinking indexes). Compaction means moving memory contents around so that all free space is merged into one block. But compaction is not always applicable: if relocation is static and done at assembly or load time, the program cannot be moved and compaction is impossible; if relocation is dynamic and done at run time, compaction is possible, although moving memory contents at run time is expensive.

Another solution is to allow a process's physical address space to be non-contiguous. Two schemes do this: paging and segmentation.


IV. Paging

Paging allows the physical address space of a process to be non-contiguous.

Paging avoids the problem of fitting memory chunks of varying sizes onto the backing store. The backing store suffers from the same fragmentation problems as memory, but it is much slower to access, so compaction there is impractical. Paging instead divides physical memory into fixed-size blocks, which avoids external fragmentation. Because of these advantages, paging is used by most operating systems.

Paging is traditionally handled by hardware. The recent trend, however, is a close integration of hardware and operating system, especially on 64-bit microprocessors.

1. Basic Methods

In a paging scheme, physical memory is divided into fixed-size blocks called frames, and logical memory is divided into blocks of the same size called pages. The backing store is divided into blocks of this size as well. The page size is defined by the hardware and is typically a power of 2, which makes it easy to split a logical address into a page number and a page offset.




[Figure: paging hardware — p: page number, d: page offset, f: frame number]

Each address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into the page table, which contains the base address (frame number) of each page in physical memory; frame base address + page offset = physical address, as shown in the figure above.
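A minimal C sketch of this translation, assuming a hypothetical 4 KB page size and a tiny hand-filled page table (both chosen only for the example):

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   4096u   /* assumed page size: 4 KB (a power of 2) */
#define OFFSET_BITS 12u     /* log2(PAGE_SIZE) */

/* Toy page table: page_table[p] holds the frame number f for page p. */
static uint32_t page_table[] = { 5, 6, 1, 2 };

static uint32_t translate(uint32_t logical) {
    uint32_t p = logical >> OFFSET_BITS;        /* page number  */
    uint32_t d = logical & (PAGE_SIZE - 1);     /* page offset  */
    uint32_t f = page_table[p];                 /* frame number */
    return (f << OFFSET_BITS) | d;              /* frame base + offset */
}

int main(void) {
    uint32_t logical = 2 * PAGE_SIZE + 100;     /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    /* page 2 maps to frame 1, so physical = 1*4096 + 100 = 4196 */
    return 0;
}
```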


Paging produces no external fragmentation, but it can produce internal fragmentation, because the memory a process needs is not necessarily an exact multiple of the page size. How large the page size should be is itself a design question.

One feature of paging is that it separates the user's view of memory from actual physical memory. From the user's point of view, the program deals with memory as one contiguous block that exists for it alone. In reality, many processes share physical memory, and a process's pages may be scattered across it non-contiguously.

The two views are reconciled by the address-translation hardware. All of this is transparent to the user program and controlled by the operating system. A user program cannot access memory beyond what its page table defines.

Because the operating system manages physical memory, it must know the allocation details: which frames are allocated, which are free, how many frames there are in total, and so on. This information is usually kept in a data structure called the frame table.


2. Hardware Support

How does the operating system maintain the page table?

Most operating systems allocate a page table to each process; a pointer to the page table is stored in the process control block along with the other register values.

A page table can be kept in a dedicated set of registers, which is the most efficient approach, but this works only if the page table is small.

If the page table is very large, say one million entries, it must be kept in memory, and a page-table base register (PTBR) points to it. Switching page tables then only requires changing this one register, which is very fast.

The problem with this approach is that accessing a byte now requires two memory accesses (one for the page-table entry, one for the data), effectively halving memory speed. That delay is intolerable. The standard solution is a small hardware cache: the translation look-aside buffer (TLB). The TLB holds only a small subset of the page-table entries. To translate a logical address, the TLB is searched first; if the page number is found, the frame number (and thus the physical address) is available immediately. If it is not in the TLB, the page table in memory is consulted, and a replacement algorithm is used to bring the entry into the TLB before forming the physical address.
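A minimal C sketch of that lookup order; the TLB size, the trivial FIFO replacement, and the mappings are assumptions made only for the example.

```c
#include <stdio.h>
#include <stdint.h>

#define TLB_SIZE 4

typedef struct { uint32_t page; uint32_t frame; int valid; } tlb_entry_t;

static tlb_entry_t tlb[TLB_SIZE];     /* tiny TLB, initially empty */
static uint32_t page_table[64];       /* in-memory page table (costs one memory access to read) */
static unsigned next_victim = 0;      /* trivial FIFO replacement, just for the sketch */

static uint32_t lookup_frame(uint32_t page) {
    for (int i = 0; i < TLB_SIZE; i++)                   /* 1) search the TLB */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame;                         /* TLB hit: no extra memory access */

    uint32_t frame = page_table[page];                   /* 2) TLB miss: read the page table */
    tlb[next_victim] = (tlb_entry_t){ page, frame, 1 };  /* 3) install entry, replacing a victim */
    next_victim = (next_victim + 1) % TLB_SIZE;
    return frame;
}

int main(void) {
    for (uint32_t p = 0; p < 64; p++) page_table[p] = p + 100;   /* made-up mapping */
    printf("page 3 -> frame %u (miss, then cached)\n", lookup_frame(3));
    printf("page 3 -> frame %u (hit)\n", lookup_frame(3));
    return 0;
}
```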


3. Protection

In a paging environment, memory protection is implemented by protection bits associated with each frame. These bits are usually kept in the page table.

One bit defines whether a page is read-write or read-only.

You can also define a valid-invalid bit. When this bit is set to valid, the page is in the process's logical address space and is a legal page; otherwise the address is illegal. A process rarely uses its entire address space; typically only a small portion is used.


4. Shared Pages

Another benefit of paging is the ability to share common code.


V. Structure of the Page Table

1. Hierarchical Page Table

For a single process, the page table can already contain a huge number of entries, for example one million. If the page table itself becomes too large, it can be layered: the page table itself is paged. The original logical address is

page number + page offset

With a two-level page table it becomes: outer page number (p1) + inner page number (p2) + page offset (d).
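As a sketch, here is how the two page-table indices and the offset are extracted from a 32-bit logical address; the classic 10 + 10 + 12 bit split is assumed only for illustration.

```c
#include <stdio.h>
#include <stdint.h>

/* Classic two-level split of a 32-bit logical address: 10 + 10 + 12 bits. */
#define P1_BITS     10u
#define P2_BITS     10u
#define OFFSET_BITS 12u

int main(void) {
    uint32_t logical = 0x00ABC123;                                   /* arbitrary example address */

    uint32_t d  = logical & ((1u << OFFSET_BITS) - 1);               /* page offset       */
    uint32_t p2 = (logical >> OFFSET_BITS) & ((1u << P2_BITS) - 1);  /* inner page number */
    uint32_t p1 = logical >> (OFFSET_BITS + P2_BITS);                /* outer page number */

    /* p1 indexes the outer page table, which points to an inner page table;
       p2 indexes that inner table to find the frame; d is the offset within the frame. */
    printf("p1=%u p2=%u d=%u\n", p1, p2, d);
    return 0;
}
```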




2. Hashed Page Table

A common way to handle address spaces larger than 32 bits is to use a hashed page table.

Each entry in the hash table contains a linked list of elements, and each element has three fields: 1) the virtual page number, 2) the mapped frame number, and 3) a pointer to the next element in the list. (The "virtual page number" is simply the page number from the logical address; it is what gets hashed.)

To translate an address, the page number is hashed to find the corresponding bucket in the hash table, the chain is searched for a matching virtual page number, and the frame number is read from the matching element.
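A minimal C sketch of this lookup; the hash function, the table size, and the mappings are invented for the example.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define TABLE_SIZE 16u

/* One element of a bucket's chain: virtual page number, frame number, next pointer. */
typedef struct hpt_elem {
    uint32_t vpn;
    uint32_t frame;
    struct hpt_elem *next;
} hpt_elem_t;

static hpt_elem_t *buckets[TABLE_SIZE];

static uint32_t hash_vpn(uint32_t vpn) { return vpn % TABLE_SIZE; }   /* toy hash */

static void insert(uint32_t vpn, uint32_t frame) {
    hpt_elem_t *e = malloc(sizeof *e);
    e->vpn = vpn; e->frame = frame;
    e->next = buckets[hash_vpn(vpn)];
    buckets[hash_vpn(vpn)] = e;
}

/* Hash the page number, then walk the chain until the matching vpn is found. */
static int lookup(uint32_t vpn, uint32_t *frame) {
    for (hpt_elem_t *e = buckets[hash_vpn(vpn)]; e; e = e->next)
        if (e->vpn == vpn) { *frame = e->frame; return 1; }
    return 0;   /* not mapped */
}

int main(void) {
    insert(42, 7);   /* made-up mappings */
    insert(58, 3);   /* 58 % 16 == 42 % 16, so these two share a bucket */
    uint32_t f;
    if (lookup(42, &f)) printf("page 42 -> frame %u\n", f);
    if (lookup(58, &f)) printf("page 58 -> frame %u\n", f);
    return 0;
}
```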


3. Inverted Page Table

To avoid the bloat of per-process page tables that must contain an entry for every page, an inverted page table can be used. Only frames that are actually in use have entries. The whole system has just one table, with one entry per physical frame; each entry records the logical page number stored in that frame and the id of the owning process. The main drawback is that sharing memory becomes cumbersome.
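A minimal C sketch of the idea; the table size, the linear search, and the example mappings are simplifications chosen only for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define NUM_FRAMES 8

/* One entry per physical frame: owning process id and the logical page stored there. */
typedef struct { int pid; uint32_t page; int used; } ipt_entry_t;

static ipt_entry_t ipt[NUM_FRAMES];

/* Search the inverted page table for (pid, page); the matching index IS the frame number. */
static int find_frame(int pid, uint32_t page) {
    for (int f = 0; f < NUM_FRAMES; f++)
        if (ipt[f].used && ipt[f].pid == pid && ipt[f].page == page)
            return f;
    return -1;   /* page not in memory */
}

int main(void) {
    ipt[3] = (ipt_entry_t){ .pid = 17, .page = 5, .used = 1 };   /* made-up example */
    ipt[6] = (ipt_entry_t){ .pid = 42, .page = 5, .used = 1 };   /* same page number, other process */

    printf("pid 17, page 5 -> frame %d\n", find_frame(17, 5));   /* 3 */
    printf("pid 42, page 5 -> frame %d\n", find_frame(42, 5));   /* 6 */
    return 0;
}
```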


VI. Segmentation

With paging, the user's view of memory is separated from actual physical memory, and logical memory is mapped onto physical memory behind the scenes.

But to us programmers, memory is a collection of variables and objects, referred to by names and pointers, not by where they happen to be located.


Segmentation is a memory-management scheme that supports this user view. The logical address space consists of a set of segments. The biggest difference from paging is that segments are not of fixed size. Beyond that, forming a physical address from a segment number plus an offset does not look very different from paging.

A process's address space is divided into segments, and each segment holds a logically related set of information, for example a code segment or a data segment. Each segment starts at address 0 and occupies a contiguous range of addresses. Hence, in a segmentation scheme, a logical address is two-dimensional: (segment number, offset).
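A minimal C sketch of translating such a two-dimensional (segment, offset) address with a segment table of base/limit pairs; the table contents are made up for the example.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* One segment-table entry: where the segment starts in physical memory and how long it is. */
typedef struct { uint32_t base; uint32_t limit; } seg_entry_t;

/* Toy segment table: e.g. segment 0 = code, segment 1 = data. */
static seg_entry_t seg_table[] = {
    { .base = 1400, .limit = 1000 },
    { .base = 6300, .limit =  400 },
};

static bool translate(uint32_t seg, uint32_t offset, uint32_t *physical) {
    if (seg >= 2 || offset >= seg_table[seg].limit)
        return false;                          /* trap: offset beyond segment length */
    *physical = seg_table[seg].base + offset;
    return true;
}

int main(void) {
    uint32_t phys;
    if (translate(1, 53, &phys))   printf("(1, 53)  -> %u\n", phys);   /* 6353 */
    if (!translate(1, 500, &phys)) printf("(1, 500) -> addressing error\n");
    return 0;
}
```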


VII. Main Differences between Paging and Segmentation


Paging and segmentation have many similarities; for example, neither requires contiguous storage. But conceptually the two are completely different, mainly in the following respects:


(1) A page is a physical unit of information; pages exist to allow non-contiguous allocation and to solve the memory fragmentation problem. In other words, paging serves the system's management needs. A segment is a logical unit of information containing a relatively complete set of related data; segmentation exists to better support sharing and protection and to serve the user's needs.


(2) The size of a page is fixed and determined by the system, and splitting a logical address into a page number and an offset is done by the hardware. The length of a segment is not fixed; it is determined by the program the user writes, usually by the compiler, which divides the source program into segments according to the nature of the information.


(3) The paged address space of a job is one-dimensional; the segmented address space is two-dimensional.

For example, suppose you attend lectures and take notes in a paper notebook. The notebook has 100 sheets, and you take three courses: Chinese, math, and English. To make later review easy, you have two options for using the notebook.

The first: you divide the notebook in advance, starting from the front: sheets 2 to 30 for Chinese notes, sheets 31 to 60 for math notes, sheets 61 to 100 for English notes, and on the first sheet you write a list recording the range of each course's notes. This is segmented management; the first sheet acts as the segment table.

The second: you take notes starting from the second sheet, in whatever order the lectures happen: sheet 2 is math, sheet 3 Chinese, sheet 4 English ... Finally, on the first sheet you make a catalogue recording that the Chinese notes are on sheets 3, 7, 14, 15 ..., the math notes on sheets 2, 6, 8, 9, 11 ..., and the English notes on sheets 4, 5, 12 ... This is paged management; the first sheet acts as the page table. To review a course, you look up the relevant sheet numbers in the catalogue and turn to those sheets.


VIII. Segment-Page Storage Management

1. Basic idea:

Paging effectively improves memory utilization, while segmentation reflects the logical structure of a program and facilitates sharing and protection of segments. Combining the two gives the segment-page (segmented paging) storage management scheme.

In a segment-page system, the address space of a job is first divided into logical segments, each with its own segment number; each segment is then divided into several equal-sized pages. Main memory is likewise divided into frames of the same size, and main memory is allocated in units of frames.
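A minimal C sketch combining the two schemes, where each segment-table entry points to that segment's own page table; the sizes and mappings are invented for the example.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Per-segment page tables (made-up mappings: page -> frame). */
static uint32_t code_pages[] = { 9, 4 };
static uint32_t data_pages[] = { 2, 7, 11 };

/* Segment table: each entry points to that segment's page table and records its length in pages. */
typedef struct { uint32_t *page_table; uint32_t npages; } seg_entry_t;
static seg_entry_t seg_table[] = {
    { code_pages, 2 },
    { data_pages, 3 },
};

static int translate(uint32_t seg, uint32_t page, uint32_t offset, uint32_t *physical) {
    if (seg >= 2 || page >= seg_table[seg].npages || offset >= PAGE_SIZE)
        return 0;                                        /* invalid address */
    uint32_t frame = seg_table[seg].page_table[page];    /* look up the segment's page table */
    *physical = frame * PAGE_SIZE + offset;
    return 1;
}

int main(void) {
    uint32_t phys;
    if (translate(1, 2, 100, &phys))
        printf("(seg 1, page 2, offset 100) -> physical %u\n", phys);   /* 11*4096 + 100 */
    return 0;
}
```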


Reference article:

http://blog.sina.com.cn/s/blog_4692ea0a0101j4ss.html
