Operating system: Memory management (concept)

Source: Internet
Author: User

The operating system is the cornerstone of a computer system: it bridges the underlying hardware and the application software above it, controls the execution of other programs, manages system resources, and provides supporting system services. For professional programmers, operating-system knowledge matters a great deal, because both low-level embedded development and upper-layer cloud computing development draw on it.

    • What kinds of memory management are there?
    • What is the difference between segmentation and paging?
    • What is virtual memory?
    • What is memory fragmentation? What is internal fragmentation? What is external fragmentation?
    • What is the difference between a virtual address, a logical address, a linear address, and a physical address?
    • What are the cache replacement algorithms?

What kinds of memory management are there?

Common memory management schemes are block management, page management, segment management, and segment-page management.

(1) Block management: main memory is divided into large blocks. When a program fragment that is not in main memory needs to run, a whole block is allocated and the fragment is loaded into it, even if the fragment is only a few bytes long. This wastes a great deal of space (on average about 50% of memory), but it is easy to manage.

(2) Page management: main memory is divided into pages, each much smaller than a block, so space utilization is much higher than with block management.

(3) Segment management: main memory is divided into segments, each occupying even less space than a page, so space utilization is higher still than with page management. But there is a drawback: a program may be split into dozens of segments, so much time is wasted computing the physical address of each segment.

(4) Segment-page management: combines the advantages of segment management and page management. Main memory is divided into segments, and each segment is divided into pages. The drawback is that under segment-page management, fetching one piece of data requires three memory accesses.
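The three memory accesses mentioned above can be made concrete with a small sketch. The tables and values below are invented for illustration; the point is simply that the segment table, the page table, and the data itself all live in main memory, so one load costs three reads.

```python
# Hypothetical segment-page translation: every table lives in main memory,
# so resolving one virtual address costs three memory reads.
MEMORY_READS = 0

def mem_read(table, key):
    """Model a read from main memory (segment table, page table, or data)."""
    global MEMORY_READS
    MEMORY_READS += 1
    return table[key]

# Toy tables: segment 0 owns a page table mapping page 2 -> frame 7.
segment_table = {0: {"page_table": "pt0"}}
page_tables   = {"pt0": {2: 7}}
frames        = {7: {16: "hello"}}   # frame 7, offset 16 holds the data

def translate_and_load(seg, page, offset):
    pt_name = mem_read(segment_table, seg)["page_table"]   # read 1: segment table
    frame   = mem_read(page_tables[pt_name], page)          # read 2: page table
    return mem_read(frames[frame], offset)                  # read 3: the data itself

value = translate_and_load(seg=0, page=2, offset=16)
print(value, MEMORY_READS)   # -> hello 3
```

Real hardware mitigates this cost with a TLB that caches recent translations, but the worst case remains the full walk shown here.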

What is the difference between segmentation and paging?

A page is a physical unit of information. Paging implements discrete allocation in order to reduce wasted memory and raise memory utilization; in other words, paging serves the needs of system management rather than the needs of the user.

A segment is a logical unit of information, containing a set of information that is relatively complete in meaning. The purpose of segmentation is to better meet the needs of users. Page size is fixed and determined by the system: the hardware splits a logical address into a page number and an in-page offset, so a system can have only one page size. Segment length is not fixed; it is decided by the program the user writes, and is usually determined by the compiler while translating the source program, dividing it according to the nature of the information.

The address space of a paged job is one-dimensional, a single linear space, so a programmer can identify an address with a single number. The address space of a segmented job is two-dimensional: to identify an address, the programmer must give both a segment name and an offset within that segment.
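A short sketch of the one-dimensional vs two-dimensional distinction, with an assumed 4 KB page size: under paging one number suffices and the hardware splits it, while under segmentation the programmer supplies two separate coordinates (the segment name here is invented).

```python
# A paged address is one-dimensional: a single number the hardware splits
# into (page number, in-page offset). A segmented address is two-dimensional:
# the programmer must supply (segment, offset) as two components.
PAGE_SIZE = 4096  # assumed page size

def split_paged(linear_addr):
    """Hardware-style split of one linear number."""
    return divmod(linear_addr, PAGE_SIZE)   # (page number, in-page offset)

page, offset = split_paged(8195)
print(page, offset)          # -> 2 3  (8195 = 2*4096 + 3)

# Under segmentation the same location needs two independent coordinates,
# and the offset is only meaningful within its own segment:
segmented_addr = ("data_segment", 3)   # (segment name, in-segment offset)
print(segmented_addr)
```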

What is virtual memory?

Virtual memory is a memory-management technique of computer systems. It is defined relative to physical memory and can be thought of as "simulated" memory. It lets an application believe it has a contiguous, complete address space, allowing programmers to write and run programs far larger than actual physical memory, which makes many large software projects feasible on systems with limited memory resources. In reality the address space is usually split across multiple physical memory fragments, with some parts temporarily stored on external disk storage and swapped in when the data is needed. Virtual memory brings the following benefits:

(1) It expands the address space. Whether with segment-type, page-type, or segment-page-type virtual memory, the address space is larger than physical memory.

(2) Memory protection. Each process runs in its own virtual address space and cannot interfere with others. In addition, virtual memory provides write protection for specific memory addresses, preventing malicious tampering with code or data.

(3) Fair allocation of memory. With virtual memory, every process effectively gets a virtual address space of the same size, regardless of how much physical memory is available.

(4) When processes need to communicate, they can do so by sharing virtual memory mappings.

However, virtual memory also has costs, mainly in the following respects:
(1) Managing virtual memory requires many data structures, which occupy additional memory.
(2) Translating virtual addresses to physical addresses increases instruction execution time.
(3) Swapping pages in and out requires disk I/O, which is time-consuming.
(4) If only part of a page holds useful data, memory is wasted.

What is memory fragmentation? What is internal fragmentation? What is external fragmentation?

Memory fragmentation arises after many allocations and frees. Memory ends up laid out as alternating used segments and free gaps, and a free gap may be too small to satisfy a request: for example, a gap of size 5 wedged between used segments cannot serve a request of size 6. Many such gaps accumulate and reduce memory utilization; these small unusable gaps are called fragments.

Internal fragmentation: part of the storage space allocated to a program goes unused by that program, yet no other program can use it either. An internal fragment is unused space inside an allocated region or page; the system cannot reclaim it until the process occupying the region or page releases it or terminates.

External fragmentation: free storage that is too small to be allocated to any program (and therefore belongs to no process). An external fragment is a free block lying outside every allocated region or page. Taken together, these blocks may satisfy the length of the current request, but because their addresses are not contiguous (or for other reasons), the system cannot use them to fulfill it.

Internal and external fragmentation are a pair of opposing concerns: it is difficult for any single memory-allocation algorithm to solve both, so trade-offs must be made according to the characteristics of the application.
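Both kinds of fragmentation can be shown numerically. The block size, hole sizes, and request sizes below are made up for illustration: fixed-size blocks waste space inside an allocation (internal), while free holes that are individually too small waste space between allocations (external).

```python
# Internal fragmentation: fixed 8-byte blocks; a 5-byte request still
# occupies a whole block, wasting the remainder inside it.
BLOCK = 8
request = 5
blocks_used = -(-request // BLOCK)           # ceiling division
internal_waste = blocks_used * BLOCK - request
print(internal_waste)                         # -> 3 (bytes wasted inside the block)

# External fragmentation: free holes of sizes 5 and 4 total 9 bytes,
# yet a 6-byte request fails because no single hole is large enough.
free_holes = [5, 4]
request = 6
can_satisfy = any(hole >= request for hole in free_holes)
print(sum(free_holes) >= request, can_satisfy)   # -> True False
```

The `True False` pair is exactly the external-fragmentation trade-off: total free space suffices, contiguous free space does not.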

What is the difference between a virtual address, a logical address, a linear address, and a physical address?

A virtual address is an address generated by a program; it consists of a segment selector and an offset within the selected segment. These two parts do not access physical memory directly; they reach the corresponding physical memory address only after the segmented address is translated.

A logical address refers to the in-segment offset generated by the program. The logical address is sometimes treated as synonymous with the virtual address; there is no sharp boundary between the two.

A linear address is the intermediate layer between virtual-address and physical-address translation: an address in the processor-addressable memory space known as the linear address space. Program code generates a logical address, that is, an offset within a segment; adding the corresponding segment base produces the linear address. If the paging mechanism is enabled, the linear address is translated again to produce the physical address; if paging is not used, the linear address is the physical address.

The physical address is the address signal driven on the CPU's external address bus to address physical memory, and it is the final result of address translation.

How virtual addresses are converted to physical addresses is architecture-dependent; the two mechanisms are segmentation and paging. x86 CPUs, for example, support segmentation combined with paging. The MMU (Memory Management Unit) is responsible for the conversion. A logical address takes the form segment identifier + in-segment offset, and the MMU converts it to a linear address by consulting the segment table. If the CPU has not enabled paging, the linear address is the physical address; if paging is enabled, the MMU also consults the page table to convert the linear address into the physical address. In short: logical address -(segment table)-> linear address -(page table)-> physical address.
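The two-step chain can be sketched with arithmetic alone. All table contents below are invented; real x86 descriptors and multi-level page tables are more elaborate, but the shape of the calculation (base + offset, then frame lookup) is the same.

```python
# Toy two-step translation: logical (selector, offset) -> linear -> physical.
segment_table = {0x08: 0x1000}          # selector -> segment base (made up)
page_table    = {1: 9}                  # page number -> frame number (made up)
PAGE_SIZE     = 4096

def logical_to_linear(selector, offset):
    """Segmentation step: segment base + in-segment offset."""
    return segment_table[selector] + offset

def linear_to_physical(linear):
    """Paging step: look up the frame, keep the in-page offset."""
    page, off = divmod(linear, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + off

linear   = logical_to_linear(0x08, 0x234)    # 0x1000 + 0x234 = 0x1234
physical = linear_to_physical(linear)        # page 1, offset 0x234 -> frame 9
print(hex(linear), hex(physical))            # -> 0x1234 0x9234
```

Note that if paging were disabled, the function chain would simply stop after `logical_to_linear`, and `0x1234` would itself be the physical address.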

The mapping is many-to-one: different logical addresses can map to the same linear address, and different linear addresses can map to the same physical address. Moreover, after a page is swapped out and back in, the same linear address may be mapped to a different physical address, so this many-to-one mapping changes over time.

What are the cache replacement algorithms?

Data can be held close to the CPU or in main memory. The CPU processes quickly but its nearby storage is small; memory has large capacity but is slow to feed data to the CPU. The cache is the compromise: the data most likely to be needed is first transferred from memory into the cache, and the CPU then reads it from the cache much faster. However, not everything stored in the cache turns out to be useful; a CPU read that finds the needed data in the cache is called a "hit."

Cache replacement algorithms include the random algorithm, the FIFO algorithm, the LRU algorithm, the LFU algorithm, and the OPT algorithm.

(1) Random (RAND) algorithm. The random algorithm uses a random-number generator to pick the block to replace. It is simple and easy to implement, but it considers neither the past, present, nor future usage of cache blocks. Because it uses no "historical information" and does not exploit the principle of locality, it does not improve the cache hit rate, which remains low.

(2) First-in, first-out (FIFO) algorithm. The FIFO algorithm replaces the block of information that entered the cache earliest, deciding the eviction order purely by arrival time. It needs no record of how blocks are used, so it is relatively easy to implement with low system overhead. Its drawback is that frequently used code (such as a loop body) may be evicted simply because it entered the cache first: the earliest-loaded information may well be needed later, or even constantly. Because FIFO does not correctly reflect the principle of program locality, its hit rate is not high, and it can even exhibit anomalous behavior (Belady's anomaly).
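A minimal FIFO simulation makes the loop-eviction weakness visible. The reference string below is invented: block 1 plays the role of a loop body that keeps being referenced, yet FIFO evicts it first because it is oldest.

```python
from collections import deque

def fifo_hits(refs, capacity):
    """Simulate FIFO replacement; return (hit count, final cache contents)."""
    cache, order = set(), deque()
    hits = 0
    for block in refs:
        if block in cache:
            hits += 1                            # already cached: a hit
        else:
            if len(cache) == capacity:
                cache.discard(order.popleft())   # evict the oldest entry
            cache.add(block)
            order.append(block)
    return hits, list(order)

# Block 1 recurs like a loop body, but FIFO evicts it as the oldest block.
hits, final = fifo_hits([1, 2, 3, 1, 4, 1, 5], capacity=3)
print(hits, final)   # -> 1 [4, 1, 5]
```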

(3) Least recently used (LRU) algorithm. The LRU algorithm replaces the block in the cache that has gone unused for the longest time. It performs better than FIFO, though it cannot guarantee that a block rarely used in the past will remain rarely used in the future.
LRU tracks the usage of each block and always chooses the least recently used block for replacement. This reflects the locality of programs fairly well, but the replacement logic must record the usage of cache blocks continuously in order to determine which block is least recently used. LRU is therefore reasonable but comparatively complex to implement, with high system overhead: typically each block needs a dedicated hardware or software module, such as a counter, to record how it is used.
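In software, LRU is often sketched with an ordering structure rather than per-block counters; the example below uses Python's `OrderedDict` on the same invented reference string used informally above, so the recency bookkeeping is explicit.

```python
from collections import OrderedDict

def lru_hits(refs, capacity):
    """Simulate LRU replacement; return (hit count, contents oldest-first)."""
    cache = OrderedDict()
    hits = 0
    for block in refs:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            if len(cache) == capacity:
                cache.popitem(last=False)  # evict the least recently used
            cache[block] = True
    return hits, list(cache)

hits, final = lru_hits([1, 2, 3, 1, 4, 1, 5], capacity=3)
print(hits, final)   # -> 2 [4, 1, 5]
```

Because block 1 is promoted on each hit, LRU keeps it resident and scores more hits on this string than FIFO would.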

There are many ways to implement the LRU policy. Commonly cited implementations are the counter method, the register-stack method, and hardware comparison logic; the counter method is described below.

Counter method: each block of the cache gets a counter, operated according to the following rules:

    • When a block is loaded or replaced, its counter is cleared to 0 while every other counter is incremented by 1.
    • On a hit, every block's count is compared with the hit block's count: blocks whose count is lower than the hit block's are incremented by 1, blocks whose count is higher are left unchanged, and finally the hit block's counter is cleared to 0.
    • When a replacement is needed, the block with the largest count value is chosen for replacement.
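The three rules above can be sketched directly. The reference string is invented; the invariant worth checking is that the block with the largest counter really is the least recently used one.

```python
def counter_lru(refs, capacity):
    """LRU via per-block counters: the largest counter marks the LRU block."""
    counters = {}                       # block -> age counter
    hits = 0
    for block in refs:
        if block in counters:           # rule 2: a hit
            hits += 1
            hit_count = counters[block]
            for b in counters:
                if counters[b] < hit_count:
                    counters[b] += 1    # younger blocks age by one
            counters[block] = 0         # hit block becomes youngest
        else:                           # rules 1 and 3: a miss
            if len(counters) == capacity:
                victim = max(counters, key=counters.get)   # largest counter
                del counters[victim]
            for b in counters:
                counters[b] += 1
            counters[block] = 0         # loaded block starts at 0
    return hits, counters

hits, counters = counter_lru([1, 2, 3, 1, 4], capacity=3)
print(hits, counters)   # -> 1 {1: 1, 3: 2, 4: 0}
```

After the run, block 2 (referenced longest ago) was the one evicted, matching what a list-based LRU would choose.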

(4) Optimal replacement (OPT) algorithm. To use the optimal replacement algorithm, a program must first be executed once so that its cache reference behavior can be recorded. With such prior information, replacements during the second execution can be made in the most effective order, achieving the optimum.

The replacement algorithms above rely mainly on the historical information of page scheduling in main memory, assuming that future page scheduling will resemble the recent past; obviously, that assumption does not always hold. The best strategy is to choose as the victim the page that will go unreferenced for the longest time in the future. That strategy necessarily yields the highest hit rate, and it is the optimal replacement algorithm.

The only way to realize the OPT algorithm is to run the program once first and record the actual stream of page addresses; from that stream, the page to replace at each point can be determined. Clearly this is unrealistic in practice, so OPT is an idealized algorithm. It is nonetheless useful: it frequently serves as the yardstick for judging other page replacement algorithms. Other conditions being equal, the closer a replacement algorithm comes to OPT, the better it is.
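Since the reference string is fully known in a simulation, OPT (Belady's algorithm) is easy to sketch: on each miss, evict the resident block whose next use lies farthest in the future. The string below is invented for illustration.

```python
def opt_hits(refs, capacity):
    """Belady's OPT: evict the block whose next use lies farthest ahead."""
    cache, hits = set(), 0
    for i, block in enumerate(refs):
        if block in cache:
            hits += 1
            continue
        if len(cache) == capacity:
            future = refs[i + 1:]
            # Next-use distance; blocks never referenced again sort last.
            victim = max(cache, key=lambda b: future.index(b)
                         if b in future else len(future) + 1)
            cache.discard(victim)
        cache.add(block)
    return hits

refs = [1, 2, 3, 1, 4, 1, 5]
print(opt_hits(refs, 3))   # -> 2
```

No online policy can score more hits than this on the same string, which is exactly why OPT serves as the benchmark described above.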

(5) Least frequently used (LFU) algorithm. LFU selects the least-referenced page as the victim. This is a plausible policy, because the page used least so far is likely to be the page referenced least in the future. The algorithm makes full use of the historical information of page scheduling in main memory and reflects the locality of programs, but it is difficult to implement: each page needs a fairly wide counter, driven by a fixed clock, and when a victim must be selected, the page whose counter holds the smallest value is chosen from among all the counters.
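A minimal LFU sketch, with an invented reference string and a simplifying assumption that a block's count is discarded when it is evicted (real designs vary on this and on tie-breaking):

```python
from collections import Counter

def lfu_hits(refs, capacity):
    """LFU: on a miss with a full cache, evict the block with the
    smallest reference count (ties broken by iteration order here)."""
    cache, counts = set(), Counter()
    hits = 0
    for block in refs:
        if block in cache:
            hits += 1
        else:
            if len(cache) == capacity:
                victim = min(cache, key=lambda b: counts[b])  # smallest count
                cache.discard(victim)
                del counts[victim]   # assumption: counter resets on eviction
            cache.add(block)
        counts[block] += 1
    return hits

print(lfu_hits([1, 1, 2, 3, 1, 4], capacity=3))   # -> 2
```

Here block 1, referenced three times, survives the eviction at the end, while one of the once-referenced blocks is chosen as the victim.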
