Principles of Linux memory management: an in-depth look at segmentation and paging


Some time ago I read "Understanding the Linux Kernel" and spent a lot of time on the memory management chapters, but many questions were still not clear to me. I have recently taken some time to review the material, and this post records my own understanding of, and thoughts about, memory management in Linux.

I prefer to understand how a technology itself developed: in short, how it came about, what technologies existed before it, what the characteristics of those technologies were, why they were replaced, and which of their problems the current technology solves. Once these points are clear, we can grasp a technology much more firmly. Some materials introduce a concept by going straight to its meaning and principles, leaving out the development process and the reasoning behind it, as if the technology had fallen from the sky. So in that spirit, I will use the development of memory management to lead into today's topic.

First, let me be clear that the topic of this article is segmentation and paging in Linux memory management.

Let's look at history first. In the early days of computing, programs ran directly on physical memory; in other words, the addresses a program accessed while running were physical addresses. If the system ran only a single program, then as long as that program's memory needs did not exceed the machine's physical memory there was no problem, and we did not need to bother with memory management at all: you have one program and this much memory, and whether it is enough is your own affair. But modern systems support multitasking and multiprocessing so that the CPU and other hardware are used more efficiently, and at that point we have to consider how the system's limited physical memory can be allocated to multiple programs in a timely and efficient way. That is exactly what memory management is.

Here is an example of memory allocation in an early computer system, to make this easier to understand.

Suppose we have three programs A, B, and C. Program A needs 10MB of memory to run, program B needs 100MB, and program C needs 20MB. If the system needs to run A and B at the same time, early memory management worked roughly like this: the first 10MB of physical memory was allocated to A, and the next region, 10MB-110MB, was allocated to B. This way of managing memory is quite straightforward. Now suppose we also want program C to run, and assume our machine has only 128MB of memory; clearly C cannot run, because there is not enough memory left for it. You may know that virtual memory technology can swap data that is not currently needed out to disk space, thereby extending the usable memory, but let's first look at some obvious problems with this style of memory management. As mentioned at the start of the article, understanding how a technology developed is very important for grasping it.

1. Process address space cannot be isolated

Because programs access physical memory directly, the memory spaces used by programs are not isolated from one another. For example, as mentioned above, A's address space is the range 0-10MB, but if some code in A writes to data in the 10MB-128MB range, then program B and program C may well crash (every program can access the system's entire address space). Malicious programs or Trojans can easily wreck other programs this way, the security of the system cannot be guaranteed, and that is intolerable for users.

2. Low efficiency of memory usage

As mentioned above, if we want to run programs A, B, and C at the same time, the only option is to use virtual memory to write data that some programs are temporarily not using out to disk, and read it back from disk when it is needed. For program C to run, swapping A out to disk is clearly not enough, because a program needs a contiguous address space: C needs 20MB and A occupies only 10MB, so program B has to be swapped out instead, and B is a full 100MB. So in order to run program C we have to write 100MB of data from memory to disk, and then read it back from disk into memory when B needs to run again. We know that I/O operations are quite time-consuming, so this process is very inefficient.

3. The address at which a program runs cannot be determined in advance

Every time a program runs, it needs a large enough free region of memory to be allocated to it. The problem is that the location of this free region is not fixed, which brings relocation problems: relocation means fixing up the addresses of the variables and functions that the program references. Readers who are not familiar with this can look at material on compilation and linking.

Memory management is essentially about solving the three problems above: how to isolate the address spaces of processes, how to improve the efficiency of memory usage, and how to solve the relocation problem when a program runs.

Here is a well-known saying from the computer world: "Any problem in a computer system can be solved by introducing an intermediate layer."

The current approach to memory management is to introduce the concept of virtual memory between the program and physical memory. Virtual memory sits between the program and physical memory: the program sees only virtual memory and no longer accesses physical memory directly. Each program has its own independent process address space, and in this way process isolation is achieved. The process address space here refers to virtual addresses; as the name suggests, virtual addresses are not addresses that physically exist.

Now that we have added a layer of virtual addresses between the program and the physical address space, we need to work out how to map virtual addresses to physical addresses, because the program must ultimately run in physical memory. Two techniques are used for this: segmentation and paging.

Segmentation: this is the approach people used first. The basic idea is to map a virtual address space as large as the memory the program needs onto a region of the physical address space.

(Figure: the segment mapping mechanism)

Each program has its own independent virtual process address space, and you can see that the virtual address spaces of both program A and program B start at 0x00000000. These two identical virtual address spaces are each mapped one-to-one onto a region of the actual physical address space, i.e. each byte of a virtual address space corresponds to a byte of the physical address space. The mapping is set up by software, and the actual translation is performed by hardware.

This segmentation mechanism solves two of the three problems listed at the beginning of the article: process address space isolation and program address relocation. Program A and program B each have their own virtual address space, and these virtual address spaces are mapped onto physical address spaces that do not overlap; if program A accesses a virtual address outside the range 0x00000000-0x00A00000, the kernel rejects the request, so the address space isolation problem is solved. Our application A only needs to care about its own virtual address space 0x00000000-0x00A00000, and we do not care which physical addresses it is mapped to, so the program can always lay out its variables and code according to this virtual address space and never needs to be relocated. (A small sketch of this idea follows.)
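To make the idea concrete, here is a tiny user-space sketch of segment-style translation with the bounds check described above. The base, limit, and addresses are made-up values for illustration only; this is just the add-the-base-and-check-the-limit idea, not how any real kernel implements it.

```c
/* Minimal sketch of segment-style translation (illustrative only). */
#include <stdint.h>
#include <stdio.h>

struct segment {
    uint32_t base;   /* where the segment starts in physical memory */
    uint32_t limit;  /* size of the segment in bytes */
};

/* Translate a virtual address into a physical one, rejecting addresses
 * that fall outside the program's segment (this is the isolation check). */
static int translate(const struct segment *seg, uint32_t vaddr, uint32_t *paddr)
{
    if (vaddr >= seg->limit)
        return -1;                 /* out of range: the access is refused */
    *paddr = seg->base + vaddr;    /* in range: add the segment base */
    return 0;
}

int main(void)
{
    /* Program A: virtual 0x00000000-0x00A00000 mapped at physical 0x00100000 */
    struct segment a = { .base = 0x00100000, .limit = 0x00A00000 };
    uint32_t paddr;

    if (translate(&a, 0x00001234, &paddr) == 0)
        printf("virtual 0x00001234 -> physical 0x%08X\n", paddr);
    if (translate(&a, 0x00B00000, &paddr) != 0)
        printf("virtual 0x00B00000 rejected: outside A's segment\n");
    return 0;
}
```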

In any case, the segmentation mechanism solves the two problems above and is a big step forward, but it can still do nothing about the memory efficiency problem. This mapping mechanism still works at the granularity of a whole program, and when memory runs short, the entire program still has to be swapped out to disk, so memory is still used very inefficiently. So what would count as efficient memory usage? In fact, by the principle of locality, during any given period of time a running program only uses a small part of its data frequently. So we need a finer-grained way of dividing and mapping memory (this is also where Linux's buddy algorithm and slab allocator will come in later). At this point another method of translating virtual addresses into physical addresses appears: the paging mechanism.

Paging mechanism:

The paging mechanism divides the memory address space into many small pages of fixed size, where the page size is determined by the hardware (and the operating system's choice among the sizes the hardware supports), much as the ext file systems in Linux divide a disk into blocks; in both cases the goal is to improve the utilization of memory and disk. Imagine dividing the disk into N equal parts, each part (a block) being 1MB in size: if the file I want to store on disk is only 1KB, the rest of that block is wasted. So a finer-grained division of the disk is needed, and we can make the block size smaller, which of course has to be weighed against the sizes of the files being stored. This seems a bit off topic; I just want to say that the paging mechanism in memory is very similar to the block mechanism of the ext file system on disk.

In Linux the usual page size is 4KB. We divide a process's address space into pages, load the frequently used data and code pages into memory, and keep the rarely used code and data on disk. Let's again use an example, shown in the figure below:
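As a small illustration of what a 4KB page size means for an address, the sketch below (plain C, with an arbitrary example address) splits a virtual address into a virtual page number and an offset within that page.

```c
/* Split a virtual address into (virtual page number, offset) for 4KB pages. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                          /* 4KB = 2^12 bytes */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

int main(void)
{
    uint32_t vaddr  = 0x00403ABC;              /* an arbitrary example address */
    uint32_t vpn    = vaddr >> PAGE_SHIFT;     /* which virtual page */
    uint32_t offset = vaddr & (PAGE_SIZE - 1); /* where inside that page */

    printf("address 0x%08X -> page %u, offset 0x%03X\n", vaddr, vpn, offset);
    return 0;
}
```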

(Figure: page mapping relationships between the processes' virtual address spaces, the physical address space, and the disk)

We can see that the virtual address spaces of process 1 and process 2 are mapped onto discontinuous physical address space (this is very significant: if one day we do not have enough contiguous physical address space but do have plenty of discontinuous space, then without this technique our programs could not run). They can even share a portion of the physical address space, which is shared memory; a small sketch of this follows.
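To make the "sharing a portion of the physical address space" point concrete, here is a minimal POSIX sketch (my own example, not from the original article): an anonymous shared mapping created before fork() is backed by the same physical pages in parent and child, so a write in one process is visible in the other.

```c
/* Minimal sketch of shared memory: the same physical pages are mapped into
 * two processes' virtual address spaces, so both see the same data. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One shared, anonymous page: parent and child map the same page frame. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED)
        return 1;

    if (fork() == 0) {                         /* child writes ... */
        strcpy(shared, "hello from the child");
        return 0;
    }
    wait(NULL);                                /* ... parent sees the write */
    printf("parent reads: %s\n", shared);
    return 0;
}
```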

Virtual pages VP2 and VP3 of process 1 have been swapped out to disk. When the program needs those two pages, a page-fault exception is raised and the kernel's exception handler reads them back into memory.

This is the principle of the paging mechanism. Of course, the implementation of paging in Linux is more complex: it is realized through several levels, the page global directory, page upper directory, page middle directory, and page tables, but the basic working principle does not change.

The paging mechanism requires hardware support. The hardware is called the MMU (Memory Management Unit), and it is responsible for translating virtual addresses into physical addresses, i.e. for finding the physical page that corresponds to a virtual page.




This post draws on embedded-systems training videos and on http://www.cnblogs.com/image-eye/archive/2011/07/13/2105765.html; thanks to the authors.

I. Types of addresses

Physical address: the address the CPU places on the address bus to select and access a real location in physical memory.

Logical address: an address as it appears in the compiled program code, i.e. in the assembly.

Linear address (virtual address): under a 32-bit CPU architecture it can represent a 4GB address space; in hexadecimal, 0x00000000 to 0xFFFFFFFF.

The relationship between them:


II. Segment management and page management

2.1 Segment management

2.1.1 Segment management (16-bit CPU)

A 16-bit CPU has 20 address lines, so its addressing range is 2^20, i.e. 1MB of memory. However, the registers that hold addresses on a 16-bit CPU (IP, SP, etc.) are only 16 bits wide, so by themselves they can only access 64KB of memory.

How, then, can 16-bit address registers be used to access 1MB of memory?

To be able to access 1MB of memory, the CPU uses segmented memory management and adds segment registers inside the CPU. The 16-bit CPU divides the 1MB memory space into several logical segments, each of which must satisfy the following requirements:

1. The starting address of a logical segment (the segment address) must be a multiple of 16, i.e. its last 4 binary digits must all be 0.

2. The maximum size of a logical segment is 64KB. (Why? Because the CPU's address registers are only 16 bits wide.)

How physical addresses are formed:

Because the segment address must be a multiple of 16, its value always has the form XXXX0h: the upper 16 bits vary while the last 4 binary digits are fixed at 0. Given this property of segment addresses, only the upper 16 bits need to be stored to record the entire segment base address, so every time a segment register is used its value is multiplied by 16 to recover the actual segment address.

Logical address = segment base address : offset

Here the segment base address is held in a segment register, and the offset is held in another register.

Linear address = segment base address × 16 + offset
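A tiny sketch of this real-mode calculation; the segment and offset values are arbitrary examples.

```c
/* Real-mode (16-bit CPU) address formation: the 16-bit segment value is
 * shifted left by 4 (multiplied by 16) and the 16-bit offset is added,
 * producing a 20-bit physical address. */
#include <stdint.h>
#include <stdio.h>

static uint32_t real_mode_address(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;   /* segment * 16 + offset */
}

int main(void)
{
    /* e.g. the logical address 0x1234:0x0010 */
    printf("0x1234:0x0010 -> physical 0x%05X\n", real_mode_address(0x1234, 0x0010));
    return 0;
}
```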

2.1.2 Segment management (32-bit CPU)

A 32-bit CPU has two operating modes: real mode and protected mode.

1. Real mode: memory management works exactly as on a 16-bit CPU.

2. Protected mode (the normal x86 operating mode):

The segment base address can be up to 32 bits and each segment's maximum size is 4GB. The value in a segment register is no longer the segment base address itself but a "selector". The selector is used to fetch a 32-bit segment base address from a descriptor table in memory, and the address of a memory location is then:

Physical (linear) address = segment base address (base) + offset within the segment (offset)
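Below is a small illustrative sketch of this protected-mode translation: the segment register holds a selector, the selector indexes a descriptor table, and the descriptor supplies the 32-bit base and limit. The structure layout and the values are assumptions chosen for illustration, not the real GDT descriptor format.

```c
/* Sketch of selector -> descriptor -> base + offset translation. */
#include <stdint.h>
#include <stdio.h>

struct descriptor {
    uint32_t base;   /* 32-bit segment base address */
    uint32_t limit;  /* segment size in bytes (can be up to 4GB) */
};

static struct descriptor descriptor_table[8];   /* stand-in for the GDT */

static int translate(uint16_t selector, uint32_t offset, uint32_t *linear)
{
    const struct descriptor *d = &descriptor_table[selector >> 3]; /* index part */
    if (offset >= d->limit)
        return -1;                /* outside the segment: fault */
    *linear = d->base + offset;   /* base + offset gives the linear address */
    return 0;
}

int main(void)
{
    descriptor_table[1] = (struct descriptor){ .base = 0x10000000, .limit = 0x00100000 };
    uint32_t linear;
    if (translate(1 << 3, 0x1234, &linear) == 0)   /* selector with index 1 */
        printf("offset 0x1234 -> linear 0x%08X\n", linear);
    return 0;
}
```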


2.2 Page management (paging)

Concept

1. Linear address pages: for reasons of management and efficiency, linear addresses are divided into fixed-length groups called pages. For example, on a 32-bit machine the linear address space can be up to 4GB; with a 4KB page size, the linear address space is divided into 2^20 pages.

2. Physical pages: the other kind of "page", called a physical page or page frame. The paging unit divides all of physical memory into fixed-length management units, whose length is normally the same as that of a linear-address page.

How are the two mapped onto each other? The mapping is implemented by page management.



The specific process of page management:


Description

1. In the paging unit, the address of the page directory is held in the CPU's CR3 register; this is the starting point of address translation.

2. Each process has its own virtual address space. To run a process, the address of its page directory is first loaded into CR3, and the value belonging to the previous process is saved.

3. Each 32-bit linear address is divided into three parts: page directory index (10 bits) : page table index (10 bits) : offset (12 bits).

The steps of address translation are as follows (a small C sketch follows these steps):

Step 1: load the page directory address of the current process (the operating system puts this address into CR3 when it schedules the process).

Step 2: using the top 10 bits of the linear address as an index into the page directory, find the corresponding entry, which is the address of a page table.

Step 3: using the middle 10 bits of the linear address as an index into that page table, find the corresponding entry, which is the starting address of a page.

Step 4: add the low 12 bits of the linear address to the page's starting address to obtain the physical address.
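Here is the promised sketch of the two-level walk, simulated in user space: the directory and the table are ordinary arrays whose entries hold only addresses. Real page-directory and page-table entries also carry present, dirty, and permission bits, which are left out here.

```c
/* Minimal sketch of the i386-style two-level walk described above
 * (10-bit directory index, 10-bit table index, 12-bit offset). */
#include <stdint.h>
#include <stdio.h>

#define PTRS_PER_TABLE 1024          /* 2^10 entries per directory/table */

static uint32_t *page_directory[PTRS_PER_TABLE];  /* CR3 would point here */
static uint32_t  page_table_0[PTRS_PER_TABLE];    /* one page table */

static uint32_t translate(uint32_t linear)
{
    uint32_t dir_index   = (linear >> 22) & 0x3FF;   /* top 10 bits */
    uint32_t table_index = (linear >> 12) & 0x3FF;   /* middle 10 bits */
    uint32_t offset      =  linear        & 0xFFF;   /* low 12 bits */

    uint32_t *page_table = page_directory[dir_index];  /* step 2 */
    uint32_t  page_base  = page_table[table_index];    /* step 3 */
    return page_base + offset;                          /* step 4 */
}

int main(void)
{
    /* Pretend the second linear page maps to the physical page at 0x00ABC000. */
    page_directory[0] = page_table_0;
    page_table_0[1]   = 0x00ABC000;

    printf("linear 0x00001234 -> physical 0x%08X\n", translate(0x00001234));
    return 0;
}
```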

1. This two-level scheme can address a 4GB physical address space. Why?

Because the page directory can address 2^10 page tables, each page table can address 2^10 pages, and each page holds 2^12 = 4KB, so 2^10 × 2^10 × 2^12 bytes = 4GB can be addressed.

2. How large are the pages in the figure above, and what determines the size?

In the two-level scheme, the page size is determined by linear address bits [11:0], so the page size is 2^12 = 4KB.

Combining the segment management and page management described above, we obtain the figure below.


The figure comes from the blog at http://www.cnblogs.com/image-eye/archive/2011/07/13/2105765.html.

III. Linux memory management

The Linux kernel does not fully adopt the segmentation mechanism provided by Intel; it uses it only to a limited degree. This not only simplifies the design of the Linux kernel, but also makes it easier to port Linux to other platforms, because many RISC processors do not support segmentation.

Why does the Linux kernel make only limited use of the segmentation mechanism?

Because in Linux's memory management the base address of every segment is 0, each segment's logical addresses coincide with linear addresses (i.e. the offset part of the logical address and the linear address have the same value), and the real work of memory management is done by the paging mechanism.

The two-level page management architecture of the i386 was described earlier; some CPUs use three- or four-level architectures. The Linux 2.6.29 kernel provides a unified interface for all CPUs: a four-level page management architecture that is compatible with processors whose hardware uses two-, three-, or four-level schemes. The four levels are described below (a sketch of the address split follows the list):


They are:

1. Page Global Directory (PGD): the abstract top level of the multilevel page table.

2. Page Upper Directory (PUD): the second level.

3. Page Middle Directory (PMD): the middle level of the page table.

4. Page Table Entry (PTE): the lowest level, the page table itself.

5. Page: the actual page, i.e. the specific physical address.
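As a rough illustration of how one virtual address is split across the four levels plus a page offset, here is a small sketch. The 9/9/9/9/12 bit split (a 48-bit virtual address) is an assumption chosen for illustration; the real widths depend on the architecture, and on two-level hardware the kernel simply folds the PUD and PMD levels away.

```c
/* Illustrative split of a virtual address into PGD/PUD/PMD/PTE indices
 * plus a page offset (assumed 9/9/9/9/12 split, not any specific CPU). */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PTE_SHIFT  (PAGE_SHIFT)          /* bits 12..20 index the page table */
#define PMD_SHIFT  (PTE_SHIFT + 9)       /* bits 21..29 index the PMD        */
#define PUD_SHIFT  (PMD_SHIFT + 9)       /* bits 30..38 index the PUD        */
#define PGD_SHIFT  (PUD_SHIFT + 9)       /* bits 39..47 index the PGD        */
#define INDEX_MASK 0x1FF                 /* 9-bit index */

int main(void)
{
    uint64_t vaddr = 0x00007f1234567890ULL;   /* an arbitrary example address */

    printf("PGD index: %llu\n", (unsigned long long)((vaddr >> PGD_SHIFT) & INDEX_MASK));
    printf("PUD index: %llu\n", (unsigned long long)((vaddr >> PUD_SHIFT) & INDEX_MASK));
    printf("PMD index: %llu\n", (unsigned long long)((vaddr >> PMD_SHIFT) & INDEX_MASK));
    printf("PTE index: %llu\n", (unsigned long long)((vaddr >> PTE_SHIFT) & INDEX_MASK));
    printf("offset   : %llu\n", (unsigned long long)(vaddr & ((1ULL << PAGE_SHIFT) - 1)));
    return 0;
}
```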



1. Virtual address, physical address, logical address, linear address

A virtual address is also called a linear address. Linux does not make real use of the segmentation mechanism, so the logical address and the virtual (linear) address are essentially the same concept (in user space; in kernel space, "logical address" refers to an address that differs from the physical address by the fixed linear offset described below). Physical addresses need no further explanation. Most of the kernel's virtual addresses differ from the corresponding physical addresses only by this linear offset. User-space virtual addresses are mapped to physical addresses through the multilevel page table, but they are still called linear addresses.

2. The ZONE_DMA / ZONE_NORMAL / ZONE_HIGHMEM division

On the x86 architecture, the Linux virtual address space is divided into 0~3GB for user space and 3~4GB for kernel space (note that the kernel can use only 1GB of linear addresses). The kernel virtual space (3GB~4GB) corresponds to three different types of zones:

ZONE_DMA       the first 16MB, starting at 3GB
ZONE_NORMAL    16MB ~ 896MB
ZONE_HIGHMEM   896MB ~ 1GB

Because the kernel's virtual and physical addresses differ only by an offset (physical address = logical address − 0xC0000000), if the whole 1GB of kernel space were used for this linear mapping, the kernel could clearly only access 1GB of physical memory, which is obviously unreasonable. HIGHMEM solves this problem: a region of kernel space is deliberately kept out of the fixed linear mapping and can be flexibly remapped on demand, so that physical memory beyond the directly mapped range can be accessed. The picture below is taken from the Internet.
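A small user-space sketch of that fixed offset relationship. PAGE_OFFSET is the kernel's name for the start of this linear mapping (0xC0000000 by default on 32-bit x86); the two helper functions below are made up for illustration and only mirror the formula in the text.

```c
/* Sketch of the "physical = logical - 0xC0000000" linear offset. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_OFFSET 0xC0000000UL    /* start of the kernel's linear mapping */

static uint32_t virt_to_phys_demo(uint32_t vaddr) { return vaddr - PAGE_OFFSET; }
static uint32_t phys_to_virt_demo(uint32_t paddr) { return paddr + PAGE_OFFSET; }

int main(void)
{
    uint32_t vaddr = 0xC1000000;    /* a kernel logical address */
    printf("virtual 0x%08X -> physical 0x%08X\n", vaddr, virt_to_phys_demo(vaddr));
    printf("physical 0x01000000 -> virtual 0x%08X\n", phys_to_virt_demo(0x01000000));
    return 0;
}
```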

The layout of high memory is shown in the figure below.

The kernel's directly mapped space runs from PAGE_OFFSET to VMALLOC_START; kmalloc and __get_free_page() allocate pages here. kmalloc goes through the slab allocator while __get_free_page() allocates physical pages directly, and in both cases the result is converted to a logical address (the memory is physically contiguous). They are suitable for allocating small pieces of memory. This region also contains resources such as the kernel image and the physical page frame array mem_map. (A small module sketch contrasting kmalloc and vmalloc follows this list.)

The kernel's dynamic mapping space runs from VMALLOC_START to VMALLOC_END and is used by vmalloc; it can map a large, virtually contiguous space.

The kernel's permanent mapping space runs from PKMAP_BASE to FIXADDR_START and is used by kmap().

The kernel's temporary (fixed) mapping space runs from FIXADDR_START to FIXADDR_TOP and is used by kmap_atomic().
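To contrast the first two regions, here is a minimal kernel-module sketch (assuming a standard module build environment): kmalloc allocates physically contiguous memory from the directly mapped region, while vmalloc allocates a larger, only virtually contiguous buffer from the dynamic mapping region. Error handling is kept to the bare minimum.

```c
/* Minimal module sketch contrasting kmalloc and vmalloc. */
#include <linux/module.h>
#include <linux/slab.h>      /* kmalloc, kfree */
#include <linux/vmalloc.h>   /* vmalloc, vfree */

static void *small_buf;
static void *big_buf;

static int __init alloc_demo_init(void)
{
    small_buf = kmalloc(4096, GFP_KERNEL);     /* small, physically contiguous */
    big_buf   = vmalloc(4 * 1024 * 1024);      /* large, only virtually contiguous */

    if (!small_buf || !big_buf) {
        kfree(small_buf);
        vfree(big_buf);
        return -ENOMEM;
    }
    pr_info("kmalloc buffer at %p, vmalloc buffer at %p\n", small_buf, big_buf);
    return 0;
}

static void __exit alloc_demo_exit(void)
{
    kfree(small_buf);
    vfree(big_buf);
}

module_init(alloc_demo_init);
module_exit(alloc_demo_exit);
MODULE_LICENSE("GPL");
```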

3. The buddy algorithm and the slab allocator

The buddy algorithm solves the problem of external fragmentation. The kernel manages the free pages in each zone by organizing them into lists of blocks whose sizes are powers of 2 (orders), stored in the free_area array.

The concrete buddy management described here is based on bitmaps; its algorithm for allocating and reclaiming pages is described below.

The buddy algorithm illustrated with an example:

Suppose the system's memory has only 16 pages of RAM. Because there are only 16 pages, we only need four orders of buddy bitmaps (since the largest contiguous block is 16 pages), as shown in the figure below.

The order(0) bitmap has 8 bits (page up to 16
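Although the bitmap example above is cut off, the core pairing rule of the buddy algorithm can still be sketched: a free block of 2^order pages starting at page index has its buddy at index XOR 2^order, and when both are free they merge into a block of the next order. The toy code below (my own illustration, not kernel code) shows only that pairing and merging logic on the 16-page example.

```c
/* Toy sketch of buddy pairing and merging on a 16-page memory. */
#include <stdio.h>
#include <stdbool.h>

#define MAX_ORDER 4                 /* 16 pages -> orders 0..4 */
#define NPAGES    16

static bool free_block[MAX_ORDER + 1][NPAGES];   /* free_block[order][index] */

/* Free a block and keep merging it with its buddy while the buddy is free. */
static void free_pages_demo(unsigned index, unsigned order)
{
    while (order < MAX_ORDER) {
        unsigned buddy = index ^ (1u << order);   /* the buddy of this block */
        if (!free_block[order][buddy])
            break;                                /* buddy busy: stop merging */
        free_block[order][buddy] = false;         /* take the buddy off its list */
        index &= ~(1u << order);                  /* merged block starts lower */
        order++;
    }
    free_block[order][index] = true;
    printf("block now free at order %u, index %u\n", order, index);
}

int main(void)
{
    free_pages_demo(2, 1);   /* free pages 2-3 */
    free_pages_demo(0, 1);   /* free pages 0-1: merges into an order-2 block */
    return 0;
}
```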
