Linux Memory Management


Download Address: http://www.kerneltravel.net/journal/v/mem.htm


Abstract: This chapter first examines Linux process memory management from the perspective of an application developer, and then moves into the kernel to discuss physical memory management and the kernel's own use of memory. The aim is to work from the outside in, leading the reader naturally through the analysis of how Linux manages and uses memory. The chapter closes with a memory-mapping example that ties kernel memory management to user memory management, in the hope that by the end the reader can truly harness Linux memory management.

Preface

Memory management has always been a centerpiece of operating system books, and a large amount of material on it exists both in print and online. We therefore deliberately take a different tack here and avoid retreading the theory, which would only invite ridicule. What we want to do, and can reasonably hope to do, is discuss memory management from a developer's point of view, with the ultimate goal of sharing our experience of using memory in kernel development together with the necessary knowledge of Linux memory management.

Of course, we will also touch on some basic theory of memory management, such as segmentation and paging, but our purpose is not to dwell on the theory; it is to guide an understanding of development practice. So we will go only as deep as the discussion requires and no further.

Following the principle that "theory comes from practice", we will not dive into the kernel in the first minute to see how the system manages memory; that tends to trap you in endless detail (I made this mistake myself years ago). The better way is to start from the outside, at the level of user programming, and observe how a process uses memory. Once you have a more intuitive picture of memory use, you can move into the kernel to learn how memory is actually managed. Finally, a programmed example helps digest what has been said.

Processes and Memory

How does a process use memory?

There is no doubt that every process (executing program) must occupy a certain amount of memory, whether to store program code loaded from disk, to store data coming from user input, or for other purposes. However, a process manages this memory differently depending on its use: some memory is statically allocated and reclaimed all at once, while other memory is dynamically allocated and reclaimed as needed.

Any ordinary process involves five different data segments. Anyone with a little programming knowledge can name several of them: the "program code segment", the "program data segment", the "program stack segment", and so on. Those are indeed among them, but in addition to those, a process contains two other kinds of data segments. Here is a quick summary of the five different data areas included in a process's memory space.

Code segment: the code segment holds the executable file's operating instructions; that is, it is the in-memory image of the executable. The code segment must be protected from illegal modification at run time, so only read operations are allowed; it is not writable.

Data segment: the data segment holds the initialized global variables of the executable file; in other words, it stores the program's statically allocated variables and global variables [1].

BSS segment [2]: the BSS segment contains the program's uninitialized global variables; in memory, the BSS segment is zero-filled.

Heap: the heap stores the memory segments a running process allocates dynamically; its size is not fixed and it can grow or shrink dynamically. When a process calls a function such as malloc to allocate memory, the newly allocated memory is dynamically added to the heap (the heap expands); when memory is released with a function such as free, the freed memory is removed from the heap (the heap shrinks).

Stack: the stack holds the local variables temporarily created by the program, that is, the variables defined inside a function's braces "{}" (but not variables declared static; static means the variable is stored in the data segment). In addition, when a function is called, its arguments are pushed onto the stack of the calling process, and when the call finishes, the function's return value is stored back through the stack. Because of the stack's last-in, first-out behavior, it is particularly handy for saving and restoring call contexts. In this sense, we can think of the stack as a memory region where temporary data is registered and exchanged.

How does a process organize these areas?

The data segment, BSS, and heap are usually stored contiguously, occupying consecutive memory, while the code segment and stack tend to be stored independently. Interestingly, the heap and the stack have a rather "ambiguous" relationship: one grows downward and the other grows upward (on the i386 architecture, the stack grows down and the heap grows up), facing each other. But you need not worry that they will ever meet, because there is a large gap between them (you can calculate it from the example below), so the chance of their running into each other is very small.

The following figure briefly describes the distribution of the process memory areas:

"Facts speak louder than words." Let us use a small example (its prototype is taken from User-Level Memory Management) to show the differences and locations of the various memory areas discussed above.

#include <stdio.h>
#include <malloc.h>
#include <unistd.h>

int bss_var;
int data_var0 = 1;

int main(int argc, char **argv)
{
    printf("below are addresses of types of process's mem\n");
    printf("Text location:\n");
    printf("\taddress of main (Code Segment): %p\n", main);
    printf("______________\n");

    int stack_var0 = 2;
    printf("Stack location:\n");
    printf("\tinitial end of stack: %p\n", &stack_var0);
    int stack_var1 = 3;
    printf("\tnew end of stack: %p\n", &stack_var1);
    printf("______________\n");

    printf("Data location:\n");
    printf("\taddress of data_var (Data Segment): %p\n", &data_var0);
    static int data_var1 = 4;
    printf("\tnew end of data_var (Data Segment): %p\n", &data_var1);
    printf("______________\n");

    printf("BSS location:\n");
    printf("\taddress of bss_var: %p\n", &bss_var);
    printf("______________\n");

    char *b = sbrk((ptrdiff_t)0);   /* current end of the heap */
    printf("Heap location:\n");
    printf("\tinitial end of heap: %p\n", b);
    brk(b + 4);                     /* grow the heap by four bytes */
    b = sbrk((ptrdiff_t)0);
    printf("\tnew end of heap: %p\n", b);
    return 0;
}

Its output is as follows:

below are addresses of types of process's mem
Text location:
        address of main (Code Segment): 0x8048388
______________
Stack location:
        initial end of stack: 0xbffffab4
        new end of stack: 0xbffffab0
______________
Data location:
        address of data_var (Data Segment): 0x8049758
        new end of data_var (Data Segment): 0x804975c
______________
BSS location:
        address of bss_var: 0x8049864
______________
Heap location:
        initial end of heap: 0x8049868
        new end of heap: 0x804986c

Use the size command to see the size of each segment of the program; for instance, running size example gives:

   text    data     bss     dec     hex filename
   1654     280       8    1942     796 example

These numbers, however, are static statistics taken when the program was compiled, whereas the output above shows the dynamic values of the running process; the two nevertheless correspond.

The previous example gave us a peek at the logical memory layout used by a process. In this part we continue into the operating system kernel to see how process memory is actually allocated and managed.

From user space to the kernel, a memory address passes through several forms: "logical address", "linear address", and "physical address" (explanations of these addresses appear earlier in the book). A logical address is transformed into a linear address by the segmentation mechanism, and a linear address is transformed into a physical address by the paging mechanism. (Note, however, that although the Linux system retains the segmentation mechanism, every program's segments are fixed to span 0-4G. So although the logical address and the linear address are in principle two different address spaces, in Linux a logical address equals its linear address; their values are the same.) Along this line, the questions we study concentrate on the following:

1. How is the process address space managed?

2. How are process addresses mapped to physical memory?

3. How is physical memory managed?

along with some sub-problems these questions raise, such as the layout of system virtual addresses, memory allocation interfaces, and contiguous versus non-contiguous memory allocation.

Process Memory Space

The Linux operating system uses virtual memory management, so every process has its own process address space that does not interfere with any other. This space is a linear virtual space of 4G; what the user sees and touches are virtual addresses, never the actual physical memory address. This arrangement not only protects the operating system (the user cannot directly access physical memory), but more importantly lets user programs use an address space larger than the actual physical memory (see the hardware basics chapter for the specific reasons).

Before discussing the details of the process space, a few points need clarifying:

First, the 4G process address space is artificially divided into two parts: user space and kernel space. User space runs from 0 to 3G (0xC0000000); kernel space occupies 3G to 4G. A user process can normally access only the virtual addresses of user space and cannot touch kernel-space virtual addresses. Kernel space can be accessed only when the user process makes a system call (that is, when the kernel executes in kernel mode on behalf of the user process).

Second, user space corresponds to the process, so whenever a process switch occurs, the user space mapping changes; kernel space, by contrast, is mapped by the kernel itself, does not change with the process, and is fixed. The kernel-space addresses have their own page table (init_mm.pgd), while each user process has its own, different page table.

Third, the user space of each process is completely independent of all the others. If you don't believe it, run the program above ten times simultaneously (to make them run at the same time, have them all sleep for 100 seconds before returning); you will see that the ten processes occupy the same linear addresses (a ready-made variation is sketched below).
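To try this, here is a minimal variation of the earlier example (the names and the 100-second sleep are illustrative) that prints a couple of addresses and then stays alive so several copies can overlap:

#include <stdio.h>
#include <unistd.h>

int data_var0 = 1;

int main(void)
{
    int stack_var0 = 2;

    /* print the pid together with a data-segment and a stack address */
    printf("pid %d: data at %p, stack at %p\n",
           getpid(), (void *)&data_var0, (void *)&stack_var0);
    sleep(100);   /* keep the process alive so ten copies can run at once */
    return 0;
}

Launching it ten times in the background, each process should report exactly the same virtual addresses, even though they are of course backed by different physical memory.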

Process Memory Management

What process memory management manages is the memory image on the process's linear address space, which in practice means the virtual memory areas (memory regions) the process uses. The process virtual space is a 32- or 64-bit "flat" (independent, contiguous) address space (the exact size depends on the architecture). Managing such a large flat space uniformly is not easy, so for convenience the virtual space is divided into memory areas of variable size (but always a multiple of 4096 bytes), which are lined up in the process's linear address space like parking spaces. The principle for dividing these areas is "group together addresses with consistent access attributes", where an access attribute means readable, writable, executable, and so on.

If you want to see the memory regions a given process occupies, use the command cat /proc/<pid>/maps (pid is the process number; you can use the example shown above by running ./example &, and the pid will be printed to the screen). You will find a lot of numeric information similar to the following.

Because the program example uses dynamic libraries, in addition to the memory regions used by example itself, the map also contains the regions used by those dynamic libraries (for each, the order of the regions is: code segment, data segment, BSS segment).

We extract only the information related to example itself: the first two lines are the code segment and data segment, and the last line is the stack space used by the process.

-------------------------------------------------------------------------------

08048000-08049000 r-xp 00000000 03:03 439029     /home/mm/src/example

08049000-0804a000 rw-p 00000000 03:03 439029     /home/mm/src/example

...............

bfffe000-c0000000 rwxp fffff000 00:00 0

-------------------------------------------------------------------------------

Each row of data is formatted as follows:

(memory region) start-end  permissions  offset  major:minor device number  inode  file

Note: you will surely notice that the process space seems to contain only three memory regions, with no sign of the heap, BSS, and so on mentioned above. This is not really the case: the program's memory segments correspond only loosely to the memory regions of the process address space. The heap, BSS, and (initialized) data segment are all represented in the process space by the single data-segment memory region.
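As a side note, a process can dump its own map through the same procfs interface. A minimal sketch (standard procfs usage, not part of the original example):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    char line[256];

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))  /* one memory region per line */
        fputs(line, stdout);
    fclose(f);
    return 0;
}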

In the Linux kernel, the data structure corresponding to a process memory region is vm_area_struct. The kernel manages each memory region as a separate memory object with a consistent set of operations. This object-oriented approach lets the VMA structure represent many types of memory regions (such as memory-mapped files or the process's user-space stack), each with its own operations.

The vm_area_struct structure is rather complex; please refer to the relevant references for its detailed definition. Here we add only a few notes on how it is organized. vm_area_struct is the basic management unit describing the process address space. A process often needs many memory regions to describe its virtual space, so how are these different memory regions linked together? You might think of a linked list, and indeed vm_area_struct structures are linked into a list; but for efficiency the kernel also organizes the memory regions in a red-black tree (earlier kernels used a balanced tree) to reduce search time. The two coexisting organizations are not redundant: the linked list is used when all nodes must be traversed, while the red-black tree is suited to locating a specific memory region in the address space. Using both lets the kernel achieve high performance for the various operations on memory regions, so both data structures are used at the same time.
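For orientation, here is a heavily abridged sketch of the structure (field names as in 2.6-era kernels; most members are omitted):

struct vm_area_struct {
    struct mm_struct *vm_mm;              /* address space the region belongs to */
    unsigned long vm_start;               /* first address of the region         */
    unsigned long vm_end;                 /* first address past the region       */
    struct vm_area_struct *vm_next;       /* list of regions, sorted by address  */
    pgprot_t vm_page_prot;                /* access permissions of the region    */
    unsigned long vm_flags;               /* VM_READ, VM_WRITE, VM_EXEC, ...     */
    struct rb_node vm_rb;                 /* node in the red-black tree          */
    struct vm_operations_struct *vm_ops;  /* per-region methods, e.g. nopage     */
    struct file *vm_file;                 /* backing file, if any                */
};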

The following figure reflects the management model of the process address space:

A process's address space is described by a memory descriptor structure (mm_struct), which represents the process's entire address space and contains all the information about it, including, of course, the process's memory regions.

Process Memory Allocation and Recycling

Process-related operations such as creating a process (fork()), loading a program (execve()), mapping a file (mmap()), and dynamic memory allocation (malloc()/brk()) all need to allocate memory to the process. What the process applies for and obtains, however, is not actual physical memory but virtual memory; more precisely, a "memory region". Allocating a memory region to a process ultimately comes down to the do_mmap() function (brk is the exception: it is implemented as its own system call and does not use do_mmap()).

The kernel uses the do_mmap() function to create a new linear address range. But saying that the function creates a new VMA is not quite accurate, because if the newly created address range is adjacent to an existing one and they have the same access permissions, the two intervals are merged into one. If they cannot be merged, then a new VMA really is created. In either case, the do_mmap() function adds an address range to the process's address space, whether by extending an existing memory region or by creating a new one.

Similarly, releasing a memory region uses the function do_munmap(), which destroys the corresponding memory region.

How Virtual Memory Becomes Physical

From the above you have seen that the addresses a process can directly manipulate are all virtual addresses. When a process requests memory, the kernel gives it only a virtual memory region, not actual physical pages (for the physical page concept, see the hardware basics chapter); the process gains only the right to a new range of linear addresses. Actual physical memory is allocated only when the process truly accesses the newly obtained virtual address: the "demand paging mechanism" generates a "page fault" exception, which enters the routine that allocates an actual page.

This exception is the basic guarantee behind the virtual memory mechanism: it tells the kernel to actually allocate a physical page for the process and create the corresponding page table entry, after which the virtual address is really mapped to the system's physical memory. (Of course, if the page was swapped out to disk, a page fault exception is also generated, but in that case no page table entry needs to be created.)

The demand paging mechanism defers page allocation until it can be postponed no longer, in no hurry to get everything done at once (this kind of thinking somewhat resembles the proxy pattern in design patterns). It can get away with this because of the "locality principle" of memory access; the benefit of demand paging is that it saves free memory and improves system throughput. For a clearer picture of the demand paging mechanism, see Understanding the Linux Kernel.

Here we need to explain the nopage operation on the memory region structure. When the process virtual memory being accessed has not actually been assigned a page, this operation is called to allocate an actual physical page and create a page table entry for it. In the final example we will show how to use this method.
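For reference, the operation lives in the region's method table; a sketch of the 2.4/2.6-era layout (the exact nopage signature varies between kernel versions):

struct vm_operations_struct {
    void (*open)(struct vm_area_struct *area);
    void (*close)(struct vm_area_struct *area);
    /* called on a page fault in the region; returns the page to map */
    struct page *(*nopage)(struct vm_area_struct *area,
                           unsigned long address, int *type);
};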

System Physical Memory Management

Although what an application operates on is virtual memory mapped onto physical memory, the processor works directly with physical memory. So whenever an application accesses a virtual address, the virtual address must first be translated into a physical address before the processor can resolve the access request. The translation is done by consulting page tables: broadly, the virtual address is split into fields, each field is used as an index into a page table, and the page table entry points either to the next-level page table or to the final physical page.

Each process has its own page tables. The pgd field of the process descriptor points to the process's page global directory. Here we borrow a picture from Linux Device Drivers to show the translation from process address space to physical pages.
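On classic (non-PAE) i386 the split is 10 + 10 + 12 bits. Purely as an illustration of the arithmetic (these mirror the kernel's own pgd_index/pte_index macros):

/* illustrative only: how a 32-bit i386 virtual address is cut up */
#define PGDIR_SHIFT 22                 /* top 10 bits: page directory index */
#define PAGE_SHIFT  12                 /* low 12 bits: offset within a page */

unsigned long pgd_index(unsigned long addr) { return (addr >> PGDIR_SHIFT) & 0x3ff; }
unsigned long pte_index(unsigned long addr) { return (addr >> PAGE_SHIFT) & 0x3ff; }
unsigned long page_offs(unsigned long addr) { return addr & ((1UL << PAGE_SHIFT) - 1); }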

The process just described is easy to say and hard to do, because a physical page must be allocated before a virtual address can be mapped to it; that is, you must first obtain a free page from the kernel and then create the page table entry. The following briefly introduces the kernel's mechanisms for managing physical memory.

Physical Memory Management (page management)

The Linux kernel manages physical memory through a paging mechanism that divides the entire memory into pages of 4K bytes (on the i386 architecture), so the basic unit of memory allocation and reclamation is the memory page. Paging makes it possible to allocate memory flexibly: there is no requirement for large blocks of contiguous memory [3], so the system can gather pages scattered here and there into the memory a process needs. Even so, in practice the system tends to allocate contiguous blocks of memory, because allocating contiguous memory does not require changing the page tables as often and thereby reduces TLB flushes (frequent flushes greatly reduce access speed).

Given this requirement, the kernel allocates physical pages so as to minimize discontinuity, using the "buddy" relationship to manage the free pages. The buddy allocation algorithm should be familiar to everyone (almost every operating systems textbook covers it), so we will not go into its details; refer to the relevant references if needed. What we need to know is that the organization and management of free pages in Linux is built on the buddy relationship, so free page allocation must also follow it, with the smallest unit being a power-of-two number of pages. The basic functions for allocating free pages in the kernel are get_free_page/get_free_pages, which allocate either a single page or a specified span of pages (2, 4, 8, ..., 512 pages).
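A hedged sketch of what this looks like from kernel code (order 3 means 2^3 = 8 contiguous pages, i.e. 32K with 4K pages; error handling abridged):

#include <linux/mm.h>

static unsigned long buf;

static int grab_pages(void)
{
    buf = __get_free_pages(GFP_KERNEL, 3);  /* 8 physically contiguous pages */
    if (!buf)
        return -ENOMEM;
    return 0;
}

static void drop_pages(void)
{
    free_pages(buf, 3);                     /* must free with the same order */
}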

Note: get_free_page allocates memory in the kernel. Unlike malloc in user space, which allocates dynamically from the heap, malloc in effect relies on the brk() system call to expand or shrink the process heap space (it modifies the process's brk field). If the existing memory region is not large enough to hold the heap, the corresponding memory region is expanded or shrunk in multiples of the page size, but the brk value itself is not changed in page multiples; it tracks the bytes actually requested. So malloc in user space can allocate memory with byte granularity, while the kernel underneath still allocates in units of pages.
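You can watch this from user space; a small sketch (note that glibc serves very large requests via mmap instead of brk, so a moderate size is used here):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *before = sbrk(0);        /* current program break */
    void *p = malloc(64 * 1024);   /* grows the heap via brk */
    void *after = sbrk(0);

    printf("break before: %p, after: %p (block at %p)\n", before, after, p);
    free(p);
    return 0;
}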

In addition, it is worth mentioning that physical pages are described in the system by the page structure, struct page. All the page structures are stored in the array mem_map[], through which every page in the system can be found (whether free or not). The free pages among them can be indexed through the buddy system's free-page lists (free_area[MAX_ORDER]) mentioned above.

Kernel Memory Usage

Slab

As the saying goes, a ruler may fall short and an inch may prove long. Allocating memory with the page as the smallest unit is convenient for managing the system's physical memory in the kernel, but most of the memory the kernel itself uses comes in small pieces (much less than a page), for example to store file descriptors, process descriptors, virtual memory region descriptors, and so on, each of which needs far less than a page. Compared to pages, the memory holding these descriptors is like breadcrumbs next to a loaf of bread. Many such small chunks can be gathered within a single page, and these small chunks are created and destroyed as frequently as breadcrumbs appear.

To meet the kernel's need for such small chunks of memory, the Linux system employs a technology called the slab allocator. The implementation of the slab allocator is quite complex, but its principle is not hard: the core idea is the application of a "storage pool [4]". Memory fragments (small chunks of memory) are treated as objects which, after use, are not released directly but are cached in the pool for the next use, which avoids the extra overhead of frequently creating and destroying objects.

Slab technology not only avoids the waste caused by internal fragmentation (explained below); a main purpose of introducing the slab allocator is also to reduce the number of calls into the buddy allocation algorithm, since frequent allocation and reclamation inevitably leads to memory fragmentation, making it hard to find large blocks of contiguous usable memory. Slab also makes it possible to exploit the hardware cache to improve access speed.

Slab is not a memory allocator that exists independently of the buddy system; slab is still based on pages. In other words, slab carves pages (taken from the buddy-managed free page lists) into numerous small memory blocks for allocation. Slab objects are allocated and destroyed with kmem_cache_alloc and kmem_cache_free.
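A hedged sketch of the cache interface (the kmem_cache_create signature shown is the later 2.6 form; older kernels take extra constructor/destructor arguments):

#include <linux/slab.h>

struct foo { int a, b; };

static struct kmem_cache *foo_cache;

static int foo_setup(void)
{
    foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo), 0, 0, NULL);
    if (!foo_cache)
        return -ENOMEM;
    return 0;
}

static void foo_demo(void)
{
    struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);  /* from the pool */
    if (f)
        kmem_cache_free(foo_cache, f);                        /* back to the pool */
}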

Kmalloc

The slab allocator is used not only to store kernel-specific structures but also to serve kernel requests for small chunks of memory. Given the characteristics of the slab allocator, requests for memory blocks smaller than a page in kernel code generally go through the interface the slab allocator provides: kmalloc (although it can allocate from 32 up to 131072 bytes). From the point of view of kernel memory allocation, kmalloc can be seen as an effective complement to get_free_page(s), offering a more flexible allocation granularity.
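In use it is as simple as malloc, only with a flags argument; a minimal sketch:

#include <linux/slab.h>

static int kmalloc_demo(void)
{
    char *p = kmalloc(100, GFP_KERNEL);  /* rounded up to a slab size internally */
    if (!p)
        return -ENOMEM;
    /* ... use the buffer ... */
    kfree(p);
    return 0;
}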

If you are interested, you can find runtime statistics for all the slabs the kernel currently uses in /proc/slabinfo, which shows usage information for every slab in the system. From that information you can see that besides the slabs used for specific kernel structures, there are also many slabs prepared for kmalloc (some of them prepared for DMA).

Kernel Non-contiguous Memory Allocation (vmalloc)

Whether buddy or slab, from the standpoint of memory management theory they are essentially aimed at the same thing: preventing "fragmentation". Fragmentation, however, divides into external and internal fragmentation. Internal fragmentation means that to satisfy a small request for (contiguous) memory, the system has to hand over a large block of contiguous memory, wasting space. External fragmentation means the system has enough total memory, but it is scattered in fragments that cannot satisfy a request for a large block of "contiguous memory". Any form of fragmentation is an obstacle to the system's efficient use of memory. The slab allocator lets many small chunks within a page be allocated and used independently, avoiding internal fragmentation and saving free memory. The buddy system manages memory blocks by size, which to some extent reduces the harm of external fragmentation, since page-frame allocation is not blind but ordered by size; but the buddy system only reduces external fragmentation, it does not eliminate it. Whatever free memory remains after repeated page allocations is still scattered.

So the final weapon against external fragmentation is still this: combine discontinuous memory blocks into what "looks like a large block of memory", a situation entirely analogous to user-space allocation of virtual memory, which is logically contiguous but mapped to physical memory that is not necessarily contiguous. The Linux kernel borrows this technique to allow kernel programs to allocate virtual addresses in the kernel address space, likewise using a page table (the kernel page table) to map the virtual addresses onto scattered physical pages. This perfectly solves the problem of external fragmentation in kernel memory use. The kernel provides the vmalloc function to allocate kernel virtual memory. Unlike kmalloc, it can allocate a much larger space (it can exceed 128K, but must be a multiple of the page size); however, compared with kmalloc, vmalloc must remap the kernel virtual addresses and update the kernel page table, so its allocation is less efficient (trading time for space).

Like a user process, the kernel also has an mm_struct, called init_mm, describing the kernel address space, in which the page table entry pgd = swapper_pg_dir contains the mappings of the system kernel space (3G-4G). Therefore, when vmalloc allocates kernel virtual addresses it must update this kernel page table, whereas kmalloc or get_free_page, allocating contiguous memory, need not update the kernel page table.

Kernel virtual memory allocated by vmalloc lies in a different interval from kernel virtual memory allocated by kmalloc/get_free_page, and the two do not overlap, because the kernel virtual space is managed in partitions, each with its own duty. Process space occupies addresses from 0 to 3G (more precisely, to PAGE_OFFSET, which on x86 equals 0xC0000000). From 3G up to the address VMALLOC_START stretches the physical memory mapping region (this region contains the kernel image, the array of physical page descriptors mem_map, and so on). For example, on a system with 64M of memory (visible with free), the range 3G to 3G+64M is mapped to physical memory, and VMALLOC_START lies near 3G+64M ("near" because a gap of 8M is left between the physical memory mapping region and VMALLOC_START), while VMALLOC_END lies close to 4G ("close" because at the very end the system reserves a 128K region for dedicated page mappings, and there may also be a high-end memory mapping region; these are details we will not untangle here).

The figure above gives a rough outline of this memory distribution.

Contiguous memory allocated by the get_free_page or kmalloc functions is confined to the physical memory mapping region, so the kernel virtual address it returns differs from the true physical address only by an offset (PAGE_OFFSET); you can easily convert it into a physical address. The kernel also provides the virt_to_phys() function to convert a kernel virtual address within the physical memory mapping region into a physical address. Note that addresses in the physical memory mapping region are kept in step with the kernel page table, and every physical page in the system has a corresponding kernel virtual address there (within the physical memory mapping region).
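A sketch of the conversion (virt_to_phys is valid only for addresses inside the physical memory mapping region, exactly as stated above):

#include <linux/kernel.h>
#include <asm/io.h>

void show_phys(void *kaddr)  /* kaddr from kmalloc or get_free_page */
{
    unsigned long phys = virt_to_phys(kaddr);

    /* the difference is the constant PAGE_OFFSET on i386 */
    printk("virt %p -> phys 0x%lx\n", kaddr, phys);
}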

Addresses allocated by vmalloc are confined between VMALLOC_START and VMALLOC_END. Each block of kernel virtual memory allocated by vmalloc corresponds to a vm_struct structure (not to be confused with vm_area_struct, which describes a process virtual memory region), and the different kernel virtual address blocks are separated by 4K of free space to catch overruns; see the figure below. Just like process virtual addresses, these virtual addresses have no simple displacement relationship with physical memory; they must be converted to physical addresses or physical pages through the kernel page table. They may not yet be mapped at all: the physical pages are actually allocated only when a page fault occurs.

Here is a small program to help you identify the regions used by the allocation functions above.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

unsigned char *pagemem;
unsigned char *kmallocmem;
unsigned char *vmallocmem;

int init_module(void)
{
    pagemem = (unsigned char *)get_free_page(GFP_KERNEL);
    printk("<1>pagemem=%p\n", pagemem);

    kmallocmem = kmalloc(100, GFP_KERNEL);
    printk("<1>kmallocmem=%p\n", kmallocmem);

    vmallocmem = vmalloc(1000000);
    printk("<1>vmallocmem=%p\n", vmallocmem);

    return 0;
}

void cleanup_module(void)
{
    free_page((unsigned long)pagemem);
    kfree(kmallocmem);
    vfree(vmallocmem);
}

Example

Memory mapping (mmap) is a great feature of the Linux operating system: it maps system memory to a file (device) so that accessing the file's contents amounts to accessing that memory. Its greatest benefit is speeding up memory access, and it lets you program against the file system interface (devices are treated as special files in Linux) to access memory, reducing development difficulty. Many device drivers use memory mapping to associate a range of user space with device memory; whenever the user reads or writes within the assigned address range, it is actually accessing the device memory. Accessing the device file thus becomes equivalent to accessing the mapped memory region; in other words, memory can be accessed through the file manipulation interface. The X server in Linux is one example of using memory mapping for direct, high-speed access to video card memory.

Friends familiar with file operations will know that the file_operations structure has an mmap method; when the user issues an mmap system call, the kernel calls this method so that memory can then be accessed through the file. But before calling the file system's mmap method, the kernel still has to handle allocating the memory region (vm_area_struct), establishing page tables, and other work. We will not go into the specific mapping details; what should be emphasized is that the page tables can be established either with the remap_page_range method, which builds the page table entries for the whole mapping region at once, or with the vm_area_struct nopage method, which builds them page by page when a fault occurs. The first method is simpler, more convenient, and faster than the second, but less flexible: once all the page tables are built they are fixed, which does not suit situations where page tables need to be created on the spot, such as mapping regions that need to grow, or the case in our example below.

Our example here uses a memory map to map part of the system kernel's virtual memory into user space for an application to read; you could use it for bulk transfer of information from kernel space to user space. So we will try to write a virtual character device driver that maps system kernel space into user space, mapping kernel virtual memory to user virtual addresses. You saw in the previous section that there are two kinds of virtual addresses in Linux kernel space: the physically and logically contiguous virtual addresses of the physical memory mapping region, and the logically contiguous but physically discontiguous virtual addresses allocated by vmalloc. Our example program will demonstrate the whole process of mapping vmalloc-allocated kernel virtual addresses into a user address space.

The program has two main problems to solve:

The first is how to correctly convert the kernel virtual memory allocated by vmalloc into a physical address.

Memory mapping must first obtain the physical address to be mapped before it can map it to the required user virtual address. We have seen that addresses in the kernel's physical memory mapping region can be converted into actual physical addresses with the kernel function virt_to_phys, but vmalloc-allocated kernel virtual addresses cannot be converted directly. So we must first "take care of" this part of virtual memory: turn it into a kernel virtual address within the physical memory mapping region, and then convert that into a physical address using virt_to_phys.

The conversion process requires the following steps:

a) Find the page table corresponding to the vmalloc virtual address, and in it the page table entry for that address.

b) Obtain the page pointer (struct page) from the page table entry.

c) Through the page, obtain the corresponding address in the kernel's physical memory mapping region (a code sketch follows the illustration below).

As shown in the following illustration:
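A hedged sketch of these three steps using the 2.4-era page-table walk helpers (pte_offset was later split into pte_offset_kernel/pte_offset_map; checks for missing entries are abridged):

static unsigned long vaddress_to_kaddress(unsigned long vaddr)
{
    pgd_t *pgd = pgd_offset_k(vaddr);      /* (a) entry in the kernel page directory */
    pmd_t *pmd = pmd_offset(pgd, vaddr);   /* (a) middle-level entry                 */
    pte_t *pte = pte_offset(pmd, vaddr);   /* (a) the page table entry itself        */
    struct page *page = pte_page(*pte);    /* (b) page described by the entry        */

    /* (c) kernel address of that page in the physical memory mapping region */
    return (unsigned long)page_address(page) + (vaddr & ~PAGE_MASK);
}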

Second, when the vmalloc-allocated area is accessed and the virtual memory has not yet been mapped to a physical page, the resulting "page fault" exception must be handled. For this we need to implement the nopage operation of the memory region to return the pointer to the mapped physical page; in our example, it returns the page behind the address in the kernel physical memory mapping region found by the procedure above. Because the correspondence between vmalloc-allocated virtual addresses and physical addresses is determined only at fault time, the remap_page_range method cannot be used; only the VMA's nopage method can build the mapping one page at a time.
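A hedged sketch of such a handler (2.4-style signature; vmalloc_area is a hypothetical module-global holding the vmalloc base address, and vaddress_to_kaddress is the walk sketched earlier):

static struct page *map_nopage(struct vm_area_struct *vma,
                               unsigned long address, int write_access)
{
    unsigned long offset = address - vma->vm_start;               /* into the mapping  */
    unsigned long vaddr  = (unsigned long)vmalloc_area + offset;  /* hypothetical base */
    unsigned long kaddr  = vaddress_to_kaddress(vaddr);           /* see sketch above  */
    struct page *page    = virt_to_page(kaddr);

    get_page(page);   /* the new mapping holds a reference to the page */
    return page;
}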

 

 

Program Composition

map_driver.c: a virtual character device driver loaded as a module. The driver is responsible for mapping a certain length of kernel virtual address space (allocated with vmalloc) to a device file. Its main functions are: vaddress_to_kaddress(), which performs the page table walk on an address allocated by vmalloc to find the corresponding kernel physical mapping address (an address of the kind kmalloc returns); and map_nopage(), which, when the process accesses a page not currently present in the VMA, looks up the physical page corresponding to the address and returns a pointer to its page structure.

test.c: uses the device file corresponding to the driver module to read kernel memory from user space. As a result, you can see the contents stored at the kernel virtual address (ok!) displayed on the screen.

Execution Steps

Compile map_driver.c into the module map_driver.o (see the makefile for the specific parameters).

Load the module: insmod map_driver.o

Generate the corresponding device file:

1) In /proc/devices, find the device name and device number corresponding to map_driver: grep mapdrv /proc/devices

2) Create the device file: mknod mapfile c 254 0 (the device number is 254 on my system)

Use maptest to read the mapfile file; the information fetched from the kernel is printed to the screen.

All programs can be downloaded as mmap.tar (thanks to Martin Frey; the body of the program was inspired by him).


[1] Statically allocated memory is memory the compiler allocates according to the source program while compiling it. Dynamically allocated memory is allocated by run-time library functions after the program is compiled. Static allocation is faster and more efficient but limited, since it happens before the program runs. Dynamic allocation happens while the program runs, so it is slower but highly flexible.

[2] The term "BSS" has been around for some years; it is the abbreviation of Block Started by Symbol. Because uninitialized variables have no corresponding values, they need not be stored in the executable object. But because the C standard mandates that uninitialized global variables be given special default values (essentially zero), the kernel loads such (unassigned) variables into memory from the executable code and then maps the zero page onto that piece of memory, so the uninitialized variables are given the value 0. This avoids explicit initialization in the object file and reduces wasted space. (From Linux Kernel Development.)

[3] There are also situations where contiguous memory is a hard requirement, such as memory used in DMA transfers, which must be allocated contiguously because DMA does not go through the paging mechanism.

[4] The idea of the storage pool is widely applied in computer science, for example in database connection pools, memory access pools, and so on.
