Linux virtual memory and physical memory

First, let's look at virtual memory.

Level 1 Understanding

1. Each process has its own 4 GB memory space, and every process's memory space has a similar layout.

2. When a new process is created, it gets its own memory space, and the process's code and data are copied from disk into that space. Where everything lives is recorded in the process control block (task_struct): a linked list in task_struct tracks how the memory space is allocated, which addresses hold data and which do not, and whether each region is readable or writable.

3. The memory space allocated to each process is mapped to corresponding space on disk.

Problem:

The computer clearly does not have that much physical memory (n processes would require n * 4 GB).

Creating a process requires copying the program file from disk into the process's memory. If several processes run the same program, memory is wasted.

Level 2 Understanding

1. The 4 GB memory space of each process is only virtual address space. Every time the process accesses an address, that virtual address must be translated into an actual physical memory address.

2. All processes share the same physical memory. Each process maps into physical memory only the parts of its virtual address space that it currently needs.

3. Each process uses a page table to record which parts of its virtual address space currently have their data in physical memory, and at which physical addresses.

4. Each entry in the page table has two parts: the first records whether the page is present in physical memory, and the second records the address of the physical page frame (if it is present); a simplified sketch appears after this list.

5. When a process accesses a virtual address, the page table is consulted; if the corresponding data is not in physical memory, a page fault occurs.

6. The page fault handler copies the data the process needs from disk into physical memory. If memory is full and there is no free frame, a page is chosen to be evicted; if the evicted page has been modified, it must first be written back to disk.
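To make points 4 and 5 concrete, here is a toy, single-level page table lookup in C. The pte_t layout and translate() helper are purely illustrative assumptions, not the layout used by any real architecture or by the kernel:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical page table entry: a present bit plus a physical frame number. */
typedef struct {
    bool     present;   /* is the page currently in physical memory?        */
    uint64_t frame;     /* physical frame number, valid only if present     */
} pte_t;

#define PAGE_SHIFT 12                  /* 4 KB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Translate a virtual address using a flat, single-level table (toy model).
 * Returns -1 to signal a page fault when the page is not resident. */
static int64_t translate(const pte_t *table, uint64_t vaddr)
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
    uint64_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within page  */

    if (!table[vpn].present)
        return -1;                              /* page fault: data must be loaded from disk */

    return (int64_t)((table[vpn].frame << PAGE_SHIFT) | offset);
}
```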

Summary:

Advantages:

1. Since each process sees a consistent, fixed memory layout, the linker can assign memory addresses when linking an executable without worrying about where the data will actually sit in physical memory. This is the benefit of independent address spaces.

2. When different processes use the same code, such as code in a shared library, physical memory needs to hold only one copy; each process simply maps it into its own virtual memory, which saves memory.

3. When a program needs to allocate contiguous memory, it only needs contiguous space in virtual memory, not contiguous physical memory, so physical memory fragments can still be used.

In addition, when a process is created and loaded, the kernel only builds the process's virtual memory layout. Concretely, it initializes the memory-related lists in the process control block; it does not actually copy the program's code and data (the .text and .data segments, for example) into physical memory. It merely establishes the mapping between virtual memory and the disk file (called a memory mapping), and the data is copied in later, through page faults, once the program actually runs. Likewise, when a running process allocates memory dynamically, for example with malloc, only virtual memory is allocated: the corresponding page table entries are set up, and a page fault occurs only when the process actually touches that data.
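A small user-space experiment makes this lazy behavior visible: malloc a large buffer, then compare the process's resident set size before and after touching every page. This is only a sketch; exact numbers depend on the libc and kernel, and the resident set size is read from the second field of /proc/self/statm:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Resident set size of the current process, in pages (2nd field of /proc/self/statm). */
static long resident_pages(void)
{
    long size = 0, resident = -1;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        if (fscanf(f, "%ld %ld", &size, &resident) != 2)
            resident = -1;
        fclose(f);
    }
    return resident;
}

int main(void)
{
    const size_t len = 64UL << 20;            /* 64 MB */
    long page = sysconf(_SC_PAGESIZE);

    char *buf = malloc(len);                  /* only virtual memory so far */
    printf("after malloc:   %ld resident pages\n", resident_pages());

    for (size_t i = 0; i < len; i += (size_t)page)
        buf[i] = 1;                           /* touching each page faults it in */
    printf("after touching: %ld resident pages\n", resident_pages());

    free(buf);
    return 0;
}
```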

Supplementary Understanding:

Virtual memory involves three concepts: virtual address space, disk space, and physical memory space.

It can be considered that all of the virtual space is mapped to disk space (in practice the mapping is set up on demand, for example through mmap), and the mapping is recorded in the page table. When an address is accessed, the valid bit in the page table tells you whether the data is in memory. If it is not, a page fault occurs and the corresponding data is copied from disk into memory; if no memory is free, a victim page is chosen and replaced.

mmap creates a mapping from virtual address space to disk space: it maps a range of virtual addresses onto a disk file. If no address is specified, the system chooses one automatically, and the function returns the corresponding (virtual) memory address. When that address is first accessed, the file contents are copied from disk into memory, after which they can be read or written. Finally, munmap removes the mapping between the virtual range and the file; for a shared file mapping, modified pages are written back to the file. This is one way to read and write disk files, and it is also a way for processes to share data.
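A minimal user-space sketch of a file-backed mapping; the file name "data.bin" is made up for illustration, and msync() is used here to force dirty pages back to the file before unmapping:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);        /* example file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file; the kernel picks the virtual address (first arg NULL). */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Reads and writes through p fault pages of the file in on demand. */
    if (st.st_size >= 5)
        memcpy(p, "hello", 5);

    msync(p, st.st_size, MS_SYNC);   /* flush dirty pages back to the file */
    munmap(p, st.st_size);           /* remove the mapping                 */
    close(fd);
    return 0;
}
```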

Next, let's discuss physical memory.

Requesting memory in kernel mode is more direct than requesting it in user mode: the kernel does not use the deferred (lazy) allocation technique applied to user-mode requests. The kernel assumes that once a kernel function asks for memory, the request is correct and reasonable and must be satisfied immediately. For user-mode requests, by contrast, the kernel defers allocating physical memory as long as possible: the user process is first granted only a virtual memory area, and the real physical memory is obtained later through a page fault.

1. Kernel mapping of physical memory

On the IA32 architecture, the kernel's virtual address space is only 1 GB (from 3 GB to 4 GB). Therefore, at most roughly 1 GB of physical memory (regular, or low, memory) can be mapped directly into the kernel address space, while physical memory beyond that (high memory) cannot be permanently mapped into the kernel space. To let the kernel use all of physical memory, it adopts the following approach.

1). High memory cannot all be mapped into the kernel space, which means those physical pages have no permanent linear address. However, the kernel still allocates a page frame descriptor for every physical page, and all page frame descriptors live in the mem_map array, so the linear address of each page frame descriptor is fixed. The kernel can therefore allocate high memory with alloc_pages() and alloc_page(), because these functions return the linear address of the page frame descriptor rather than of the page itself.

2). The last 128 MB of the kernel address space is reserved for mapping high memory; otherwise high memory, having no linear address, could not be accessed by the kernel at all. Kernel mappings of high memory in this region are necessarily temporary, since otherwise only 128 MB of high memory could ever be mapped. When the kernel needs to access high memory, it temporarily maps it into this region, and reuses the region for other high-memory mappings once it is done.

Because this kernel mapping area for high memory must be reserved, only 896 MB of physical memory can be mapped directly; this boundary is stored in the high_memory variable. (Figure: linear address ranges of the kernel address space.)

The kernel thus uses three mechanisms to map high memory into the kernel space: permanent kernel mappings, fixed (temporary) mappings, and vmalloc.
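On 32-bit kernels, the permanent kernel mapping is what kmap() and kunmap() provide. The snippet below is a minimal kernel-side sketch of that interface, assuming an older 32-bit kernel where high memory exists; recent kernels favor kmap_local_page():

```c
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/highmem.h>
#include <linux/string.h>

/* Sketch for an older 32-bit kernel: allocate a page frame that may come
 * from high memory, map it into the kernel's permanent mapping area,
 * use it, then drop the mapping so the slot can be reused. */
static int touch_high_page(void)
{
    struct page *page = alloc_page(GFP_HIGHUSER);  /* may be a high-memory frame */
    void *vaddr;

    if (!page)
        return -ENOMEM;

    vaddr = kmap(page);           /* give the frame a kernel linear address */
    memset(vaddr, 0, PAGE_SIZE);  /* the kernel can now access it normally  */
    kunmap(page);                 /* release the mapping                    */

    __free_page(page);
    return 0;
}
```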

2. Physical memory management mechanism

Because of how physical memory is mapped into the kernel space, the ways physical memory is managed also differ. The kernel's main mechanisms for managing physical memory are the buddy algorithm, the slab cache, and vmalloc. The buddy algorithm and the slab cache allocate physical memory in the directly mapped area, while the vmalloc mechanism allocates physical memory reached through the high-memory mapping area.

Buddy Algorithm

The buddy algorithm is responsible for allocating and freeing large blocks of contiguous physical memory, with the page frame as the basic unit. This mechanism avoids external fragmentation.

Per-CPU Page Frame Cache

The kernel requests and releases single page frames very frequently. This cache holds pre-allocated page frames to satisfy single-page-frame requests issued by the local CPU.

Slab Cache

The slab cache is used to allocate small blocks of physical memory; it also serves as a cache for objects that the kernel allocates and frees frequently.

Vmalloc Mechanism

The vmalloc mechanism lets the kernel access non-contiguous physical page frames through contiguous linear addresses, so that high physical memory can be used as fully as possible.

3. Physical memory allocation

When the kernel issues a memory request, different allocators are used depending on which kernel interface the request goes through.

3.1 Zoned Page Frame Allocator

The zoned page frame allocator handles allocation requests for contiguous page frames. It is split into two parts: a front-end zone allocator and the buddy system.

The zone allocator is responsible for finding a memory zone that can satisfy the requested block of page frames; within each zone, the buddy system performs the actual page frame allocation. For better performance, single page frame requests are served directly from the per-CPU page frame cache. The allocator exposes several functions and macros for requesting page frames, which wrap one another.

These functions and macros all wrap the core allocation function __alloc_pages_nodemask(), forming allocation functions that meet different needs. The alloc_pages() family returns the page descriptor of the first allocated page frame, while the __get_free_pages() family returns the linear address of the memory.
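A kernel-side sketch of the two families, assuming low-memory allocations so that page_address() yields a valid linear address:

```c
#include <linux/gfp.h>
#include <linux/mm.h>

/* Sketch: request 4 contiguous page frames (order 2) through both interfaces. */
static void page_alloc_demo(void)
{
    /* alloc_pages() returns the page descriptor of the first frame. */
    struct page *pages = alloc_pages(GFP_KERNEL, 2);
    if (pages) {
        void *vaddr = page_address(pages);  /* linear address (low memory) */
        (void)vaddr;
        __free_pages(pages, 2);
    }

    /* __get_free_pages() returns the linear address directly. */
    unsigned long addr = __get_free_pages(GFP_KERNEL, 2);
    if (addr)
        free_pages(addr, 2);
}
```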

3.2 Slab Allocator

The slab allocator was originally introduced to address internal fragmentation of physical memory. It treats the common data structures used in the kernel as objects and creates a cache for each kind of object; the kernel then allocates and frees such objects from that cache.

The cache for each object consists of several slabs, and each slab consists of one or more page frames. Although the slab allocator can carve out memory blocks smaller than a single page, all of the memory it manages is obtained through the buddy algorithm. Slab caches are divided into dedicated caches and general-purpose caches. A dedicated cache is created for one specific object type, such as the memory descriptor. General-purpose caches serve allocations of arbitrary size; their interface is kmalloc().
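A kernel-side sketch of both interfaces; the my_obj type and the cache name are made up for illustration:

```c
#include <linux/errno.h>
#include <linux/slab.h>

/* Hypothetical object type used only for illustration. */
struct my_obj {
    int  id;
    char name[32];
};

static struct kmem_cache *my_cache;

static int slab_demo(void)
{
    /* Dedicated cache: one cache for this specific object type. */
    my_cache = kmem_cache_create("my_obj_cache", sizeof(struct my_obj),
                                 0, SLAB_HWCACHE_ALIGN, NULL);
    if (!my_cache)
        return -ENOMEM;

    struct my_obj *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
    if (obj)
        kmem_cache_free(my_cache, obj);

    /* General-purpose caches: kmalloc() picks a suitably sized cache. */
    void *buf = kmalloc(128, GFP_KERNEL);
    kfree(buf);

    kmem_cache_destroy(my_cache);
    return 0;
}
```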

3.3 Non-Contiguous Memory Area Allocation

The kernel uses vmalloc() to request physically non-contiguous memory. On success, the function returns the starting address of a linearly contiguous memory area; otherwise it returns NULL. Memory obtained from vmalloc() differs from memory obtained from kmalloc(): with kmalloc(), both the linear addresses and the physical addresses are contiguous, whereas with vmalloc() the linear addresses are contiguous but the physical addresses are scattered, with the two connected through the kernel page tables. What vmalloc() does is easy to understand:

1). Find a free, contiguous range of linear address space;

2). Allocate a set of (possibly non-contiguous) page frames;

3). Establish the mapping between the linear address range and those page frames, that is, modify the kernel page tables.

The allocation principle of vmalloc() is similar to user-mode memory allocation: contiguous virtual addresses are used to access scattered physical memory, with the virtual and physical addresses connected through page tables, so physical memory can be used effectively. Note, however, that vmalloc() allocates the physical memory immediately when it is called, because the kernel treats the request as legitimate and urgent; for user-mode requests, by contrast, the kernel delays allocation as long as possible. After all, user mode and kernel mode are not at the same privilege level.
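A minimal kernel-side sketch of the interface, allocating a buffer that must be linearly contiguous even though its backing page frames may be scattered:

```c
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Sketch: a 4 MB buffer mapped page by page onto scattered page frames. */
static void vmalloc_demo(void)
{
    void *buf = vmalloc(4 << 20);   /* 4 MB of linearly contiguous addresses */
    if (!buf)
        return;

    memset(buf, 0, 4 << 20);        /* use it like any other kernel buffer   */

    vfree(buf);                     /* unmap and free the page frames        */
}
```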
