How to manage Linux memory


Linux memory management principles

(In both user mode and kernel mode, the kernel's "logical address" refers specifically to an address before the Linux kernel's linear virtual offset has been applied.)

3. Buddy algorithm and slab allocator

Take a system with 16 pages of RAM, so the largest contiguous block is 16 pages. The order(0) buddy bitmap then has 16/2 = 8 bits: page frames Page1 and Page2 form one block and page frames Page3 and Page4 form another, with one bit for each such pair of blocks. The order(1) bitmap has 4 bits, the order(2) bitmap has 2 bits, and so on.

Allocation example. Suppose we need a free block of order(1) and the free lists are:

    order(0): 5, 10
    order(1): [8,9]
    order(2): [12,13,14,15]
    order(3): (empty)

There is a free block on the order(1) list, so it is assigned to the user and removed from the list. If another order(1) block is then needed, there is no longer a free block on order(1), so we move up to order(2): the block [12,13,14,15] is split, the [12,13] half is returned to the user, and the [14,15] half is placed on the order(1) free list, leaving:

    order(0): 5, 10
    order(1): [14,15]
    order(2): (empty)
    order(3): (empty)

Freeing (recycling) example. When we free page 11 (order 0), we first locate the bit that represents page 11 in the order(0) buddy bitmap, using the formula

    index = page_idx >> (order + 1) = 11 >> (0 + 1) = 5

and if that bit shows that the buddy block is also free, the two blocks are merged into a block of the next higher order and the process repeats there. (A small sketch of this index calculation follows below.) The buddy algorithm is an effort to avoid external fragmentation of physical memory; the slab allocator, in turn, resolves internal fragmentation and provides kernel object caches.

4. Page reclaim mechanism

Page reclaim overview: the PFRA (Page Frame Reclaiming Algorithm) reclaims ordinary pages; which pages to reclaim is decided on a least-recently-used basis; reverse mapping; page swap-in and swap-out; and, as a last resort, killing processes (the OOM killer).

5. Memory management architecture

Address mapping, virtual address management, physical memory management, establishing the address map, kernel space management, page swap-in and swap-out, Linux kernel page reclaim, memory management of the user stack, the buddy algorithm (which avoids fragmentation of physical memory), the slab allocator, and the Linux slub allocator.

Related reading: analysis of Linux kernel memory management, analysis of the Linux slub allocator, analysis of reading and writing of Linux kernel files, a brief analysis of the Linux kernel virtual file system, memory mapping, and analysis of Linux page reclaim.
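Before moving on, here is a minimal user-space sketch of the buddy-bitmap index calculation used in section 3 above (the helper names are invented for illustration; this is not kernel code):

    #include <stdio.h>

    /* Index of the bit in the order-N buddy bitmap that covers a given page:
     * each bit tracks one pair of 2^order blocks, hence the shift by order + 1. */
    static unsigned long buddy_bitmap_index(unsigned long page_idx, unsigned int order)
    {
        return page_idx >> (order + 1);
    }

    /* Index of the first page of the buddy of the block starting at page_idx. */
    static unsigned long buddy_of(unsigned long page_idx, unsigned int order)
    {
        return page_idx ^ (1UL << order);
    }

    int main(void)
    {
        /* Freeing page 11 at order 0, as in the example above. */
        printf("bitmap index = %lu\n", buddy_bitmap_index(11, 0));  /* prints 5  */
        printf("buddy page   = %lu\n", buddy_of(11, 0));            /* prints 10 */
        return 0;
    }

In the free lists of the example, page 10 (the buddy of page 11) is itself on the order(0) list, so the two pages would be merged into the order(1) block [10,11].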
http://www.ibm.com/developerworks/cn/linux/l-memmod/?S_tact=105agx52&s_cmp=tech-51cto

Exploring the Linux memory model

Understanding the memory model used by Linux is the first step toward a greater mastery of Linux design and implementation, so this article outlines the Linux memory model and its management. Linux uses a monolithic kernel architecture, which defines a set of primitives or system calls to implement operating-system services such as process management, concurrency control, and memory management; these services run in supervisor mode in a number of modules. Although Linux still maintains a symbolic representation of the segment control unit model for compatibility reasons, that model is rarely used in practice. The main issues related to memory management are:
    • The management of virtual memory, which is a logical layer between application requests and physical memory.
    • Management of physical memory.
    • Kernel virtual memory management/kernel memory allocator, which is a component used to satisfy requests for memory. This request for memory may be from the kernel or from the user.
    • Management of virtual address space.
    • Swapping and caching.
This article explores the following topics, which will help you understand the internals of Linux from the perspective of operating-system memory management:
    • The segment control unit model, in general and as used in Linux
    • The paging model, in general and as used in Linux
    • Knowledge of physical memory
Although this article does not detail how the Linux kernel manages memory, it introduces the overall memory model and the way the system addresses memory, which provides a framework for further learning. This article focuses on the x86 architecture, but the knowledge applies equally to other hardware implementations.

x86 memory architecture

In the x86 architecture, memory is divided into three types of addresses:
    • A logical address is the address of a storage location, which may or may not correspond directly to a physical location. Logical addresses are typically used when requesting information from a controller.
    • A linear address (also called a flat address space) is memory addressed starting from 0. Each subsequent byte is referenced by the next sequential number (0, 1, 2, 3, and so on) up to the end of memory. This is how most non-Intel CPUs address memory. Intel architectures use a segmented address space, in which memory is divided into 64KB segments and a segment register always points to the base of the segment currently being addressed; 32-bit mode on this architecture is treated as a flat address space, but it too uses segments.
    • A physical address is an address represented by the bits on the physical address bus. The physical address may differ from the logical address; the memory management unit (MMU) translates logical addresses into physical addresses.
The CPU uses two units to convert a logical address into a physical address: the first is called the segmentation unit, and the second is called the paging unit.

Figure 1. Converting address spaces using the two units

Let's start with the segment control unit model.

Overview of the segment control unit model

The basic idea behind this segmented model is to manage memory in segments. In essence, each segment is its own address space. A segment consists of two elements:
    • The base address, which contains the address of some physical memory location
    • The length value, which specifies the length of the segment
A segment address has two components: a segment selector and an offset into the segment. The segment selector specifies which segment to use (that is, the base and length values), and the offset specifies the location of the actual memory address relative to the base. The physical address of the actual memory location is the sum of the base value and the offset. If the offset exceeds the length of the segment, a protection violation is raised. This can be summarized as:

    segmentation unit: segment : offset, also written as segment identifier : offset

Each segment is identified by a 16-bit field called the segment identifier or segment selector. The x86 hardware includes several programmable registers called segment registers, which hold segment selectors. These registers are cs (code segment), ds (data segment), and ss (stack segment). Each segment identifier refers to a segment described by a 64-bit (8-byte) segment descriptor. Segment descriptors can be stored in the GDT (Global Descriptor Table) or in an LDT (Local Descriptor Table).

Figure 2. Relationship between segment descriptors and segment registers

Each time a segment selector is loaded into a segment register, the corresponding segment descriptor is loaded from memory into a matching non-programmable CPU register. Each segment descriptor is 8 bytes long and represents one segment in memory; descriptors are stored in the LDT or GDT. A segment descriptor entry contains a pointer to the first byte of the segment (the Base field) and a 20-bit value (the Limit field) that represents the size of the segment in memory. Other fields hold special attributes such as the privilege level and the segment type (cs or ds); the segment type is represented by a 4-bit Type field. Because non-programmable registers are used, the GDT or LDT is not consulted when translating a logical address into a linear address, which speeds up the conversion of memory addresses. A segment selector contains the following (a small decoding sketch follows the list):
    • A 13-bit index that identifies the corresponding segment descriptor entry in the GDT or LDT
    • The TI (Table Indicator) flag, which specifies whether the segment descriptor is in the GDT or the LDT: if the value is 0, the descriptor is in the GDT; if the value is 1, it is in the LDT.
    • The RPL (Requested Privilege Level), which defines the current privilege level of the CPU when the corresponding segment selector is loaded into a segment register.
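As a small illustration of the selector layout just described (a hedged sketch; the variable names are invented for this example and nothing here comes from a real header), the following C fragment unpacks a 16-bit selector into its index, TI, and RPL fields and computes the byte offset of the descriptor inside the GDT or LDT:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t selector = 0x0010;             /* example: index 2, TI = 0, RPL = 0 */

        unsigned index = selector >> 3;         /* high 13 bits: descriptor index    */
        unsigned ti    = (selector >> 2) & 1;   /* table indicator: 0 = GDT, 1 = LDT */
        unsigned rpl   = selector & 3;          /* requested privilege level         */

        /* Each descriptor is 8 bytes, so it lives at table base + index * 8. */
        printf("index=%u ti=%u rpl=%u descriptor offset=0x%x\n",
               index, ti, rpl, index * 8);
        return 0;
    }

With the GDT stored at 0x00020000 as in the example that follows, this selector's descriptor would sit at 0x00020000 + 0x10 = 0x00020010.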
Because the size of a segment descriptor is 8 bytes, its relative address within the GDT or LDT can be computed by multiplying the high 13 bits of the segment selector (the index) by 8. For example, if the GDT is stored at address 0x00020000 and the selector's Index field is 2, the address of the corresponding segment descriptor is (2 * 8) + 0x00020000. The total number of segment descriptors that can be stored in the GDT is 2^13 - 1, that is, 8191. Figure 3 shows how a linear address is obtained from a logical address.

Figure 3. Getting a linear address from a logical address

So what is different in a Linux environment?

The Linux segment control unit

Linux modifies this model slightly. Note that Linux uses the segmented model in a limited way, primarily for compatibility reasons. In Linux, all segment registers point to the same segment address range; in other words, each uses the same set of linear addresses. This keeps the number of segment descriptors Linux needs small, so all descriptors can be kept in the GDT. This model has two advantages:
    • Memory management is simpler when all processes use the same segment register values (when they share the same linear address space).
    • Portability can be achieved on most architectures. Some RISC processors can also support segmentation in this limited way.
Figure 4 shows the modifications to the model.

Figure 4. In Linux, segment registers point to the same address set

Segment descriptors

Linux uses the following segment descriptors:
    • Kernel code segment
    • Kernel data segment
    • User code segment
    • User data segment
    • TSS segment
    • Default LDT segment
These segment descriptors are described in detail below. The values in the kernel code segment descriptor in the GDT are as follows:
    • Base = 0x00000000
    • Limit = 0xFFFFFFFF (2^32-1) = 4GB
    • G (granularity flag) = 1, which indicates that the size of the segment is in page units
    • S = 1, representing normal code or data segments
    • Type = 0xa, which represents a code segment that can be read and executed
    • DPL value = 0, which indicates kernel mode
The linear address space associated with this segment is 4 GB; S = 1 and Type = 0xa denote a code segment. The selector is held in the cs register. The macro used in Linux to access this segment selector is __KERNEL_CS. The kernel data segment descriptor has values similar to those of the kernel code segment; the only difference is that the Type field has the value 2, indicating a data segment, and the selector is stored in the ds register. The macro used in Linux to access this segment selector is __KERNEL_DS. The user code segment is shared by all processes in user mode. The values of the corresponding segment descriptor stored in the GDT are as follows (a descriptor-packing sketch follows this list):
    • Base = 0x00000000
    • Limit = 0xFFFFFFFF
    • G = 1
    • S = 1
    • Type = 0xa, which represents a code segment that can be read and executed
    • DPL = 3, indicating user mode
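To make the descriptor fields above concrete, here is a hedged user-space sketch that packs base, limit, type, S, DPL, and G into the 8-byte x86 descriptor layout. Note that the descriptor's Limit field is 20 bits; with G = 1 (page granularity) the value 0xFFFFF covers the full 4 GB quoted above. This is only an illustration of the bit layout, not how Linux actually builds its GDT, and the helper name is invented:

    #include <stdint.h>
    #include <stdio.h>

    /* Pack an x86 segment descriptor from its fields.
     * base: 32-bit segment base; limit: 20-bit limit (in 4 KB units when g = 1);
     * type: 4-bit type; s: code/data bit; dpl: privilege level; g: granularity. */
    static uint64_t make_descriptor(uint32_t base, uint32_t limit,
                                    unsigned type, unsigned s, unsigned dpl, unsigned g)
    {
        uint64_t d = 0;
        d |= (uint64_t)(limit & 0xFFFF);                 /* limit bits 15..0    */
        d |= (uint64_t)(base & 0xFFFF)        << 16;     /* base  bits 15..0    */
        d |= (uint64_t)((base >> 16) & 0xFF)  << 32;     /* base  bits 23..16   */
        d |= (uint64_t)(type & 0xF)           << 40;     /* Type                */
        d |= (uint64_t)(s & 1)                << 44;     /* S: normal code/data */
        d |= (uint64_t)(dpl & 3)              << 45;     /* DPL                 */
        d |= (uint64_t)1                      << 47;     /* P: segment present  */
        d |= (uint64_t)((limit >> 16) & 0xF)  << 48;     /* limit bits 19..16   */
        d |= (uint64_t)1                      << 54;     /* D/B: 32-bit segment */
        d |= (uint64_t)(g & 1)                << 55;     /* G: page granularity */
        d |= (uint64_t)((base >> 24) & 0xFF)  << 56;     /* base  bits 31..24   */
        return d;
    }

    int main(void)
    {
        /* Kernel code segment: base 0, limit 0xFFFFF pages, Type 0xa, S = 1, DPL = 0, G = 1. */
        printf("kernel cs: 0x%016llx\n",
               (unsigned long long)make_descriptor(0x00000000, 0xFFFFF, 0xA, 1, 0, 1));
        /* User code segment: identical except DPL = 3. */
        printf("user cs:   0x%016llx\n",
               (unsigned long long)make_descriptor(0x00000000, 0xFFFFF, 0xA, 1, 3, 1));
        return 0;
    }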
In Linux, we can access this segment selector through the __USER_CS macro. In the user data segment descriptor, the only field that differs is Type, which is set to 2, meaning the segment is defined as readable and writable data. The macro used in Linux to access this segment selector is __USER_DS. In addition to these segment descriptors, the GDT contains two additional descriptors for each process created: a TSS segment and an LDT segment. Each TSS segment represents a different process. The TSS holds the hardware context information of each CPU, which helps it switch contexts effectively; for example, on a user-to-kernel mode switch the x86 CPU obtains the address of the kernel-mode stack from the TSS. Each process has its own TSS descriptor, stored in the GDT. The values of these descriptors are as follows:
    • Base = &tss (the address of the TSS field of the corresponding process descriptor, e.g. &tss_struct); this is defined in the schedule.h file of the Linux kernel
    • Limit = 0xeb (the size of the TSS segment is 236 bytes)
    • Type = 9 or 11
    • DPL = 0; user mode does not have access to the TSS. The G flag is cleared.
All processes share the default LDT segment. By default it contains a null segment descriptor, and this default LDT segment descriptor is stored in the GDT. The LDT generated by Linux is 24 bytes in size and by default holds 3 entries.

To calculate the maximum number of entries in the GDT, you must first understand NR_TASKS (the variable that determines the number of concurrent processes Linux can support; the default value in the kernel source code is 512, allowing a maximum of 256 concurrent connections to the same instance). Of the 8,192 segment descriptors that fit in the GDT, Linux uses 6 for its own segments, another 4 are used for APM (Advanced Power Management) features, and a few entries remain unused; the number of entries left for processes works out to 8180. With 8180 entries available in the GDT:

    2 * NR_TASKS = 8180
    NR_TASKS = 8180 / 2 = 4090

Why 2 * NR_TASKS? Because for each process that is created, not only is a TSS descriptor loaded (to maintain context-switching information), but an LDT descriptor is loaded as well.

This limitation on the number of processes on the x86 architecture was an artifact of Linux 2.2. Since the 2.4 kernel it no longer exists, in part because hardware context switching (which necessarily uses a per-process TSS) was replaced by software process switching.

Next, let's take a look at the paging model.

Overview of the paging model

The paging unit is responsible for translating linear addresses into physical addresses (see Figure 1). Linear addresses are grouped into pages. These linear addresses are contiguous, and the paging unit maps such contiguous ranges of memory onto corresponding contiguous ranges of physical addresses, called page frames. Note that the paging unit views RAM as partitioned into fixed-size page frames. Because of this, paging has the following advantages:
    • The access rights defined for a page apply to the whole set of linear addresses that make up the page
    • A page is the same size as a page frame
The data structure that maps pages onto page frames is called a page table. Page tables are stored in main memory and must be properly initialized by the kernel before the paging unit is enabled. Figure 5 shows the page table.

Figure 5. A page table maps pages onto page frames

Note that the set of addresses contained in Page1 corresponds exactly to the set of addresses contained in Page Frame1. In Linux, the paging unit is used far more than the segmentation unit. As mentioned earlier for the Linux segmentation model, every segment descriptor uses the same set of linear addresses, which minimizes the need to use the segmentation unit to convert logical addresses into linear addresses. By relying on the paging unit rather than the segmentation unit, Linux greatly simplifies memory management and portability across different hardware platforms.

Fields used during paging

Let's look at the fields used to specify paging in the x86 architecture, which helps in implementing paging in Linux. The paging unit takes as input the linear address produced by the segmentation unit and divides it into the following three fields:
    • Directory, represented by the 10 most significant bits (MSBs; the MSB is sometimes called the leftmost bit).
    • Table, represented by the 10 bits in the middle.
    • Offset, represented by the 12 least significant bits (LSBs; the LSB, sometimes called the rightmost bit, is the bit position that determines whether the number is odd or even, much like the rightmost, least-weighted digit of a decimal number).
The process of translating a linear address into the corresponding physical location consists of two steps. The first step uses a translation table called the page directory (going from the page directory to the page table); the second step uses a translation table called the page table (the page table entry plus the offset yields the page frame). Figure 6 illustrates this process.

Figure 6. Paging fields

To begin, the physical address of the page directory is loaded into the CR3 register. The Directory field of the linear address determines the entry in the page directory that points to the appropriate page table. The Table field determines the entry in the page table that contains the physical address of the page frame for the page. The Offset field determines the position within that page frame. Because the Offset field is 12 bits, each page contains 4 KB of data. The calculation of the physical address can be summarized as follows (a small C sketch follows these steps):
    1. CR3 + Page Directory (10 MSBs) = points to table_base
    2. table_base + Page Table (10 middle bits) = points to page_base
    3. page_base + Offset = physical address (the location within the page frame)
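The three steps above can be mirrored in a few lines of C. This is a user-space sketch for illustration only; it merely extracts the Directory, Table, and Offset fields of a 32-bit linear address and cannot, of course, dereference real page tables:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t linear = 0xC0123456;                /* an arbitrary example linear address */

        uint32_t dir    = (linear >> 22) & 0x3FF;    /* 10 MSBs: index into the page directory */
        uint32_t table  = (linear >> 12) & 0x3FF;    /* 10 middle bits: index into the page table */
        uint32_t offset =  linear        & 0xFFF;    /* 12 LSBs: offset within the 4 KB page frame */

        printf("dir=%u table=%u offset=0x%03x\n", dir, table, offset);
        /* The hardware then does:
         *   pde  = page_directory[dir]        (page directory base comes from CR3)
         *   pte  = page_table[table]          (page table base comes from the PDE)
         *   phys = page_frame_base + offset   (page frame base comes from the PTE)  */
        return 0;
    }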
Because the Page Directory and Page Table fields are both 10 bits, they can each address 1024 entries, and the Offset can address 2^12 bytes (4096). The addressable range of a page directory is therefore 1024 * 1024 * 4096 (equal to 2^32 memory cells, that is, 4 GB). So, on the x86 architecture, the total addressable upper limit is 4 GB.

Extended paging is obtained by removing the page table level of translation; the linear address is then divided between the page directory (the 10 MSBs) and the offset (the 22 LSBs). The 22 LSBs form the 4 MB boundary (2^22) of the page frame. Extended paging coexists with the normal paging model and is used to map large, contiguous linear address ranges onto their corresponding physical addresses. The operating system removes the page table level to provide extended paging; this is enabled by setting the PSE (page size extension) flag. PSE-36 extends the physical address to 36 bits, supporting 4 MB pages while keeping 4-byte page directory entries, which provides a way to address more than 4 GB of physical memory without requiring extensive changes to the operating system. This approach has some practical limitations for on-demand paging.

The paging model in Linux

Paging in Linux is similar to normal paging, but Linux introduces a three-level page table mechanism, consisting of the following (a sketch of walking these levels follows the list):
    • The Page Global Directory, or PGD, is the top level of the multi-level page table. Each level of the page table handles a different size of memory; this global directory can handle 4 MB regions. Each entry points to a lower-level table of a smaller directory, so the PGD is a directory of page tables. When code traverses this structure (as some drivers do), it is said to be "walking" the page table.
    • The Page Middle Directory, or PMD, is the middle level of the page table. On the x86 architecture the PMD does not exist in hardware; it is folded into the PGD in the kernel code.
    • The Page Table Entry, or PTE, is the lowest level of the page table; it deals directly with pages (see PAGE_SIZE). A PTE contains the physical address of a page, along with bits indicating whether the entry is valid and whether the related page is present in physical memory.
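The "walking" of the page table mentioned above is done in kernel code with a family of helper macros. The following is only a sketch, written against the older three-level layout this article describes; the exact names and signatures vary between kernel versions (for example, pmd_offset() takes a pgd_t * here, while later kernels insert PUD and P4D levels), so treat it as illustrative rather than as a drop-in function:

    /* Sketch of a three-level page-table walk inside the kernel (illustrative only). */
    #include <linux/mm.h>
    #include <asm/pgtable.h>

    static pte_t *walk_page_tables(struct mm_struct *mm, unsigned long addr)
    {
        pgd_t *pgd;
        pmd_t *pmd;
        pte_t *pte;

        pgd = pgd_offset(mm, addr);          /* entry in the Page Global Directory */
        if (pgd_none(*pgd) || pgd_bad(*pgd))
            return NULL;

        pmd = pmd_offset(pgd, addr);         /* entry in the Page Middle Directory
                                                (folded into the PGD on plain x86-32) */
        if (pmd_none(*pmd) || pmd_bad(*pmd))
            return NULL;

        pte = pte_offset_map(pmd, addr);     /* entry in the page table proper */
        if (!pte_present(*pte)) {            /* the page is not in physical memory */
            pte_unmap(pte);
            return NULL;
        }
        return pte;                          /* caller must pte_unmap() when finished */
    }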
To support large memory regions, Linux uses this three-level paging mechanism. When large memory regions are not needed, the PMD can be defined with a size of 1, falling back to a two-level paging scheme. The paging levels are optimized at compile time: two-level and three-level paging are enabled (using the same code) by enabling or disabling the intermediate directory. On 32-bit processors the PMD is folded away, while 64-bit processors use the additional level.

Figure 7. Three-level paging

As you know, in a 64-bit processor:
    • The 21 most significant bits are reserved and unused
    • The 13 least significant bits are used for the page offset
    • The remaining 30 bits are divided into:
    • 10 bits for the page table
    • 10 bits for the page global directory
    • 10 bits for the page middle directory
We can see from this scheme that 43 bits are actually used for addressing. Therefore, in a 64-bit processor, the memory that can be used effectively is 2^43.

Each process has its own page directory and page tables. To refer to a page frame that contains actual user data, the operating system (on x86) first loads the PGD into the CR3 register. Linux stores the contents of the CR3 register in the TSS segment. Thereafter, whenever a new process is scheduled onto the CPU, a new value is loaded into CR3 from the TSS segment, causing the paging unit to reference the correct set of page tables.

Each entry in the PGD points to a page frame containing a set of PMD entries; each entry in the PMD points to a page frame containing a set of PTE entries; and each PTE points to a page frame containing the actual user data. If the page being looked up has been swapped out, a swap entry is stored in the PTE so that, on a page fault, the page frame to be reloaded into memory can be located.

Figure 8 illustrates how offsets are added successively at each level of the page tables to reach the corresponding page frame entry. The offsets are obtained by taking the linear address output by the segmentation unit and dividing it up. Dividing a linear address into its page table components requires a set of macros in the kernel; this article does not cover those macros in detail, but Figure 8 gives a simple view of how the linear address is divided.

Figure 8. Linear addresses with different address lengths

Reserved page frames

Linux reserves several page frames for kernel code and data structures; these pages are never swapped out to disk. Linear addresses from 0x0 to 0xC0000000 (PAGE_OFFSET) can be referenced by both user code and kernel code. Linear addresses from PAGE_OFFSET to 0xFFFFFFFF can be accessed only by kernel code. This means that, of the 4 GB address space, only 3 GB can be used by a user application.

Enabling paging

The paging mechanism used by Linux processes consists of two phases:
    • At startup, the system sets up page tables for 8 MB of physical memory.
    • In the second phase, the mapping for the remaining physical addresses is completed.
During the startup phase, the startup_32() call is responsible for initializing the paging mechanism; it is implemented in the arch/i386/kernel/head.S file. The 8 MB mapping is established above PAGE_OFFSET. The initialization begins with a statically defined, compile-time array (swapper_pg_dir), which is placed at a specific address (0x00101000) at compile time. This operation establishes page tables for two pages, pg0 and pg1, which are statically defined in the code. These page frames default to 4 KB in size unless the page size extension is set (see the section on extended paging for more about PSE). The address of this global array is stored in the cr3 register, and this can be considered the first stage of setting up the paging unit for Linux processes. The remaining page entries are completed in the second phase.

The second phase is completed by the call to paging_init(). On a 32-bit x86 architecture, RAM is mapped between PAGE_OFFSET and the address representing the 4 GB upper bound (0xFFFFFFFF). This means that approximately 1 GB of RAM can be mapped at Linux startup, which is what happens by default. However, if HIGHMEM is configured, more than 1 GB of memory can be made visible to the kernel; keep in mind that such mappings are temporary and are established by calling kmap().

Physical memory zones

I have already shown that (on a 32-bit architecture) the Linux kernel divides virtual memory in a 3:1 ratio: 3 GB of virtual memory for user space and 1 GB for kernel space. The kernel code and its data structures must fit in that 1 GB address space, but the biggest consumer of this address space is the virtual mapping of physical memory. This is because the kernel cannot manipulate a piece of memory unless it is mapped into its own address space. The maximum amount of memory the kernel can handle is therefore the amount that fits in the kernel's virtual address space, minus the space needed for the kernel code itself. As a result, an x86-based Linux system could work with slightly less than 1 GB of physical memory at most.

To cater to large numbers of users, support more memory, improve performance, and establish an architecture-independent way of describing memory, the Linux memory model had to be improved. To achieve these goals, the newer model partitions memory into banks allocated to each CPU. Each bank is called a node, and each node is divided into zones. A zone, which represents a range of memory, is further classified into the following types:
    • ZONE_DMA (0-16 MB): the range of memory in the low physical memory area required by ISA/PCI devices.
    • ZONE_NORMAL (16-896 MB): the range of physical memory directly mapped by the kernel into the upper part of the linear address space. All kernel operations can take place only using this memory zone, so it is the performance-critical zone.
    • ZONE_HIGHMEM (896 MB and above): the rest of the available memory, which the kernel cannot map directly.
The concept of a node is implemented in the kernel with the struct pglist_data structure. A zone is described by the struct zone_struct structure. Physical page frames are represented by the struct page structure, and all of these structures are kept in the global array mem_map, which is stored at the beginning of ZONE_NORMAL. The basic relationships between nodes, zones, and page frames are shown in Figure 9.

Figure 9. Relationships between nodes, zones, and page frames

The high memory zone appeared in kernel memory management when support was added for the Pentium II virtual memory extension (accessing up to 64 GB of memory via PAE, the Physical Address Extension, on 32-bit systems) and for 4 GB of physical memory (also on 32-bit systems). It is a concept that applies to the x86 and SPARC platforms. Typically this extra memory is accessed by mapping ZONE_HIGHMEM into ZONE_NORMAL with kmap(). Note that it is unwise to use very large amounts of memory on a 32-bit architecture even with PAE enabled. (PAE is an Intel-provided memory address extension mechanism that allows the processor to expand the number of bits used to address physical memory from 32 to 36, with application support provided through the Address Windowing Extensions API in the host operating system.)

The management of this physical memory area is performed by a zone allocator. It is responsible for dividing memory into zones and treats each zone as a unit for allocation. Any particular allocation request uses a list of zones from which the kernel may attempt the allocation, in order of preference. For example (a short allocation sketch follows this list):
    • A request for a user page can be satisfied first from the "normal" zone (ZONE_NORMAL);
    • if that fails, the allocation is attempted from ZONE_HIGHMEM;
    • if that also fails, it is attempted from ZONE_DMA.
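Here is a hedged kernel-side sketch of how such requests are expressed with the page allocator (alloc_pages() and friends); it is not taken from the article, and flag semantics differ slightly across kernel versions. It contrasts a page that may live in high memory with a page that must come from the DMA zone:

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/highmem.h>

    static void zone_allocation_example(void)
    {
        /* A page intended for user data: high memory is acceptable, so the zone
         * list consulted includes ZONE_HIGHMEM as well as ZONE_NORMAL and ZONE_DMA. */
        struct page *user_page = alloc_pages(GFP_HIGHUSER, 0);

        /* A buffer for an ISA-style DMA device: only ZONE_DMA may be used. */
        struct page *dma_page = alloc_pages(GFP_KERNEL | GFP_DMA, 0);

        if (user_page) {
            void *va = kmap(user_page);   /* temporary mapping in case the page is in high memory */
            /* ... use va ... */
            kunmap(user_page);
            __free_pages(user_page, 0);
        }
        if (dma_page)
            __free_pages(dma_page, 0);
    }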
The zone list for such an allocation consists of the ZONE_NORMAL, ZONE_HIGHMEM, and ZONE_DMA zones, in that order. Requests for DMA pages, on the other hand, may be satisfied only from the DMA zone, so the zone list for such a request contains only the DMA zone.

Conclusion

Memory management is a large, complex, and time-consuming undertaking, and a hard one, because we need to craft a model of how the system behaves in a real multi-programmed environment. Interacting components such as scheduling, paging behavior, and multi-process interactions present quite a challenge. I hope this article helps you understand some of the basics needed to take on the Linux memory management challenge and gives you a starting point.

Memory-related concepts in Linux and several ways to request memory

1 Physical addresses and MMU-related concepts

Intel x86 has an I/O space separate from the memory space, accessed with the in/out instructions. Most ARM and PowerPC processors have only a memory space. The memory space is accessed through addresses and pointers; a program, and the variables it uses while running, live in the memory space.

Physical address example:

    unsigned char *p = (unsigned char *) 0xf000ff00;
    *p = 1;
    • For x86, the address 0xf000ff00 is a 16-bit segment address plus a 16-bit offset, that is, 0xf000 * 16 + 0xff00 = 0xf0000 + 0xff00 = 0xfff00. For processors such as ARM that do not use segment addressing, it is simply the address 0xf000ff00 in the flat space. A classic x86 example of using an absolute address is jumping to the reset entry (a soft reboot):

        typedef void (*lpFunction)();                    /* define a function pointer type */
        lpFunction lpReset = (lpFunction) 0xF000FFF0;    /* obtain a pointer to the reset address */
        lpReset();                                       /* jump to the function at that address */
    • The MMU (Memory Management Unit) provides virtual-to-physical address mapping, memory access permission protection, and cache control; it is the hardware support for virtual memory management. The kernel uses the MMU to give users the illusion of a large usable memory space and to let developers write programs without worrying about physical memory capacity. TLB: Translation Lookaside Buffer. It is the core of the MMU: it caches a small number of virtual-to-physical translations and is therefore a cache of the translation tables, also known as the "fast table". TTW: Translation Table Walk. When the TLB does not hold the needed translation, the hardware walks the in-memory translation tables (often a multi-level page table): starting from the page table base address register it locates the page table and descends level by level until the target entry is found, yielding the virtual-to-physical mapping; on a successful TTW the result is written into the TLB. After that, if the access permissions are satisfied, the cache or memory is accessed to fetch the data; if not, the MMU raises a memory exception to the ARM core. The Linux three-level page table:
        • PGD, the Page Global Directory;
        • PMD, the Page Middle Directory;
        * entries of the first two are internally called PDEs (page directory entries);
        • PTE, the Page Table Entry (one entry per physical page).
The related macros can be found in the kernel headers. Obtaining the PTE for a virtual address is a three-level lookup (page table walk):
    • from the structure that describes the process's resources (its memory descriptor) and the virtual address to be accessed, obtain the first-level page table entry (the PGD entry);
    • from that, obtain the second-level page table entry (the PMD entry);
    • from that, obtain the target page table entry (the PTE).
More details can be found in reference [1]. Note: Linux 2.6 supports processors without an MMU; it integrates uClinux so that MMU-less systems are supported, for compatibility with embedded systems.

2 Linux memory management

On a processor with an MMU, each process can access an address space of up to 4 GB: 0-3 GB is user space and 3-4 GB is kernel space, with the boundary at the 3 GB mark (0xC0000000). Each process has its own page tables, independent of the others. Kernel space is mapped by the kernel itself and does not change as processes switch. The 1 GB of kernel space is divided into the following areas (a small sketch that prints some of these boundaries follows the list):
    • The physical memory mapping area (0-896 MB): linearly mapped, ordinary memory. When physical memory exceeds 896 MB, the excess is called high memory.
    • The region above 896 MB, which contains:
        • the vmalloc allocator area (with isolation gaps before and after it; addresses VMALLOC_START to VMALLOC_END);
        • the high-memory mapping area (high memory can only be mapped here; it starts at PKMAP_BASE) -- see reference [2] for more about high memory;
        • the dedicated (fixed) page mapping area (FIXADDR_START to FIXADDR_TOP), configured as needed;
        • the reserved area (from FIXADDR_TOP up to 4 GB).
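One quick way to see some of these boundaries on a particular 32-bit kernel is to print the relevant constants from a trivial module. This is a hedged sketch: PAGE_OFFSET, high_memory, VMALLOC_START, and VMALLOC_END are widely available, but header locations and the existence of macros such as PKMAP_BASE and FIXADDR_START depend on the architecture and configuration, so those are omitted here:

    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    static int __init layout_init(void)
    {
        pr_info("PAGE_OFFSET   = 0x%lx\n", (unsigned long)PAGE_OFFSET);
        pr_info("high_memory   = %p\n", high_memory);    /* end of the direct (linear) mapping */
        pr_info("VMALLOC_START = 0x%lx\n", (unsigned long)VMALLOC_START);
        pr_info("VMALLOC_END   = 0x%lx\n", (unsigned long)VMALLOC_END);
        return 0;
    }

    static void __exit layout_exit(void) { }

    module_init(layout_init);
    module_exit(layout_exit);
    MODULE_LICENSE("GPL");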
When memory exceeds 4 GB, the CPU's extended paging (PAE) mode, which provides 64-bit page directory entries, must be used to access the higher physical memory; this requires CPU support.

3 Memory access: requesting memory

The memory allocation interfaces discussed below are the following (a combined usage sketch follows this list):
    • malloc/free: user space
    • kmalloc/kfree: kernel space, physically contiguous
    • __get_free_pages/free_pages: kernel space, physically contiguous
    • vmalloc/vfree: kernel space, physically discontiguous but virtually contiguous
    • slab: kmem_cache_create/kmem_cache_destroy
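The items in this list can be tied together in one illustrative kernel module. This is a hedged sketch rather than production code: error handling is minimal, the names demo_* are invented for the example, and the five-argument kmem_cache_create() form used here differs from the older six-argument form (with constructor and destructor) quoted later in this article:

    #include <linux/module.h>
    #include <linux/slab.h>
    #include <linux/gfp.h>
    #include <linux/vmalloc.h>

    struct demo_obj {                        /* hypothetical object type for the slab cache */
        int id;
        char payload[60];
    };

    static struct kmem_cache *demo_cache;

    static int __init alloc_demo_init(void)
    {
        void *kbuf, *pages, *vbuf;
        struct demo_obj *obj;

        /* kmalloc: physically contiguous, a fixed offset from the physical address. */
        kbuf = kmalloc(128, GFP_KERNEL);

        /* __get_free_pages: 2^order pages from the page allocator (order 1 = 2 pages). */
        pages = (void *)__get_free_pages(GFP_KERNEL, 1);

        /* vmalloc: virtually contiguous, possibly physically scattered; for large buffers. */
        vbuf = vmalloc(1 << 20);

        /* slab cache: repeated allocations of objects of the same size. */
        demo_cache = kmem_cache_create("demo_obj", sizeof(struct demo_obj), 0, 0, NULL);
        obj = demo_cache ? kmem_cache_alloc(demo_cache, GFP_KERNEL) : NULL;

        /* Release everything, pairing each allocation with its matching free. */
        if (obj)
            kmem_cache_free(demo_cache, obj);
        if (demo_cache)
            kmem_cache_destroy(demo_cache);
        vfree(vbuf);
        if (pages)
            free_pages((unsigned long)pages, 1);   /* same order as at allocation time */
        kfree(kbuf);
        return 0;
    }

    static void __exit alloc_demo_exit(void) { }

    module_init(alloc_demo_init);
    module_exit(alloc_demo_exit);
    MODULE_LICENSE("GPL");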
1 Dynamic memory allocation in user space requests space on the heap and must be released by the requester. Take care to keep allocations and frees paired to avoid memory leaks. Note the relationship between the C library's malloc() and the underlying Linux system calls used to implement it.

2 Dynamic memory allocation in kernel space. Memory requested with kmalloc() lies in the physical memory mapping area and is physically contiguous; it differs from the true physical address only by a fixed offset. Its arguments are a size and a flag; GFP_KERNEL indicates that the allocation is made on behalf of a process running in kernel space. Its underlying implementation relies on __get_free_pages(). With this flag, if the request cannot be satisfied immediately, the process sleeps waiting for pages, which may block; therefore GFP_KERNEL must not be used to request memory while holding a spinlock or in interrupt context. In interrupt handlers, tasklets, and kernel timers (non-process contexts that must not block), memory should be requested with GFP_ATOMIC, which returns immediately if no free pages are available. The other flag bits are defined in include/linux/gfp.h. The memory is released with kfree().

__get_free_pages() is the lowest-level way the Linux kernel obtains free memory. The underlying allocator manages free memory in blocks of 2^n pages, so page requests are made in units of pages: get_zeroed_page() returns a pointer to a new, zeroed page; __get_free_page() returns a new page that is not zeroed (it is in fact the next function with order 0); and __get_free_pages() obtains multiple pages, 2^order of them, not zeroed. The maximum order is 10 or 11, depending on the hardware. The three preceding functions are implemented by calling alloc_pages(), which can serve allocations both for kernel space and for user space. When releasing with free_pages(), take special care that the order matches the one used at allocation time.

vmalloc() obtains a contiguous region in virtual address space, in the dedicated vmalloc area; the underlying physical memory is not necessarily contiguous. It is used to allocate memory for large, sequential buffers. It is much more expensive than the GFP page-based allocations because new page tables must be built, so it is not appropriate for small allocations. The memory is released with vfree().

slab: allocating in units of pages easily produces internal fragmentation. At the same time, if memory for the same type of object can be handed out repeatedly from the same area, with its data structures retained between uses, efficiency improves. This leads to the slab concept: back-end caches in which any number of objects of the same size reside. A cache is created with:

    struct kmem_cache *kmem_cache_create(const char *name,
        size_t size,        /* the size in bytes of each object */
        size_t align,
        unsigned long flags,
        void (*ctor)(void *, struct kmem_cache *, unsigned long),
        void (*dtor)(void *, struct kmem_cache *, unsigned long));
    • Allocate from a slab cache: void *kmem_cache_alloc(struct kmem_cache *cachep, gfp_t flags); -- carves an object out of the previously created slab cache and returns a pointer to it.
    • Release an object back to the slab cache: void kmem_cache_free(struct kmem_cache *cachep, void *objp);
    • Destroy the entire slab cache: int kmem_cache_destroy(struct kmem_cache *cachep);
    • The allocation and usage of slab caches can be observed on a running system (for example via /proc/slabinfo). Note that the slab layer is itself built on top of the page allocator; it simply carves pages into small units to reduce internal fragmentation and ease management. A memory pool is another back-end caching technique for allocating large numbers of small objects; the related functions include mempool_create(), mempool_alloc(), mempool_free(), and mempool_destroy(). Conversion between virtual and physical addresses: a kernel virtual address is converted to a physical address with virt_to_phys(), whose implementation is architecture-dependent, and a physical address is converted back with phys_to_virt(); these apply only to the conventional (directly mapped) memory region.

References
[1] Three-level page tables, http://blog.csdn.net/myarrow/article/details/8624687
[2] High memory, http://ilinuxkernel.com/?p=1013

Note: the source material is "Linux Device Driver Development Details" (second edition), together with reading notes and web materials; the original source of some of the information is unknown, and it is shared here for the convenience of the author and others. If there is any infringement, please give timely notice; apologies for any inconvenience caused. Please indicate the source when reprinting. Terrence Zhou.
