Linux Memory Source Analysis: Memory Reclaim (Anonymous Page Reverse Mapping)

Source: Internet
Author: User
Tags: anonymous pages, data structures
Overview

Having covered memory compression, I recently looked at the memory reclaim code and found there is a lot of material, so it needs to be split across several articles. This first one covers the reverse mapping of anonymous pages. Anonymous pages are used mainly for a process address space's heap and stack, and for private anonymous shared memory (shared between related processes). The linear regions (VMAs) that these anonymous pages belong to are called anonymous linear regions; they map only memory and do not map any file on disk. The reverse mapping of anonymous pages plays a significant role in reclaiming them. For memory reclaim, the kernel maintains 5 LRU (least recently used) lists for the pages of each zone: LRU_INACTIVE_ANON, LRU_ACTIVE_ANON, LRU_INACTIVE_FILE, LRU_ACTIVE_FILE, and LRU_UNEVICTABLE.

  • LRU_INACTIVE_ANON: holds the inactive anonymous pages of the zone; pages are taken from the head of this list each time, and the pages on it are moved here from the LRU_ACTIVE_ANON list. Its length is generally kept at about 25% of the zone's anonymous pages.

  • LRU_ACTIVE_ANON: holds the active anonymous pages of the zone. When the LRU_INACTIVE_ANON list falls below 25% of the zone's anonymous pages, some pages are moved from the tail of LRU_ACTIVE_ANON to the head of LRU_INACTIVE_ANON.

  • LRU_INACTIVE_FILE: holds the inactive file-backed pages of the zone, similar to LRU_INACTIVE_ANON.

  • LRU_ACTIVE_FILE: holds the active file-backed pages of the zone, similar to LRU_ACTIVE_ANON.

  • LRU_UNEVICTABLE: holds pages that cannot be reclaimed, typically pages locked in memory via mlock().

This article will not go into these LRU lists in detail; it focuses first on the reverse mapping of anonymous pages. What is linked into the LRU_INACTIVE_ANON and LRU_ACTIVE_ANON lists are the page descriptors of physical page frames. When memory reclaim runs, the reclaim function scans the pages in the LRU_INACTIVE_ANON list, writes a portion of them out to swap, and then frees the physical page frames. The problem is that some processes have such a page mapped in their page tables, and often more than one process maps it. Before a page can be swapped out, every page table that maps it must be updated. Anonymous page reverse mapping exists for exactly this scenario: starting from the page descriptor of a physical page frame, it can locate all the anonymous linear regions (VMAs), and thus all the processes, that map this page. The kernel then modifies the page table entries of those processes to mark that the page has been swapped out of memory, so each process can handle it (via a page fault) the next time it accesses the page.

Data Structure

For reverse mapping, a few data structures need to be described: the memory descriptor struct mm_struct, the linear region (VMA) descriptor struct vm_area_struct, the page descriptor struct page, the anonymous linear region descriptor struct anon_vma, and the anonymous linear region chain node struct anon_vma_chain.

Each process has its own memory descriptor struct mm_struct, except for kernel threads (which borrow the mm_struct of the previously running process) and lightweight processes (which use the parent process's mm_struct). For reverse mapping, the fields of mm_struct we care about most are the following:

/* Memory descriptor; each process has one, except kernel threads (which use
 * the mm_struct of the previously scheduled process) and lightweight
 * processes (which share the parent's mm_struct).
 * All memory descriptors are linked in a doubly linked list; the first
 * element is init_mm, the memory descriptor of process 0 used during
 * initialization. */
struct mm_struct {
    /* Head of the list of VMA objects, sorted by ascending linear address;
     * includes both anonymous-mapping and file-mapping linear regions */
    struct vm_area_struct *mmap;        /* list of VMAs */
    /* Red-black tree of the VMAs of this process */
    struct rb_root mm_rb;
    u32 vmacache_seqnum;                /* per-thread vmacache */
#ifdef CONFIG_MMU
    /* Find a free linear address range in the process address space.
     * len: requested length of the range.
     * Returns the start address of the new range. */
    unsigned long (*get_unmapped_area)(struct file *filp,
                unsigned long addr, unsigned long len,
                unsigned long pgoff, unsigned long flags);
#endif
    /* First linear address assigned to anonymous or file memory mappings */
    unsigned long mmap_base;            /* base of mmap area */
    unsigned long mmap_legacy_base;     /* base of mmap area in bottom-up allocations */
    unsigned long task_size;            /* size of task vm space */
    /* Highest end address among all VMAs */
    unsigned long highest_vm_end;       /* highest vma end address */
    /* Pointer to the page global directory */
    pgd_t *pgd;
    /* Secondary usage counter: the number of lightweight processes sharing
     * this mm_struct; all mm_users together count as only 1 in mm_count */
    atomic_t mm_users;                  /* initialized to 1 */
    /* Primary usage counter: when mm_count drops to 0, the mm_struct is
     * released */
    atomic_t mm_count;                  /* initialized to 1 */
    /* Number of page tables */
    atomic_long_t nr_ptes;
    int map_count;                      /* number of VMAs */
    /* Spin lock protecting the linear regions and the page tables */
    spinlock_t page_table_lock;
    /* Used to link this mm_struct into a doubly linked list */
    struct list_head mmlist;            /* list of maybe-swapped mm's; these are
                                         * globally strung together off
                                         * init_mm.mmlist, protected by
                                         * mmlist_lock */
    /* Maximum number of page frames ever owned by the process */
    unsigned long hiwater_rss;          /* high-watermark of RSS usage */
    /* Maximum number of pages ever present in the process's linear regions */
    unsigned long hiwater_vm;           /* high-water virtual memory usage */
    /* Size of the process address space in page frames */
    unsigned long total_vm;             /* total pages mapped */
    /* Number of pages that cannot be swapped out */
    unsigned long locked_vm;            /* pages that have PG_mlocked set */
    unsigned long pinned_vm;            /* refcount permanently increased */
    /* Number of pages in shared file memory mappings */
    unsigned long shared_vm;            /* shared pages */
    /* Number of pages in executable memory mappings */
    unsigned long exec_vm;              /* VM_EXEC & ~VM_WRITE */
    /* Number of user stack pages */
    unsigned long stack_vm;

    /* VM_GROWSUP/DOWN default flags */
    unsigned long def_flags;
    /* start_code: start of executable code;  end_code: end of executable code
     * start_data: start of initialized data; end_data: end of initialized data */
    unsigned long start_code, end_code, start_data, end_data;
    /* start_brk: start of the heap; brk: current end of the heap
     * start_stack: start address of the user-mode stack */
    unsigned long start_brk, brk, start_stack;
    /* arg_start/arg_end: start/end of command-line arguments
     * env_start/env_end: start/end of environment variables */
    unsigned long arg_start, arg_end, env_start, env_end;
#ifdef CONFIG_MEMCG
    /* Owning process */
    struct task_struct __rcu *owner;
#endif
    /* struct file of the executable file mapped in the code segment */
    struct file *exe_file;
    ......
};

Note the mmap linked list and the mm_rb red-black tree: all the VMAs of a process are linked into both the mmap list and the mm_rb red-black tree of its mm_struct. Both structures exist to make finding a VMA convenient: the list for ordered traversal, the tree for fast lookup by address.

Next, look at the VMA descriptor for linear regions. It distinguishes between anonymous-mapping linear regions and file-mapping linear regions, as follows:

/* Describes a linear region (VMA).
 * The kernel tries to merge a newly allocated linear region with the
 * adjacent existing ones: if two contiguous linear regions have matching
 * access rights, they can be merged.
 * Each linear region covers a set of consecutive page numbers (not page
 * frames); a page generates a page fault only when first accessed, and the
 * page frame is allocated in the fault handler. */
struct vm_area_struct {
    /* First linear address of the region */
    unsigned long vm_start;
    /* First linear address after the end of the region */
    unsigned long vm_end;
    /* The whole list is sorted by ascending address.
     * vm_next: next linear region in the list
     * vm_prev: previous linear region in the list */
    struct vm_area_struct *vm_next, *vm_prev;

    /* Node in the red-black tree that organizes the linear regions of the
     * owning memory descriptor */
    struct rb_node vm_rb;

    /* Size in bytes of the largest free gap in this VMA's subtree */
    unsigned long rb_subtree_gap;
    /* Memory descriptor this region belongs to */
    struct mm_struct *vm_mm;
    /* Initial value of the page table entry flags: when a page is added,
     * the kernel sets the flags in the corresponding PTE from this field.
     * The User/Supervisor flag in the PTE is always set to 1. */
    pgprot_t vm_page_prot;              /* access permissions of this VMA */
    /* Flags, see mm.h */
    unsigned long vm_flags;

    /* Links the VMA into the reverse-map structures of a file mapping:
     * the interval tree for linear mappings, or a list for nonlinear ones */
    union {
        struct {
            struct rb_node rb;
            unsigned long rb_subtree_last;
        } linear;
        struct list_head nonlinear;
    } shared;

    /* Head of this VMA's anon_vma_chain list, which links it to the
     * anon_vmas it belongs to. Anonymous MAP_PRIVATE mappings, heap, and
     * stack VMAs appear in such lists. If a VMA's anon_vma is NULL, its
     * anon_vma_chain must also be empty. */
    struct list_head anon_vma_chain;
    struct anon_vma *anon_vma;

    /* Operations for this linear region; special regions set this, by
     * default it is NULL */
    const struct vm_operations_struct *vm_ops;
    /* If this VMA maps a file, the offset (in pages) within the mapped
     * file. For an anonymous linear region, it is either 0 or the virtual
     * page frame number of the VMA start address (vm_start >> PAGE_SHIFT);
     * it is used when computing the reverse mapping of downward-growing
     * regions (stacks). */
    unsigned long vm_pgoff;
    /* The file object of the mapped file, or the struct file returned by
     * shmem shared memory; for an anonymous linear region this is NULL or
     * an anonymous file (related to swap?) */
    struct file *vm_file;
    /* Private data of this region */
    void *vm_private_data;              /* was vm_pte (shared mem) */
    ......
#ifndef CONFIG_MMU
    struct vm_region *vm_region;
#endif
#ifdef CONFIG_NUMA
    struct mempolicy *vm_policy;
#endif
};
