Computer Low-Level Knowledge Supplement (VI): Understanding the Page Cache and the Address Space (address_space)

Source: Internet
Author: User

The previous installment in this series, Computer Low-Level Knowledge Supplement (V), covered the block IO layer and the buffer cache. This article covers the page cache and the related address space (address_space) abstraction.

In the Linux 2.4 kernel, the buffer cache and the page cache coexisted, which meant data from the same file could appear in both caches at once, wasting physical memory. The Linux 2.6 kernel merged the two, using the page cache as the single cache and falling back to the buffer cache in only a few cases. The differences between the buffer cache and the page cache are discussed below; first, let's look at how the size of each is reported.

The current memory usage of the system is reported in /proc/meminfo. In its output:

Buffers is the size of the buffer cache.

Cached is the size of the page cache held in physical memory.

SwapCached is the size of page cache pages that also reside in the swap area on disk.

So the total page cache size = Cached + SwapCached.
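The Buffers, Cached and SwapCached fields can be pulled out of /proc/meminfo with a few lines of code. A minimal sketch (the sample text below is illustrative, not output from a real machine):

```python
# Simplified /proc/meminfo parser. The field names (Buffers, Cached,
# SwapCached) are the ones the Linux kernel actually reports.
def parse_meminfo(text):
    """Return a dict mapping field name -> size in kB."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        name, rest = line.split(":", 1)
        info[name.strip()] = int(rest.strip().split()[0])
    return info

# Illustrative sample, not real measurements.
sample = """\
MemTotal:       16384000 kB
Buffers:          204800 kB
Cached:          4096000 kB
SwapCached:        51200 kB
"""

mem = parse_meminfo(sample)
# Total page cache = Cached + SwapCached (the buffer cache is counted
# separately, under Buffers).
page_cache_kb = mem["Cached"] + mem["SwapCached"]
print(page_cache_kb)  # 4147200
```

On a real Linux system the same function can be fed `open("/proc/meminfo").read()`.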


The difference between the buffer cache and the page cache


The buffer cache was the primary cache component in UNIX and in early Linux kernels. To start with, understand that both the buffer cache and the page cache exist to solve the same problem: fast access when block devices and memory interact.

1. The buffer cache faces the underlying block device, so its granularity is the file system block: block devices and the system exchange data in blocks, and blocks map onto the disk's basic physical unit, the sector. A sector is 512 bytes, and file system blocks are typically 1 KB, 2 KB, or 4 KB, so conversion between sectors and blocks is fast.

As the kernel matured, block-granularity caching could no longer meet performance needs. The kernel's memory-management component works at a higher level of abstraction than the file system block: the page, typically 4 KB (up to 2 MB with huge pages), with larger granularity and higher processing performance. So the cache component created the page cache to replace the original buffer cache and interact better with the memory-management component.

The page cache is file-oriented and memory-oriented. Through a chain of data structures (inode, address_space, page) it maps a file onto pages, so any position in a file can be located as a page index plus an offset within the page.

2. The buffer cache operates with the block as its basic unit, while the page cache operates with the page as its basic unit, using the newer bio abstraction, which can carry IO for multiple non-contiguous pages in a single operation (scatter/gather IO).

3. The buffer cache is now used mainly where a single block must be accessed, such as reading and writing the superblock. The page cache is used in all file-oriented scenarios, including network file systems; the cache component abstracts the address space (address_space) as an adapter between the file system and the page cache, hiding the details of the underlying device.

4. The buffer cache can be folded into the page cache: the block buffers belonging to a page are organized as a buffer_head list, the page cache page keeps a private pointer to that buffer_head list, and each buffer_head keeps a pointer back to its page. Only one copy of the data then needs to be stored, in the page cache.

5. The file system inode records the block numbers of all of a file's blocks, so from a byte offset it is quick to find the file system block containing it, and from the block number the disk sector. Similarly, dividing the byte offset by the page size gives the index of the page containing that offset, and the address space (address_space) reaches both the inode and the pages through pointers. So a file offset is easy to locate in every component:

file byte offset → page index → file system block number → disk sector number
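This conversion chain can be sketched with simple integer arithmetic, assuming 4 KB pages, 4 KB file system blocks, and 512-byte sectors (the function name is hypothetical):

```python
PAGE_SIZE = 4096      # typical page size
BLOCK_SIZE = 4096     # file system block size (assumed equal to the page size here)
SECTOR_SIZE = 512     # disk sector size

def locate(byte_offset):
    """Map a file byte offset to its page index, offset within the page,
    file system block number, and the first sector of that block
    (all relative to the start of the file)."""
    page_index = byte_offset // PAGE_SIZE       # which page cache page
    offset_in_page = byte_offset % PAGE_SIZE    # position inside that page
    block_number = byte_offset // BLOCK_SIZE    # which file system block
    first_sector = block_number * (BLOCK_SIZE // SECTOR_SIZE)
    return page_index, offset_in_page, block_number, first_sector

print(locate(10000))  # (2, 1808, 2, 16)
```

The real kernel must additionally map the file-relative block number to an on-disk block number through the inode's block map; this sketch only shows the arithmetic.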


The page cache and the address space (address_space)


From the comparison of the page cache and buffer cache above, the basic characteristics of the page cache are that it is memory-oriented and file-oriented. That is exactly the role of the page cache: it sits between memory and files, and file IO operations actually interact with the page cache rather than directly with the disk.

The Linux kernel describes physical page frames with the page data structure, and maintains a mem_map array representing all physical page frames; each entry of mem_map is a page structure.

The page structure does more than represent a physical page frame; several of its fields tie it into the cache subsystem:

1. flags indicates the page's state: whether it is dirty, being written back, and so on.

2. _count and _mapcount indicate how many references the page has and how many processes map it.

3. private points to the buffer_head list of the buffer cache for this page, establishing the link between the page cache and the block cache.

4. mapping points to the address space (address_space), indicating that this page is a page cache page belonging to that file's address space.

5. index is the page's offset (in pages) within the file, computed from the file's byte offset.



The page cache is essentially a radix tree that organizes a file's contents into physical memory pages. File IO operations interact directly with the page cache, applying caching principles to manage IO against block devices.


A file's inode corresponds to one address_space, and each address_space corresponds to one page cache radix tree. The components are thus related as inode → address_space → radix tree → pages.
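A toy model of this lookup chain, with a plain dict standing in for the radix tree. The class and field names mirror the kernel's, but this is a simplified sketch, not real kernel code:

```python
# Toy model of the lookup chain inode -> address_space -> page_tree.
# The real kernel structures are far richer; a dict stands in for the
# radix tree, keyed by page index just as the kernel's tree is.
class Page:
    def __init__(self, index, data=b""):
        self.index = index    # page offset within the file
        self.data = data
        self.dirty = False

class AddressSpace:
    def __init__(self, inode):
        self.host = inode     # owning inode: the data source
        self.page_tree = {}   # page index -> Page (radix tree stand-in)

    def find_page(self, index):
        return self.page_tree.get(index)  # None means a page cache miss

class Inode:
    def __init__(self):
        self.mapping = AddressSpace(self)

inode = Inode()
inode.mapping.page_tree[0] = Page(0, b"hello")
hit = inode.mapping.find_page(0)   # cache hit: the page is returned
miss = inode.mapping.find_page(7)  # cache miss: None
print(hit.data, miss)  # b'hello' None
```

The read and write paths described below are exactly walks of this chain, ending in a hit or a miss on the tree.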



Now look at the address space (address_space) itself. address_space is a key abstraction in the Linux kernel: it is the bridge between the page cache and the file on the external device, linking the memory system with the file system, and it can be understood as representing the data source.

1. host points to the owner of this address space, the inode, i.e. the data source.

2. page_tree points to the radix tree of page cache pages for this address space. A file's page cache pages can thus be found via inode → address_space → page_tree.




When a file is read, the page to be read is first computed from the byte offset of the requested content; the file's inode leads to its address_space, and the address_space's page cache is searched for that page. On a page cache hit, the file contents are returned directly. On a miss, a page cache page is allocated, the corresponding part of the file is read from disk to fill it, and the read then continues from the newly filled page.

When a file is written, the corresponding page is likewise computed from the offset of the content within the file, the address_space is found through the inode, and the page cache page through the address_space. On a page cache hit, the modification is applied directly to the page cache page and the write is complete. At this point the modification lives only in the page cache and has not yet been written back to the file on disk.

A page cache page that has been modified is marked as a dirty page. Dirty pages must eventually be written back to the file's blocks on disk. There are two ways dirty pages are flushed back to disk:

1. Manually calling the sync() or fsync() system call to write the dirty pages back.

2. The pdflush kernel threads periodically writing dirty pages back to disk.
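From user space, the first mechanism can be exercised with os.fsync(), which blocks until the kernel has written the file's dirty page cache pages to disk. A minimal sketch (the path and function name are illustrative):

```python
import os
import tempfile

def write_durably(path, data):
    """Write data to path, then fsync() so the dirty page cache pages
    are forced out to disk before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)   # data now sits in dirty page cache pages
        os.fsync(fd)         # block until the kernel writes them back
    finally:
        os.close(fd)

path = os.path.join(tempfile.gettempdir(), "pagecache_demo.txt")
write_durably(path, b"dirty page contents")
with open(path, "rb") as f:
    print(f.read())          # b'dirty page contents'
os.remove(path)
```

Without the fsync() call the write would still succeed, but the data could sit in the page cache for a while before pdflush wrote it back, and a crash in that window would lose it.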


A dirty page cannot be swapped out of memory. While a dirty page is being written back, it is marked with a writeback flag and the page is locked; other write requests against it block until the lock is released.


Regarding file IO we often hear two claims: "ordinary file IO copies the data twice, while a memory-mapped file (mmap) copies it once", and "ordinary file IO operates inside the process heap, while a memory-mapped file operates outside the heap". Let's examine both.

For ordinary files being copied twice, we need to pin down exactly which two copies these are. Most books are vague on this, saying only that the first copy is from disk into a memory buffer and the second from the memory buffer into the process heap. That memory buffer is in fact the page cache.

The article "Page Cache, the Affair Between Memory and Files" uses a few diagrams that illustrate very vividly what actually happens at the lower level.


Suppose a process named render reads a file Scene.dat. The actual steps are as follows:

1. The render process issues a request to the kernel to read the Scene.dat file.

2. The kernel finds the corresponding address_space via Scene.dat's inode and looks up the page in the address_space's page cache; if it is not found, a memory page is allocated into the page cache.

3. The corresponding part of Scene.dat is read from disk to fill the page in the page cache: this is the first copy.

4. The content is copied from the page cache page into the heap space of the render process: this is the second copy.



In the end, physical memory holds two copies of the same Scene.dat content: one in the page cache, and one in the physical memory backing the user process's heap space.



Now consider why a memory-mapped file (mmap) involves only one copy: with mmap there is only the single copy from the disk file into the page cache.

mmap creates a virtual memory area (vm_area_struct); the process's task_struct maintains all of the process's virtual memory areas, and establishing the mapping updates the corresponding process page table entries so that they point directly to the physical pages where the page cache resides. This mmap virtual memory area is distinct from the virtual memory area of the process heap, which is why mmap is said to operate outside the heap.
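Python's mmap module exposes this same mechanism: the mapped bytes are backed by the page cache pages themselves, so reads and in-place writes go through no intermediate heap buffer. A small sketch (the file name is illustrative):

```python
import mmap
import os
import tempfile

# Create a small file to map.
path = os.path.join(tempfile.gettempdir(), "mmap_demo.bin")
with open(path, "wb") as f:
    f.write(b"scene data here!")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # length 0 maps the whole file
        print(bytes(mm[:5]))  # b'scene' -- read straight from the mapping
        mm[0:5] = b"SCENE"    # modify the mapped (page cache backed) pages in place
        mm.flush()            # ask the kernel to write the dirty pages back

with open(path, "rb") as f:
    print(f.read())           # b'SCENE data here!'
os.remove(path)
```

Note that the slice assignment dirties the shared pages directly; an ordinary write() would first copy the bytes from a heap buffer into those pages.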


Finally, let's clarify a few concepts:

1. A user process accesses memory only through the page table structure, whereas the kernel can access physical memory directly through its virtual addresses.

2. A user process cannot access the kernel address space (address space here meaning the virtual address space). This is guaranteed: the user process's virtual address space and the kernel's virtual address space do not overlap, and the kernel virtual address space requires privileged access.

3. The page structure represents a physical page frame. The same physical memory address can be accessed by both the kernel and a user process, as long as the user process's page table entry points to that physical address. That is the principle behind mmap.


Resources:

Page Cache, the affair between Memory and Files

"Linux Kernel:what is the major difference between the buffer cache and the page cache?"

"Professional Linux Kernel Architecture"

