Page cache and buffer cache


The page cache is actually a cache for file systems, i.e. a file cache: data at the file level is cached in the page cache. The file's logical blocks must be mapped to actual physical disk blocks, and this mapping is done by the file system. When data in the page cache needs to be written back, it used to be handed over to the buffer cache. After the 2.6 kernel, however, this handling became very simple: there is no longer a real transfer between two separate caches.

In the Linux 2.6 kernel, the page cache and the buffer cache were further unified: buffer pages are simply pages in the page cache. From the point of view of the Linux implementation, the page cache and the buffer cache are now the same thing; there is only an extra layer of abstraction, with buffer_head structures used for some of the access management. It is enough to think in terms of the page cache alone.

Standard I/O:

In Linux, this file access path is implemented through two system calls: read() and write(). When an application calls read() to read a piece of data, the kernel first checks whether the data is already in memory. If it is, the data is read directly from memory and returned to the application; if it is not, the data is read from disk into the page cache and then copied from the page cache to the user address space. If a process reads a file, other processes cannot read or modify the file.

For writes, when a process calls write() to write data to a file, the data is first copied from the user address space into the page cache in the kernel address space, and only later written to disk. The write() system call completes as soon as the data is in the page cache, so the data has not necessarily reached the disk yet. Linux uses the deferred-write mechanism (deferred writes) mentioned earlier: the application does not need to wait until all the data is written back to disk, it only needs the data to reach the page cache, and the operating system periodically flushes the data held in the page cache out to disk.
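
As a rough illustration, the following minimal C sketch walks this buffered path. The file name data.txt and the buffer size are arbitrary examples, and fsync() is called only to force the deferred write-back immediately; this is a sketch under those assumptions, not a recipe.

    /* Minimal sketch of standard (buffered) I/O through the page cache. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        const char *msg = "hello page cache\n";

        int fd = open("data.txt", O_CREAT | O_RDWR, 0644);   /* example file name */
        if (fd < 0) { perror("open"); return 1; }

        /* write() copies the data from the user buffer into the page cache and
         * returns; the kernel writes it back to disk later (deferred write). */
        if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }

        /* fsync() forces this file's dirty pages out to disk right now. */
        if (fsync(fd) < 0) { perror("fsync"); return 1; }

        /* read() is served from the page cache when the data is resident;
         * otherwise the kernel first reads the block from disk into the page cache. */
        if (lseek(fd, 0, SEEK_SET) < 0) { perror("lseek"); return 1; }
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n < 0) { perror("read"); return 1; }
        buf[n] = '\0';
        printf("%s", buf);

        close(fd);
        return 0;
    }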

Direct I/O:

With direct I/O, data is transferred directly between a buffer in the user address space and the disk, without going through the page cache. The cache provided by the operating system usually improves application performance when reading and writing data. However, some special applications, such as database management systems, prefer to use their own caching mechanism: because a database management system understands the data it stores better than the operating system does, it can provide a more effective cache for accessing that data.
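
For comparison, here is a minimal sketch of a direct I/O write. The file name direct.dat and the 4096-byte alignment are assumptions; the actual requirement is that the buffer, the file offset and the transfer size are aligned to the logical block size of the underlying device.

    /* Sketch of direct I/O: data moves between the user buffer and the disk
     * without passing through the page cache. */
    #define _GNU_SOURCE            /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        void *buf;
        size_t align = 4096;       /* assumed device block size */

        /* O_DIRECT requires an aligned user buffer. */
        if (posix_memalign(&buf, align, align) != 0) {
            fprintf(stderr, "posix_memalign failed\n");
            return 1;
        }
        memset(buf, 'A', align);

        int fd = open("direct.dat", O_CREAT | O_WRONLY | O_DIRECT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* The write goes straight to disk, bypassing the page cache. */
        if (write(fd, buf, align) < 0) { perror("write"); return 1; }

        close(fd);
        free(buf);
        return 0;
    }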

In short, the page cache caches file data, and the buffer cache caches disk block data. When a file system is present, operations on files cache their data in the page cache; if you read or write the disk device directly, for example with dd, the data is cached in the buffer cache.
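
The difference can be made concrete with a small, hedged sketch: reading a regular file goes through that file's page cache, while reading the raw block device (the path /dev/sdX below is a placeholder and usually needs root) goes through the cache reported as "buffers".

    /* Illustrative only: the first read populates the file's page cache,
     * the second populates the device's buffer cache. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static void read_some(const char *path)
    {
        char buf[4096];
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror(path); return; }
        if (read(fd, buf, sizeof(buf)) < 0)
            perror("read");
        close(fd);
    }

    int main(void)
    {
        read_some("/etc/hostname");   /* regular file: cached under its inode (page cache) */
        read_some("/dev/sdX");        /* placeholder raw device: cached as "buffers" */
        return 0;
    }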

In addition, at the file system layer each block device is assigned the def_blk_ops set of file operations; these are the operations for the device itself. Under the inode of each device there is a radix tree, and the pages caching the device's data are placed in that tree; their count shows up in the buffer column of the top program. If the device carries a file system, an inode is created for each file, and that inode is assigned file-system operations such as ext3_ops. Under this inode there is also a radix tree, where the file's pages are cached; the number of these cached pages is counted in the cache column of top. From this analysis we can see that in the 2.6 kernel the buffer cache and the page cache are handled in the same way, but the concepts still differ: the page cache caches file data, while the buffer cache caches disk block data.
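
To watch those two counters without running top, a small sketch like the following prints the "Buffers:" and "Cached:" lines of /proc/meminfo, which report the buffer cache and the page cache respectively.

    /* Print the Buffers: and Cached: lines from /proc/meminfo. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) { perror("fopen"); return 1; }

        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "Buffers:", 8) == 0 || strncmp(line, "Cached:", 7) == 0)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }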

"If a process reads a file, no other process can read or modify the file." This statement has doubts.

Original article: http://blog.chinaunix.net/uid-1829236-id-3152172.html
