In-depth understanding of the Linux page cache and buffer cache


Let's start with the output of the free command on Linux. Below is the result of a free run, which has four rows. For convenience I have added row and column numbers, so the output can be treated as a two-dimensional array fo ("free output"). For example:
    • fo[2][1] = 24677460
    • fo[3][2] = 10321516
                         1          2          3          4        5         6
  1                      total      used       free       shared   buffers   cached
  2  Mem:                24677460   23276064   1401396    0        870540    12084008
  3  -/+ buffers/cache:  10321516   14355944
  4  Swap:               25151484   224188     24927296

The output of free has four rows in total. The fourth row is the swap information: the total swap space, the amount used, and the amount of free swap. This row is straightforward, so I will not dwell on it.

The second and third rows of the free output are the confusing ones. Both describe memory usage: the first column is the total, the second column is the amount used, and the third column is the amount free.

The first of these two lines (row 2, the Mem row) shows memory from the operating system's point of view. That is, from the OS's perspective, the machine has:

    • 24677460 KB of physical memory in total (free reports KB by default), i.e. fo[2][1];
    • 23276064 KB (i.e. fo[2][2]) of that physical memory is used;
    • the remaining 1401396 KB (i.e. fo[2][3]) is free;

Here we get the first equation:

    • fo[2][1] = fo[2][2] + fo[2][3]

fo[2][4] is memory shared by several processes. This field is now deprecated and its value is always 0 (it may be non-zero on some systems, depending on how the free command is implemented).

fo[2][5] is the memory used by the OS buffers, and fo[2][6] is the memory used by the OS cache. In many places the words buffer and cache are used interchangeably, but lower-level software does distinguish them. A commonly quoted English definition is:

    • A buffer is something that has yet to be "written" to disk.
    • A cache is something that has been "read" from the disk and stored for later use.

That is, a buffer holds data waiting to be written out to disk (a block device), while the cache holds data that has been read from disk. Both exist to improve IO performance and are managed by the OS.
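
A quick way to see the cache side of this in practice is to watch the Cached field of /proc/meminfo grow while a file is read. The sketch below is a rough, Linux-only illustration in Python; /tmp/testfile is a placeholder path (create it beforehand, e.g. with dd), and the measured delta is only approximate because other activity on the system also moves these counters.

    def meminfo(field):
        """Return the value of one /proc/meminfo field, in KB."""
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])

    before = meminfo("Cached")
    with open("/tmp/testfile", "rb") as f:   # placeholder file, create it first
        while f.read(1 << 20):               # read the whole file, 1 MB at a time
            pass
    after = meminfo("Cached")
    print("Cached grew by roughly %d KB" % (after - before))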

Linux, like other mature operating systems (such as Windows), always caches some data in order to improve read performance, which is why fo[2][6] (cached memory) is relatively large while fo[2][3] (free memory) is relatively small.

The second line (row 3, the -/+ buffers/cache row) shows memory usage from an application's perspective.

    • fo[3][2], i.e. fo[2][2] minus buffers and cached, is how much memory applications would say the system is using;
    • fo[3][3], i.e. fo[2][3] plus buffers and cached, is how much memory applications would say the system has available;

Because the memory taken up by the system's buffers and cache can be reclaimed quickly, fo[3][3] is usually much larger than fo[2][3].

This gives two more equations (verified numerically in the sketch after the list):

    • fo[3][2] = fo[2][2] - fo[2][5] - fo[2][6]
    • fo[3][3] = fo[2][3] + fo[2][5] + fo[2][6]
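
These relationships are easy to check numerically. The Python sketch below is just an arithmetic check: it plugs in the example numbers from the free output above (all values in KB) and asserts the three equations; it is not a general-purpose parser of free output.

    # Example numbers from the free output above, in KB.
    mem = {"total": 24677460, "used": 23276064, "free": 1401396,
           "shared": 0, "buffers": 870540, "cached": 12084008}

    # fo[2][1] = fo[2][2] + fo[2][3]
    assert mem["total"] == mem["used"] + mem["free"]

    # fo[3][2] = fo[2][2] - fo[2][5] - fo[2][6]   (the -/+ "used" value)
    used_from_app_view = mem["used"] - mem["buffers"] - mem["cached"]
    assert used_from_app_view == 10321516

    # fo[3][3] = fo[2][3] + fo[2][5] + fo[2][6]   (the -/+ "free" value)
    free_from_app_view = mem["free"] + mem["buffers"] + mem["cached"]
    assert free_from_app_view == 14355944

    print(used_from_app_view, free_from_app_view)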

The difference between buffer and cache
A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use. Both live in RAM: in short, a buffer holds data about to be written to disk, and a cache holds data that has been read from disk. Both exist to improve IO performance and are managed by the OS; the memory is not allocated by applications themselves, but is free memory the OS borrows as needed. Because this memory is only a cache whose purpose is to reduce IO and improve performance, whenever an application needs memory the OS can simply flush the buffers to disk and drop the cache, freeing that memory for the application to use.
A buffer is a staging area for transferring data between devices that are not synchronized or that have different priorities. Buffers are designed around disk writes: scattered write operations are collected and written out together, which reduces disk fragmentation and repeated seeking and thereby improves system performance.
The cache is mainly used for disk read requests: if more than one process accesses a file, the file is kept in the cache for the next access, which improves system performance. Cached data is retained after being read; if a later read hits the cache (finds the required data there), the disk is not touched, and only on a miss does the read go to disk. The cached data is organized by how often it is read, with the most frequently read content kept where it is found most easily, while content that is no longer read drifts toward the back until it is evicted.

So the -/+ buffers/cache line means: used memory is the actual used memory minus buffers/cache, and free memory is the actual free memory plus buffers/cache, hence the "-/+". When looking at free memory to decide whether an application has a memory leak, use the third column of the second (-/+) line; the first (Mem) line is of little use for that, although it does show the current size of the OS buffers and cache.

Page cache and buffer cache have always been two confusing concepts. There is plenty of argument and guesswork online about the difference between the two caches, and the discussions never seem to reach a unified, correct conclusion. During my own work the concepts of page cache and buffer cache bothered me for a while as well, but when analyzed carefully they are actually quite clear. If you understand the nature of these two caches, analyzing IO issues becomes much easier.
The page cache belongs to the filesystem: it is a cache of files, so data is cached in the page cache at the file level. The logical view of a file must be mapped to actual physical disk blocks, and that mapping is done by the filesystem. When data in the page cache needs to be flushed, it is handed to the buffer cache; since the 2.6 kernel, however, this step is trivial and involves no real extra caching.

The buffer cache is a cache of disk blocks. Data that bypasses the filesystem is cached directly in the buffer cache; for example, the filesystem's own metadata is cached in the buffer cache.
In short, the page cache caches file data and the buffer cache caches disk block data. When going through a filesystem, data is cached in the page cache; when the disk is read or written directly with a tool such as dd, data is cached in the buffer cache.
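
One hedged way to observe this split is to read the raw block device the way dd does and compare the Buffers and Cached fields of /proc/meminfo before and after. The Python sketch below assumes a placeholder device /dev/sda1, needs root, and its deltas are only approximate on a busy system; reading a regular file instead (as in the earlier sketch) would move Cached rather than Buffers.

    import subprocess

    def meminfo(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])   # KB

    b0, c0 = meminfo("Buffers"), meminfo("Cached")
    # Raw block-device read, as dd would do it; /dev/sda1 is a placeholder device.
    subprocess.run(["dd", "if=/dev/sda1", "of=/dev/null", "bs=1M", "count=64"],
                   check=True)
    b1, c1 = meminfo("Buffers"), meminfo("Cached")
    print("Buffers +%d KB, Cached +%d KB after the raw device read" % (b1 - b0, c1 - c0))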

To add a bit of detail: at the filesystem level, each block device is assigned the def_blk_ops file operations, which define how the device itself is accessed. Each device's inode has a radix tree underneath it, and the pages holding cached data are hung off that radix tree; the number of such pages is what shows up in the buffers column of the top program. If the device contains a filesystem, each file gets an inode whose operations come from the filesystem (for example ext3_ops), and that inode also has a radix tree underneath it caching the file's pages; the number of those pages is counted in the cached column of top. As this analysis shows, the buffer cache and page cache are handled uniformly in the 2.6 kernel; the difference is purely conceptual: the page cache caches file data, while the buffer cache caches disk block data, and that is all.
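
The "cached pages hang off each file's inode" point can also be illustrated from user space: asking the kernel to drop one file's cached pages with posix_fadvise(POSIX_FADV_DONTNEED) shrinks Cached by roughly that file's size without touching other files. This is a hedged sketch, again using the placeholder /tmp/testfile; read the file first so it is actually cached, and note that only clean pages are dropped.

    import os

    def meminfo(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])   # KB

    fd = os.open("/tmp/testfile", os.O_RDONLY)   # placeholder file, already cached
    before = meminfo("Cached")
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)   # drop this file's cached pages
    after = meminfo("Cached")
    os.close(fd)
    print("Cached shrank by roughly %d KB" % (before - after))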

The difference between buffer and cache
A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.
For a more detailed explanation, see: difference between Buffer and Cache.
As for shared memory, it is mainly used to share data between different processes in a UNIX environment and is one form of inter-process communication. Ordinary applications rarely request shared memory, and the author has not verified its effect on the equations above. If you are interested, see: What is Shared Memory?

The difference between cache and buffer:
Cache: a cache is a small but fast memory that sits between the CPU and main memory. Because the CPU is much faster than main memory, fetching data directly from memory forces the CPU to wait; the cache holds data the CPU has just used or reuses frequently, so when the CPU needs that data again it can be fetched directly from the cache, reducing the CPU's waiting time and improving system efficiency. Caches are further divided into a level-1 cache (L1 cache) and a level-2 cache (L2 cache). The L1 cache is integrated inside the CPU; the L2 cache used to be soldered onto the motherboard but is now also integrated into the CPU, with common sizes of 256 KB or 512 KB.
Buffer: an area used to transfer data between devices that are not synchronized or that have different priorities. With buffers, processes spend less time waiting on one another, so that while data is being read from a slow device, the operations of a faster device are not interrupted.

Buffers and cached in free (both are counted against memory):
Buffers: memory used as the buffer cache, i.e. the read/write buffers of block devices.
Cached: memory used as the page cache, i.e. the filesystem cache.
If the cached value is large, it indicates that many files are cached. If frequently accessed files can be kept in the cache, then disk read IO will necessarily be small.
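
That performance claim can be demonstrated directly: when the same file is read twice, the second pass is served from the page cache and is typically much faster than the first. A minimal Python sketch, again with the placeholder /tmp/testfile; timings depend on disk speed and on whether the file was already cached.

    import time

    def read_all(path):
        """Read the whole file and return the elapsed time in seconds."""
        start = time.time()
        with open(path, "rb") as f:
            while f.read(1 << 20):            # 1 MB at a time
                pass
        return time.time() - start

    cold = read_all("/tmp/testfile")   # may have to hit the disk
    warm = read_all("/tmp/testfile")   # served from the page cache
    print("first read %.3fs, second read %.3fs" % (cold, warm))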
