The difference between cached and buffers in the Linux free command

Source: Internet
Author: User

One, the command
# free -m
             total       used       free     shared    buffers     cached
Mem:          7869       7651        218          1        191       5081
-/+ buffers/cache:       2378       5490
Swap:          478        139        339
Two, the calculation

Here the subscripts 1 and 2 refer to the first row (Mem) and the second row (-/+ buffers/cache) of the output above, respectively.

total1: the total amount of physical memory
used1: the total amount that has been allocated (including buffers and cached), although some of it may not actually be in active use
free1: unallocated memory
shared1: shared memory; generally not used by the system and not discussed here
buffers1: the amount of buffers the system has allocated but that is not currently in active use
cached1: the amount of cache that has been allocated but is not currently in active use
used2: the total amount of buffers and cache actually in use, which is also the total amount of memory actually in use
free2: the sum of the unused buffers, the unused cache and the unallocated memory, i.e. the memory actually available to the system right now

From these definitions the following equations can be derived:

total1 = used1 + free1
total1 = used2 + free2
used1 = buffers1 + cached1 + used2
free2 = buffers1 + cached1 + free1

Specific calculations

7869 = 7651 + 218
7869 = 2378 + 5490        # 7868, roughly equal; the remaining 1 MB is the shared memory
7651 = 191 + 5081 + 2378  # 7650, roughly equal, again because of shared
5490 = 191 + 5081 + 218

Why calculate it this way? Because buffers and cache are really just ordinary memory being put to a special use: this memory can be reclaimed at any time, and if necessary the buffers and cache can even be released explicitly.
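As a quick sanity check, the same identities can be recomputed from /proc/meminfo. The sketch below uses the standard MemTotal, MemFree, Buffers and Cached fields (values are in kB, converted to MB here); note that newer versions of free also account for SReclaimable and show an "available" column, so the results may differ slightly from the output above.

# Recompute both rows of the old free output from /proc/meminfo (values in MB)
awk '/^MemTotal:/ {t=$2}
     /^MemFree:/  {f=$2}
     /^Buffers:/  {b=$2}
     /^Cached:/   {c=$2}
     END {
         printf "total1=%d used1=%d free1=%d buffers1=%d cached1=%d\n", t/1024, (t-f)/1024, f/1024, b/1024, c/1024
         printf "used2=%d free2=%d\n", (t-f-b-c)/1024, (f+b+c)/1024
     }' /proc/meminfo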

Three, the difference

1. Page cache and buffer cache

The page cache is a cache for the file system: it caches data at the file level. The logical view of a file must be mapped to the actual physical disk, and this mapping is done by the file system. When data in the page cache needs to be flushed, it is handed to the buffer cache; since the 2.6 kernel, however, this step has been simplified and no real extra caching takes place.

The buffer cache is a cache of disk blocks. When no file system is involved, data is cached directly in the buffer cache; for example, the file system's metadata is cached in the buffer cache.
In short, the page cache is used to cache file data and the buffer cache is used to cache disk block data. When a file system is in use, file data is cached in the page cache; when the disk is read or written directly, for example with a tool such as dd, the data is cached in the buffer cache.
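A minimal sketch of observing this difference, assuming a test file and a block device of your own (the file path and /dev/sdX below are placeholders; reading a raw device requires root even when it is read-only):

# Reading a file through the file system grows the "cached" column of free
dd if=/path/to/some/file of=/dev/null bs=1M

# Reading the raw block device directly grows the "buffers" column
dd if=/dev/sdX of=/dev/null bs=1M count=100

free -m    # compare the buffers and cached columns before and after each read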

To add a little detail: at the file-system level each block device is assigned the def_blk_ops file-operation methods, which is how the raw device is operated on. The device's inode has a radix tree underneath it, and the pages of cached data are hung off that radix tree; the number of such pages is shown in the buffers column of top. If a file system is created on the device, each file gets an inode that is assigned file-system operations such as ext3_ops, and that inode also has a radix tree under it which caches the file's pages; the number of those pages is counted in the cached column of top. As this analysis shows, in the 2.6 kernel the buffer cache and the page cache are handled in the same way; the difference is only conceptual: the page cache is a cache for file data, while the buffer cache is a cache for disk block data, and that is all.

2. The difference between cache and buffer

A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from disk and stored for later use. As for shared memory, it is mainly used to share data between different processes in a UNIX environment and is a form of inter-process communication; ordinary applications generally do not request shared memory.
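As an aside, the shared-memory segments currently allocated on a system can be listed with ipcs; this is just an illustration, and on most systems very few entries will show up:

ipcs -m            # list System V shared memory segments and their sizes
df -h /dev/shm     # POSIX shared memory is backed by the tmpfs mounted here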

Cache: a small but high-speed memory that sits between the CPU and main memory. Because the CPU is much faster than main memory, the CPU would otherwise have to wait when fetching data directly from memory. The cache holds the data the CPU has just used or uses repeatedly, so when the CPU needs that data again it can be fetched directly from the cache, which reduces the CPU's waiting time and improves system efficiency. Cache is divided into level-one cache (L1 cache) and level-two cache (L2 cache). L1 cache is integrated inside the CPU; L2 cache used to be soldered onto the motherboard and is now also integrated into the CPU, with common L2 sizes of 256 KB or 512 KB.

Caches are designed around the principle of locality in programs: the instructions a CPU executes and the data it accesses tend to be concentrated in a particular region, so putting that region into the cache means the CPU does not have to access main memory, which improves access speed. Of course, if the cache does not contain what the CPU needs, main memory still has to be accessed.

To view the CPU's L1, L2 and L3 caches:

# ll /sys/devices/system/cpu/cpu0/cache/
total 0
drwxr-xr-x 2 root root 0 Jan 22:49 index0    # data cache of the L1 cache
drwxr-xr-x 2 root root 0 Jan 22:49 index1    # instruction cache of the L1 cache
drwxr-xr-x 2 root root 0 Jan 22:49 index2    # L2 cache, shared
drwxr-xr-x 2 root root 0 Jan 22:49 index3    # L3 cache, shared
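Each indexN directory contains small text files describing that cache. A quick sketch of reading them, using the standard sysfs attributes level, type and size (the exact values of course vary by CPU):

# Level, type and size of every cache on cpu0
for d in /sys/devices/system/cpu/cpu0/cache/index*; do
    echo "$d: level $(cat $d/level), $(cat $d/type), $(cat $d/size)"
done

# Alternatively, lscpu prints a per-level summary
lscpu | grep -i cache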

Buffer: an area used to transfer data between devices whose speeds are not matched or that have different priorities. With buffers, processes spend less time waiting on each other, so that while data is being read from a slow device the operation of a fast device is not interrupted.

3. Buffer and cache in free (both are part of memory)

Buffer: memory used as the buffer cache, i.e. the read/write buffer for block devices

Cache: memory used as the page cache, i.e. the file system cache

A large cache value indicates that a large number of files are cached. If frequently accessed files can be served from the cache, the disk read I/O will necessarily be very small.
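A simple way to see this is to read the same file twice and compare the timings: the second read is served from the page cache. This is only a sketch; the file path is a placeholder, and dropping the caches first (as described in the next section, run as root) makes the difference obvious:

sync && echo 3 > /proc/sys/vm/drop_caches    # start from a cold cache (as root)
time cat /path/to/largefile > /dev/null      # first read: comes from disk
time cat /path/to/largefile > /dev/null      # second read: served from the page cache, much faster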

How to release the cached memory

To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches
# Note: it is best to run sync before releasing, to prevent data loss; in general there is no need to free memory manually.
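Putting it together, a typical (and rarely necessary) manual release looks like the sketch below, run as root; comparing the free output before and after shows buffers and cached shrinking while free grows:

free -m                              # note the current buffers and cached values
sync                                 # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries and inodes
free -m                              # buffers and cached are now much smaller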
4. Summary
Cache sits between the CPU and memory, and buffer sits between memory and disk; both exist to solve the problem of mismatched speeds.
    • Cache (cached) saves data that has been read. On a re-read, if there is a hit (the required data is found) the disk does not need to be read again; on a miss, the disk is read. The cached data is organized by how frequently it is read: the most frequently read content is kept in the most easily found locations, and content that is no longer read is pushed toward the back until it is dropped.

    • Buffer (buffers) is designed around disk writes: scattered write operations are gathered together, which reduces disk fragmentation and repeated seeking and thereby improves system performance. Linux has a daemon that periodically flushes the buffered content (i.e. writes it to disk); the buffer can also be flushed manually with the sync command. For example: if I cp a 3 MB MP3 onto an ext2 USB stick, the stick's light does not blink at first; after a while (or after typing sync manually) the light starts blinking. The buffer is also flushed when the device is unmounted, which is why unmounting a device sometimes takes a few seconds.

    • Modifying the number to the right of vm.swappiness in /etc/sysctl.conf adjusts the swap-usage policy at the next boot. The value ranges from 0 to 100; the larger the number, the more readily swap is used. The default is 60, and you can change it and experiment (see the sketch after this list).

    • Both buffers and cache are data held in RAM: the buffer holds data that is about to be written to disk, while the cache holds data that has been read from disk.
    • Buffers are allocated by various processes and are used in areas such as input queues. A simple example: when a process requires several fields to be read in, before all the fields have been read in completely the process keeps the previously read fields in a buffer.

    • The cache is often used for disk I/O requests: if multiple processes access the same file, the file is put into the cache so that the next access is faster, which improves system performance.

    • Buffer cache, also written bcache, where the name means "buffer high-speed cache", or buffer cache for short. In addition, based on the way it works, the buffer cache is also known as the block cache.
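As referenced in the swappiness bullet above, a minimal sketch of checking and changing the value (the value 10 used here is only an example):

cat /proc/sys/vm/swappiness        # current value (default is 60)
sysctl -w vm.swappiness=10         # change it immediately, effective until the next reboot
# To make it persistent, add the following line to /etc/sysctl.conf:
#   vm.swappiness = 10
# and reload the settings with: sysctl -p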

