Difference between buffer and Cache

Source: Internet
Author: User

The cache stores data that has been read. When the same data is read again and the cache hits (the required data is found), the hard disk does not need to be read; if it misses, the hard disk is read. The cached data is organized by read frequency: the most frequently read content is placed where it can be located most quickly, while content that is no longer read is pushed back until it is eventually evicted.
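A quick way to see a cache hit is to read the same large file twice and compare the elapsed time; the file path here is only an example:

time cat /var/log/messages > /dev/null      # first read: data comes from the hard disk
time cat /var/log/messages > /dev/null      # second read: data is served from the cache, noticeably faster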
Buffers are designed around disk writes: scattered write operations are collected and carried out together, which reduces disk fragmentation and repeated seeks on the hard disk and thus improves system performance. In Linux, a daemon periodically flushes the buffer contents (that is, writes them to disk); you can also flush the buffers manually with the sync command. For example, with an ext2 USB flash drive, if I cp a 3 MB MP3 file to it, the drive's activity light does not blink at first; after a while (or after manually entering sync) the light blinks as the data is actually written. Because the buffers are flushed when a device is unmounted, unmounting can take several seconds.
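You can watch this write-back happen by checking the Dirty and Writeback counters in /proc/meminfo before and after sync; the source file and mount point below are only examples:

cp song.mp3 /mnt/usbdisk/                   # the data lands in the buffer, not yet on the device
grep -E 'Dirty|Writeback' /proc/meminfo     # Dirty shows data still waiting in RAM
sync                                        # force the buffered data out to the device
grep -E 'Dirty|Writeback' /proc/meminfo     # Dirty drops back toward 0 kB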
To adjust the swap usage policy for the next boot, modify the value of vm.swappiness in /etc/sysctl.conf. The value ranges from 0 to 100; the larger the number, the more inclined the kernel is to use swap. The default value is 60. You can experiment with it. Both buffers and cache are data held in RAM.
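For example, the runtime and persistent settings can be handled like this (run as root; the value 10 is only an illustration):

cat /proc/sys/vm/swappiness                     # show the current value (default 60)
sysctl -w vm.swappiness=10                      # change it at runtime
echo 'vm.swappiness = 10' >> /etc/sysctl.conf   # take effect at the next boot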


In short, the buffer holds data that is about to be written to the disk, and the cache holds data that has been read from the disk.


A buffer is allocated by various processes and is used in input queues. A simple example: a process needs to read several fields; before all of the fields have been read, the process stores the fields it has already read in a buffer.


A cache is often used for disk I/O requests. If multiple processes need to access the same file, the file is kept in a cache so that the next access is faster, which improves system performance.

The buffer cache, also known as bcache, literally means "buffer high-speed cache". Based on how it works, the buffer cache is also called the block cache.


Reference:
http://baike.baidu.com/view/1113956.htm

# cat /proc/meminfo
MemTotal:       255596 kB
MemFree:        105252 kB
Buffers:         62652 kB
Cached:          51580 kB
SwapCached:          0 kB
Active:          95560 kB
Inactive:        38324 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       255596 kB
LowFree:        105252 kB
SwapTotal:      524280 kB
SwapFree:       524280 kB
Dirty:              36 kB
Writeback:           0 kB
AnonPages:       19668 kB
Mapped:           7372 kB
Slab:            11580 kB
PageTables:       1384 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:    652076 kB
Committed_AS:    62360 kB
VmallocTotal:   770040 kB
VmallocUsed:      3060 kB
VmallocChunk:   766588 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     4096 kB


To understand Linux memory management, you need a clear understanding of what the Linux memory parameters mean and how they behave. The following explains the difference between buffer and cache memory in Linux.

Free
Compared with top, the free command provides a more concise view of the system memory usage:

# free -m -t
             total       used       free     shared    buffers     cached
Mem:          3886       3860         26          0         76       3016
-/+ buffers/cache:        768       3118
Swap:        10236          0      10236
Total:       14123       3860      10262

Mem: physical memory statistics
-/+ Buffers/cached: indicates the cache statistics of physical memory.
Swap: indicates the usage of swap partitions on the hard disk.

The total physical memory of the system is 3886 MB, but the memory actually available to the system is not the 26 MB marked as free in the first line; that value only represents unallocated memory.

We use names such as total1, used1, free1, used2, and free2 to represent the values of the preceding statistics. 1 and 2 represent the data of the first and second rows respectively.

Total1: total physical memory.
Used1: the total amount of allocated memory (including buffers and cache), although part of that cache may not actually be in use.
Free1: unallocated memory.
Shared1: shared memory, which is not used by the general system and is not discussed here.
Buffers1: the amount of memory allocated to buffers by the system but not yet used.
Cached1: the amount of memory allocated to cache by the system but not yet used. The difference between buffer and cache is described later.
Used2: the total amount of buffers and cache actually in use, which is also the total amount of memory actually in use.
Free2: the sum of unused buffers, unused cache, and unallocated memory, which is the memory actually available to the system.

The following equations can be derived:
Total1 = used1 + free1
Total1 = used2 + free2
Used1 = buffers1 + cached1 + used2
Free2 = buffers1 + cached1 + free1
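To check these equations on a live system, you can let awk add up the columns of the free output; this is a minimal sketch that assumes the older procps layout shown above, where buffers and cached are separate columns:

free -m | awk '/^Mem:/ { printf "free2 = free + buffers + cached = %d MB\n", $4 + $6 + $7 }
/buffers\/cache/ { printf "used2 = %d MB, free2 = %d MB (as reported)\n", $3, $4 }'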

Difference between buffer and Cache
A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.

For more details, see difference between buffer and cache.

Shared memory is mainly used to share data between different processes in UNIX environments; it is a method of inter-process communication. Ordinary applications generally do not request shared memory, and I have not verified how shared memory affects the equations above. If you are interested, see: What is shared memory?

Differences between cache and buffer:
Cache: a high-speed cache is a small but fast memory located between the CPU and main memory. Because the CPU is much faster than main memory, it has to wait a certain amount of time when accessing data directly from memory. The cache holds a portion of data that the CPU has just used or uses repeatedly; when the CPU needs that data again, it can be fetched directly from the cache, which reduces CPU wait time and improves system efficiency. Cache is divided into Level 1 cache (L1 cache) and Level 2 cache (L2 cache). The L1 cache is integrated into the CPU; the L2 cache was soldered onto the motherboard in earlier designs but is now also integrated into the CPU. Common L2 cache sizes are 256 KB or 512 KB.

Buffer: a buffer is used to transfer data between devices with different speeds or different priorities. With a buffer, processes spend less time waiting for each other, so that while data is being read from a slow device, the operations of the fast device are not interrupted.

Buffer and cache in free (both occupy memory):

Buffer: memory used as the buffer cache, i.e. the read/write buffer for block devices.
Cache: memory used as the page cache, i.e. the cache for the file system.

If the cached value is large, many files are being cached. If frequently accessed files can be served from the cache, the disk read I/O (the bi column) will be very small.
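One way to observe this is to drop the page cache and watch the bi (blocks read in) column of vmstat while re-reading a file. This is a rough sketch, run as root; the log file path is only an example, and /proc/sys/vm/drop_caches is available on 2.6.16 and later kernels:

sync                                        # flush dirty data first
echo 3 > /proc/sys/vm/drop_caches           # drop the page cache, dentries and inodes
vmstat 1                                    # in a second terminal, watch the bi column
cat /var/log/messages > /dev/null           # cold read: bi rises as data comes from disk
cat /var/log/messages > /dev/null           # warm read: bi stays near zero, served from the cache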


The Linux we use does not manage memory the way Windows does: memory that top reports as used may not actually be in use, and the second line of the free output shows the memory actually used by the system. So if you find that your memory seems to be eaten up by php-cgi processes, don't panic.

Page cache and buffer cache have always been two easily confused concepts. Many people argue online about the difference between the two, and the discussions never seem to reach a unified, correct conclusion. In my own work the concepts of page cache and buffer cache puzzled me as well, but after careful analysis the two concepts turn out to be quite clear. Once we understand the nature of these two caches, we can analyze I/O problems with much more confidence.

The page cache is actually a cache for the file system: it is the file cache, and data at the file level is cached in the page cache. The logical layer of a file has to be mapped to the actual physical disk, and this mapping is done by the file system. When data in the page cache needs to be flushed, it is handed over to the buffer cache; after the 2.6 kernel, however, this handover became very simple, with no cache operation in the real sense.

The buffer cache is the cache for disk blocks: data operated on the disk directly, without going through a file system, is cached in the buffer cache. For example, the metadata of the file system is cached in the buffer cache.
In short, the page cache is used to cache file data, and the buffer cache is used to cache disk block data. When a file system is present, operations on files cache the data in the page cache; if you read or write the disk directly with dd or similar tools, the data is cached in the buffer cache.
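This difference can be seen in /proc/meminfo. The following is a rough sketch; the file path and /dev/sda are only examples, and reading a raw block device requires root:

dd if=/path/to/some/large/file of=/dev/null bs=1M    # read through the file system
grep -E '^(Buffers|Cached)' /proc/meminfo            # Cached grows (page cache)
dd if=/dev/sda of=/dev/null bs=1M count=100          # read the block device directly
grep -E '^(Buffers|Cached)' /proc/meminfo            # Buffers grows (buffer cache)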
One more point: at the file system layer, each device is assigned the def_blk_ops file operations, which are the device's operation methods. A radix tree exists under the inode of each device, and the pages of cached data are placed in this radix tree; the number of such pages is shown in the buffer column of the top program. If the device carries a file system, an inode is generated for each file, and that inode is assigned operation methods such as ext3_ops, which are the file system's methods. There is also a radix tree under this inode, where the file's pages are cached; the number of these cached pages is counted in the cache column of top. From the analysis above, we can see that in the 2.6 kernel the buffer cache and the page cache are handled consistently, but there is still a conceptual difference: the page cache caches files, while the buffer cache caches disk block data.

Isn't it all just the page cache then? Buffer pages are in fact pages in the page cache; the buffer cache only adds an abstraction layer on top and uses buffer_head structures for some access management.
Right. From the perspective of the Linux implementation, the page cache and the buffer cache are the same, but there is still a difference between the two in terms of functional abstraction and concrete usage, which can be seen in the statistics reported by the top tool: pay attention to the separate buffer and cache figures.
Add some materials:
A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.
Enter free in the terminal, and it displays:
             total       used       free     shared    buffers     cached
Mem:        255268     238332      16936          0      85540     126384
-/+ buffers/cache:      26408     228860
The total physical memory of the system is 255268 kB (256 MB), but the memory actually available is not the 16936 kB shown in the first line; that value only indicates unallocated memory.

