Cache & Buffers in the Linux top Command
Today, while using top to see how much of the system's resources particular processes were using, I realized I wasn't entirely clear about the two concepts of cache and buffer, so I did some studying. In short:
* **Cache** is a high-speed cache used between the CPU and memory;
* **Buffer** is an I/O cache used between memory and the hard disk.
[Original article link](http://blog.chinaunix.net/uid-24020646-id-2939696.html)
In Linux, when files are accessed frequently, physical memory fills up quickly. When the program that accessed them exits, the memory does not appear to be released; it stays in use as cache. Many people ask about this without finding a good answer, so let me explain it here.
Let's talk about the free command first.
[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        163         86          0         10         94
-/+ buffers/cache:          58        191
Swap:          511          0        511

Where:
total: total memory
used: memory in use
free: idle memory
shared: memory shared by multiple processes
buffers: the buffer cache
cached: the page cache
-/+ buffers/cache used = used - buffers - cached
-/+ buffers/cache free = free + buffers + cached
**Available memory = free + buffers + cached**
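These figures come straight from /proc/meminfo, so the "available memory" rule of thumb above can be checked by hand. A minimal sketch (MemFree, Buffers and Cached are the standard /proc/meminfo field names; the awk one-liner itself is just an illustration):

```bash
# Sum MemFree + Buffers + Cached from /proc/meminfo (values are reported in kB)
awk '/^(MemFree|Buffers|Cached):/ { sum += $2 }
     END { printf "approx. available memory: %d MB\n", sum / 1024 }' /proc/meminfo
```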
With this in mind, we can see that used is currently 163 MB, free is 86 MB, and buffers and cached are 10 MB and 94 MB respectively.
Now let's see how memory usage changes when I copy some files.
[root@server ~]# cp -r /etc ~/test/
[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          4          0          8        174
-/+ buffers/cache:          62        187
Swap:          511          0        511
After the command completes, used is 244 MB, free is 4 MB, buffers is 8 MB, and cached is 174 MB. The kernel has cached the copied data in order to improve file read efficiency.
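One way to see the effect of this caching (the file path below is only an example) is to time the same read twice; the second read is normally served from the page cache and finishes much faster:

```bash
time cat /var/log/messages > /dev/null   # first read: data mostly comes from disk (example file)
time cat /var/log/messages > /dev/null   # second read: served from the page cache, noticeably faster
```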
To improve disk access efficiency, Linux does some careful caching. Besides the dentry cache (directory entry cache, see http://blog.sina.com.cn/s/blog_6fe0d70d0101e36f.html), which the VFS uses to speed up the translation of file path names into inodes, it uses two main caching schemes: the buffer cache and the page cache. The former is used for reads and writes of disk blocks; the latter for reads and writes of file data through inodes. These caches effectively shorten the time spent in I/O system calls such as read, write and getdents.
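On most systems the dentry and inode caches can be seen in the slab allocator statistics. A small sketch (reading /proc/slabinfo usually requires root, and slab names such as ext4_inode_cache vary with the filesystem in use):

```bash
# Show slab caches related to dentries and inodes (names differ between kernels/filesystems)
sudo grep -E '^(dentry|inode_cache|ext4_inode_cache)' /proc/slabinfo
```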
Some people say that Linux will release this memory automatically after a while. Let's wait a bit and run free again to see whether anything has been released.
[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          5          0          8        174
-/+ buffers/cache:          61        188
Swap:          511          0        511
It seems nothing has changed. Can we release the memory manually? The answer is yes!
/proc is a virtual file system; reading and writing files under it is a way of communicating with kernel objects. In other words, you can modify files under /proc to adjust the current behavior of the kernel. In particular, we can write to /proc/sys/vm/drop_caches to release the cache. The procedure is as follows:
[root@server test]# cat /proc/sys/vm/drop_caches
0
First, check the value of /proc/sys/vm/drop_caches. The default is 0.
[root@server test]# sync
Run the sync command manually. (Description: the sync command runs the sync subroutine. If you must stop the system, run sync first to ensure the integrity of the file system. The sync command writes all unwritten system buffers to disk, including modified i-nodes, delayed block I/O, and read/write memory-mapped files.)
[root@server test]# echo 3 > /proc/sys/vm/drop_caches
[root@server test]# cat /proc/sys/vm/drop_caches
3
Then set /proc/sys/vm/drop_caches to 3.
[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249         66        182          0          0         11
-/+ buffers/cache:          55        194
Swap:          511          0        511
Running free again shows that used is now 66 MB, free is 182 MB, buffers is 0 MB, and cached is 11 MB. The buffers and cache have been effectively released.
The usage of /proc/sys/vm/drop_caches is described below:
/proc/sys/vm/drop_caches (since Linux 2.6.16)
Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache, use echo 1 > /proc/sys/vm/drop_caches; to free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches; to free pagecache, dentries and inodes, use echo 3 > /proc/sys/vm/drop_caches.
Because this is a non-destructive operation and dirty objects are not freeable, the user should run sync(8) first.
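Putting the steps above together, a small helper script might look like the sketch below (hypothetical, must be run as root; 3 frees pagecache, dentries and inodes as described above):

```bash
#!/bin/sh
# Sketch: flush dirty data, drop clean caches, and show memory before/after.
free -m                             # memory usage before
sync                                # write dirty buffers to disk first (only clean objects are freeable)
echo 3 > /proc/sys/vm/drop_caches   # drop pagecache, dentries and inodes
free -m                             # memory usage after
```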
Difference between buffer and cache
A buffer is something that has yet to be "written" to disk.
A cache is something that has been "read" from the disk and stored for later use.
For more details, see Difference Between Buffer and Cache.
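A rough way to watch the "yet to be written" side of this is the Dirty line in /proc/meminfo, which grows while data is being written and shrinks again after sync (the dd target file below is only an example):

```bash
grep -E '^(Dirty|Writeback):' /proc/meminfo        # data waiting to be written out
dd if=/dev/zero of=/tmp/testfile bs=1M count=100   # generate some buffered writes (example file)
grep -E '^(Dirty|Writeback):' /proc/meminfo        # Dirty should have grown
sync                                               # flush it to disk
grep -E '^(Dirty|Writeback):' /proc/meminfo        # Dirty drops back down
```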
As for shared memory, it is mainly used to share data between different processes in a UNIX environment and is one method of inter-process communication; ordinary applications usually do not request shared memory. I have not verified how shared memory affects the equation above. If you are interested, see: What is Shared Memory?
Differences between cache and buffer:
Cache: a high-speed cache is a small but fast memory that sits between the CPU and main memory. Because the CPU is much faster than main memory, it has to wait whenever it fetches data directly from memory. The cache holds data the CPU has just used or uses repeatedly; when the CPU needs that data again it can be fetched straight from the cache, which reduces CPU wait time and improves system efficiency. Caches are divided into level 1 cache (L1 Cache) and level 2 cache (L2 Cache). The L1 cache is integrated into the CPU; the L2 cache was soldered onto the motherboard in early designs and is nowadays also integrated into the CPU. Common L2 cache sizes are 256 KB or 512 KB.
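On Linux the sizes of these hardware caches can be read directly; lscpu reports them, and the sysfs path below is standard, though the index numbers vary by CPU model:

```bash
lscpu | grep -i cache                                # L1d/L1i/L2/L3 cache sizes
cat /sys/devices/system/cpu/cpu0/cache/index*/size   # per-level sizes from sysfs
```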
Buffer: a buffer is an area used to transfer data between devices with different storage speeds or different priorities. With a buffer in between, processes spend less time waiting on one another, so a fast device does not have to stall while data is being read from a slow one.
Buffer and cache in free (both occupy memory):
buffers: memory used as the buffer cache, i.e. the read/write buffer for block devices.
cached: memory used as the page cache, i.e. the file system cache.
If the cached value is large, many file pages are being cached. If frequently accessed files can be served from the cache, the disk read I/O (the bi value) will be very small.
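The bi value mentioned here is the "blocks in" column reported by vmstat. A quick, generic way to watch it (nothing specific to this article's setup):

```bash
# Report block-in/block-out every 2 seconds, 5 times; a low "bi" while re-reading a file
# suggests the reads are being served from the page cache rather than the disk.
vmstat 2 5
```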
============================================================
Cache was originally used for the CPU cache, which sits between the CPU and memory. The CPU is fast and memory cannot keep up, and some values are used frequently, so they are placed in the cache for reuse; the physical level-1/level-2 caches are very fast.
Buffer is mainly used between the disk and memory, mainly to protect the hard disk or to reduce the number of network transfers (with, say, a DataSet as the in-memory representation of the data). Of course it can also increase speed (data is not written to the hard disk immediately, and data read from the hard disk is not shown the moment it arrives) through reuse, but the original main purpose was to protect the disk. The cache in ASP.NET includes OutputCache and the data cache, both of which are reused to improve speed. OutputCache mainly stores rendered pages; since the same HTML is generally served many times, it is recommended not to use VaryByParam, so that multiple versions are not stored.
The data cache holds things such as a DataSet or DataTable. The @Page Buffer="true" directive enables buffering, so output is only flushed (read or written) when the buffer is full; buffered file output in C works the same way, and its main purpose is also to protect the hard disk, though it likewise speeds up the next access. On the client side, with Buffer=true the page appears all at once (either nothing is shown or everything is), while with false it appears piece by piece as it arrives over the network. Buffered access is the default for file I/O in C, just as Buffer=true is the default in ASP.NET: output is sent once the buffer is full, which reduces the number of network transmissions. <%@ OutputCache Duration="60" VaryByParam="none" %> caches the HTML generated by ASP.NET within the specified time so that it does not have to be regenerated; the same can be done for .ascx user controls (component caching, HtmlCache), and for a DataSet via the data cache. Both cache and buffer are intermediate storage areas; in translation, cache is best rendered as "high-speed cache" (because its main point is speeding up the next access) and buffer as "buffer zone". Both act as a buffer, but their purposes differ. The point is to understand them, not to split hairs over the words.
The cache stores data that has been read. On a hit (the required data is found in the cache) the hard disk is not read; on a miss the data is read from the disk. Cached data is organized by read frequency: the most frequently read content is kept in the most accessible position, while content that is not read again is gradually pushed back until it is evicted.
Buffers are designed around disk reads and writes: scattered write operations are collected and performed together, reducing disk fragmentation and repeated seeks, which improves system performance. In Linux a daemon periodically flushes the buffer contents (i.e. writes them to disk); you can also flush the buffers manually with the sync command. For example:
I have a USB flash drive formatted as ext2. When I cp a 3 MB MP3 to it, the drive's light does not blink at first; only after a while (or after I manually run sync) does the light start flashing. The buffers are flushed when the device is unmounted, which is why unmounting can take several seconds.
Modifying the value of vm.swappiness in /etc/sysctl.conf adjusts the swap usage policy at the next boot. The range is 0 to 100; the larger the number, the more willing the kernel is to use swap. The default is 60. You can experiment with it.
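A quick way to inspect and change this setting; the value 10 below is only an example, and the persistent change goes into /etc/sysctl.conf as described above:

```bash
cat /proc/sys/vm/swappiness                                # current value, 60 by default
sudo sysctl vm.swappiness=10                               # change it for the running kernel (example value)
echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf   # make the change persistent across reboots
```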
Both are data held in RAM. In short, a buffer holds data that is about to be written to disk, and a cache holds data that has been read from disk.
Buffers are allocated by various processes and used, for example, in input queues. A simple example: a process reads several fields; until all fields have been read, it keeps the fields read so far in a buffer.
Caches are often used for disk I/O requests: if multiple processes need to access the same file, the file is cached so that the next access is faster, which improves system performance.
============================================================
Supplement: Release buffers & cached
sync
# To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
# To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
# To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches
Note: It is best to sync before release to prevent data loss.
Because of the way the Linux kernel manages memory, you generally do not need to release the cache manually; the cached content speeds up file reads and writes.