Cache & Buffers in the Linux top command


While using top today to look at a specific process's resource usage, the meanings of cache and buffers were not entirely clear to me, so let's take a closer look:

**cache: a cache used for buffering between the CPU and memory.**
**buffer: an I/O cache used for buffering between memory and the hard disk.**

[Original article (Chinese)](http://blog.chinaunix.net/uid-24020646-id-2939696.html)

Under Linux, when you access files frequently, physical memory is quickly used up, and after the program finishes, the memory is not released in the usual sense but stays in use as cache. Many people ask about this, but good explanations are rare, so let's walk through it.
First, the free command.

```
[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        163         86          0         10         94
-/+ buffers/cache:         58        191
Swap:          511          0        511
```

Where:
- total: total amount of memory
- used: memory already in use
- free: unused memory
- shared: memory shared by multiple processes
- buffers: the buffer cache
- cached: the page cache
- -/+ buffers/cache, used column: used - buffers - cached
- -/+ buffers/cache, free column: free + buffers + cached

**Available memory = free + buffers + cached**
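As a quick sanity check on that formula, here is a small shell sketch that computes available memory (free + buffers + cached) from the `Mem:` line of `free -m` output. The sample numbers are the ones from the article, hard-coded so the snippet runs anywhere; on a real box you would pipe `free -m` in instead.

```shell
# Sample `free -m` output (numbers from the article, hard-coded so
# the snippet runs anywhere; on a real box, pipe `free -m` instead).
free_output='             total       used       free     shared    buffers     cached
Mem:           249        163         86          0         10         94'

# Mem: line fields: $2=total $3=used $4=free $5=shared $6=buffers $7=cached
available=$(echo "$free_output" | awk '/^Mem:/ {print $4 + $6 + $7}')
echo "available memory: ${available} MB"   # free + buffers + cached
```

The awk pattern simply anchors on the `Mem:` row, so the header line is ignored and only the three relevant columns are summed.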

With this foundation, we can see that I am currently using 163 MB, with 86 MB free, and buffers and cached at 10 MB and 94 MB respectively.
Now let's see what happens to memory if I copy some files.

```
[root@server ~]# cp -r /etc ~/test/
[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          4          0          8        174
-/+ buffers/cache:         62        187
Swap:          511          0        511
```

After the command finished, used was 244 MB, free 4 MB, buffers 8 MB, and cached 174 MB: the cache has eaten almost all the memory. Relax, though; this is how Linux improves the efficiency of file reads.

To improve disk access efficiency, Linux has some elaborate designs. Besides caching dentries (directory entries, used by the VFS to speed up the translation of file path names to inodes; see http://blog.sina.com.cn/s/blog_6fe0d70d0101e36f.html), it uses two main cache types: the buffer cache and the page cache. The former serves reads and writes of disk blocks; the latter serves reads and writes of file inodes. These caches effectively shorten the time spent in I/O system calls such as read, write, and getdents.
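To see these counters at their source, you can read them straight from /proc/meminfo. The sketch below (with a non-Linux fallback so it still runs elsewhere) prints the same Buffers and Cached values that free summarizes, in kB:

```shell
# Print the kernel's Buffers and Cached counters (in kB) from
# /proc/meminfo; these are the numbers free(1) summarizes.
if [ -r /proc/meminfo ]; then
    buffers_kb=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
    cached_kb=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
else
    # Fallback for non-Linux systems, so the sketch still runs.
    buffers_kb=0
    cached_kb=0
fi
echo "Buffers: ${buffers_kb} kB, Cached: ${cached_kb} kB"
```

The `^Cached:` anchor matters: it keeps the pattern from also matching the separate `SwapCached:` line.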

Some say that after a while Linux will release this memory automatically, so let's run free again and see whether anything has been released:

```
[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          5          0          8        174
-/+ buffers/cache:         61        188
Swap:          511          0        511
```

Seemingly no change at all. So can these caches be released manually? The answer is YES!
/proc is a virtual file system that can be used as a channel for communicating with the kernel: reading and writing its files lets you adjust the current kernel's behavior. In particular, we can write to /proc/sys/vm/drop_caches to free the caches. The procedure is as follows:

```
[root@server test]# cat /proc/sys/vm/drop_caches
0
```

First, check the value of /proc/sys/vm/drop_caches; it defaults to 0.

```
[root@server test]# sync
```

Run the sync command manually. (Description: sync forces completion of pending disk writes. If you must stop the system, run sync first to ensure the integrity of the file system. It writes all unwritten system buffers to disk, including modified inodes, deferred block I/O, and read-write memory-mapped files.)

```
[root@server test]# echo 3 > /proc/sys/vm/drop_caches
[root@server test]# cat /proc/sys/vm/drop_caches
3
```

Set the value of /proc/sys/vm/drop_caches to 3.

```
[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249         66        182          0          0         11
-/+ buffers/cache:         55        194
Swap:          511          0        511
```

Run free again: used is now 66 MB, free 182 MB, buffers 0 MB, and cached 11 MB. The buffers and cache have been released effectively.

The usage of /proc/sys/vm/drop_caches is explained below.

/proc/sys/vm/drop_caches (since Linux 2.6.16): Writing to this file causes the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free. To free pagecache, use echo 1 > /proc/sys/vm/drop_caches; to free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches; to free pagecache, dentries and inodes, use echo 3 > /proc/sys/vm/drop_caches. Because this is a non-destructive operation and dirty objects are not freeable, the user should run sync(8) first.
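Putting that advice into practice, here is a hedged helper that always runs sync first and only writes to drop_caches when it actually can (root, with a writable /proc); otherwise it just prints what it would do. The function name `drop_caches` is my own, not a standard tool.

```shell
# Sketch: free clean caches safely. Runs sync first, then writes to
# /proc/sys/vm/drop_caches only if we are root and the file is writable.
drop_caches() {
    level=${1:-3}   # 1=pagecache, 2=dentries+inodes, 3=both
    sync            # flush dirty pages first (the drop is non-destructive)
    if [ "$(id -u)" -eq 0 ] && [ -w /proc/sys/vm/drop_caches ]; then
        echo "$level" > /proc/sys/vm/drop_caches
    else
        echo "would run: echo $level > /proc/sys/vm/drop_caches"
    fi
}

drop_caches 3
```

The writability check also covers containers, where /proc/sys is often mounted read-only even for root.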

  The difference between buffer and cache:
   A buffer is something that has yet to be "written" to disk.
   A cache is something that has been "read" from the disk and stored for later use.

For a more detailed explanation, see: Difference between Buffer and Cache.
As for shared memory, which in UNIX environments is mainly used to share data between different processes (a form of interprocess communication): ordinary applications rarely request it, and I have not verified how shared memory affects the equation above. If you are interested, see: What is Shared Memory?

  The difference between cache and buffer:
Cache: a small but fast memory that sits between the CPU and main memory. Because the CPU is much faster than main memory, accessing data directly from memory forces the CPU to wait. The cache holds data the CPU has just used or frequently reuses; when the CPU needs that data again it can be fetched straight from the cache, reducing CPU wait time and improving system efficiency. Caches are divided into level-one cache (L1 cache), integrated inside the CPU, and level-two cache (L2 cache), historically soldered onto the motherboard and now also integrated into the CPU, with common L2 sizes of 256 KB or 512 KB.
Buffer: an area used to transfer data between devices that are not synchronized or that have different priorities. Buffers reduce waiting between processes, so that while data is being read from a slow device, the operation of a fast device is not interrupted.

  buffer and cache in free (both consume memory):
buffer: memory used as the buffer cache, i.e. the read/write buffer for block devices.
cache: memory used as the page cache, i.e. the file-system cache.
If cached is large, many files are being cached. If frequently accessed files are served from the cache, the disk's read I/O (the bi column in vmstat) will be very small.

==============================================================================================
Cache was originally used for the CPU cache. The main issue is the gap between CPU and memory: the CPU is fast, memory cannot keep up, and some values are used many times, so they are placed in the cache. The primary purpose is reuse, and the L1/L2 physical caches are fast.


Buffer is mainly used between disk and memory, chiefly to protect the hard disk or to reduce the number of network transfers. It can also improve speed (data is not written to the disk immediately, nor displayed straight from the disk as it is read); the original primary purpose was to protect the disk.
By analogy with web development: ASP.NET has an output cache and a data cache, both aimed at reuse and speed. The output cache (OutputCache) stores rendered pages, generally reusing the same HTML many times; it is advisable not to use VaryByParam unnecessarily, so that multiple versions are not kept. The data cache holds objects such as DataSet and DataTable. With @Page Buffer="true", buffered output is displayed (or read/written) only once the buffer fills; buffered file output in C serves the same purposes of protecting the disk and speeding up the next access. On the client side this appears as: true means the page is shown all at once or not at all, false means it is shown piece by piece; network output behaves the same way. For file access in C the default is buffered, just as ASP's Response.Write() outputs when the buffer is full to reduce the number of network transfers. <%@ OutputCache Duration="..." VaryByParam="none" %> caches the HTML generated by ASP so it need not be regenerated within the specified period; user controls (.ascx) have a component cache (HTML cache), and the same holds for datasets (data cache). Both cache and buffer are buffering areas; in translation, "cache" might be better rendered as a speed-up area (since it mainly accelerates the next access) and "buffer" as a holding area. Both play a buffering role, just with slightly different purposes; the point is to understand them, not to read the names too literally.

A cache saves data that has been read: on a re-read, if there is a hit (the needed data is found), the disk is not touched; on a miss, the disk is read. The data is organized by read frequency: the most frequently read content sits in the most easily found position, while content no longer read moves toward the back until it is evicted.
A buffer is designed around disk writes: scattered write operations are batched together, reducing disk fragmentation and repeated seeking, and thereby improving system performance. Linux has a daemon that periodically flushes the buffered content (that is, writes it to disk); you can also flush the buffers manually with the sync command. Let's take an example:

I have an ext2 USB stick here. I cp a 3 MB MP3 onto it, but the stick's light does not flash; after a while (or after I type sync manually), the light starts flashing. Buffers are also flushed when a device is unmounted, which is why unmounting sometimes takes a few seconds.
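This delayed-write behaviour can be observed via the Dirty counter in /proc/meminfo: a buffered write may leave dirty pages in memory until sync (or the writeback daemon) flushes them. A minimal sketch, assuming a Linux /proc and GNU dd (with a fallback otherwise):

```shell
# Write a small file (buffered), look at the kernel's Dirty counter,
# then force writeback with sync.
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=4 2>/dev/null  # buffered write
if [ -r /proc/meminfo ]; then
    dirty_kb=$(awk '/^Dirty:/ {print $2}' /proc/meminfo)
else
    dirty_kb=0   # non-Linux fallback so the sketch still runs
fi
sync                       # flush dirty pages to disk
echo "Dirty before sync: ${dirty_kb} kB"
rm -f "$tmpfile"
```

Note that Dirty may already be near zero if the writeback daemon happened to run first, so the value is informative rather than guaranteed to be large.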
Changing the number to the right of vm.swappiness in /etc/sysctl.conf adjusts the swap usage policy at the next boot. The range is 0-100; the larger the number, the more readily swap is used. The default is 60; try changing it and see.
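For reference, the current policy can also be read back at runtime from /proc/sys/vm/swappiness. A minimal sketch (the fallback value 60 is just the usual kernel default):

```shell
# Read the current swappiness (0-100; higher = swap more readily).
if [ -r /proc/sys/vm/swappiness ]; then
    swappiness=$(cat /proc/sys/vm/swappiness)
else
    swappiness=60   # usual kernel default, used as a fallback here
fi
echo "vm.swappiness = $swappiness"

# Persistent change (requires root): add "vm.swappiness = 10" to
# /etc/sysctl.conf, then apply with `sysctl -p`.
```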

Both are data in RAM. In short, a buffer holds data about to be written to disk, and a cache holds data that has been read from disk.
Buffers are allocated by various processes and used in areas such as input queues. A simple example: a process needs to read in multiple fields; before all the fields have been read in completely, the process keeps the previously read fields in a buffer.
Caches are often used for disk I/O requests: if multiple processes access the same file, the file is placed in the cache for the next access, which improves system performance.
Cache release:

```
# To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
# To free dentries and inodes:
echo 2 > /proc/sys/vm/drop_caches
# To free pagecache, dentries and inodes:
echo 3 > /proc/sys/vm/drop_caches
```

It is best to run sync before releasing, to prevent data loss. Because of the Linux kernel's caching mechanism, there is normally no need to deliberately release cache that is in use; the cached content speeds up file reads and writes.
