The difference between memory buffer and cache in Linux

Source: Internet
Author: User
2011-10-04 14:40:18

Reposted from: http://blog.chinaunix.net/uid-24020646-id-2939696.html

Careful readers will notice that if you access files frequently under Linux, physical memory is quickly used up, and even after the programs finish, the memory is not released normally but kept as cache. Many people ask about this, but good explanations are hard to find, so let me talk about it.

Let's start with the free command.

[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        163         86          0         10         94
-/+ buffers/cache:         58        191
Swap:          511          0        511

where:

total — total physical memory
used — amount of memory already in use
free — amount of free memory
shared — memory shared by multiple processes
buffers — size of the buffer cache (read/write buffers for block devices)
cached — size of the page cache

-/+ buffers/cache used = used - buffers - cached

-/+ buffers/cache free = free + buffers + cached

Available memory = free + buffers + cached

With this foundation we can see that used is 163 MB, free is 86 MB, and buffers and cached are 10 MB and 94 MB respectively.
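The -/+ buffers/cache arithmetic can be checked with a small shell snippet. The numbers below are the sample values from the free output above; note that free -m rounds to whole megabytes, which is why the tool prints 58/191 while the exact MB arithmetic gives 59/190:

```shell
# Recompute the -/+ buffers/cache view from the Mem: line of `free -m`.
# Field order: label total used free shared buffers cached.
free_line="Mem: 249 163 86 0 10 94"
set -- $free_line
used=$3; free_mem=$4; buffers=$6; cached=$7
real_used=$((used - buffers - cached))     # memory truly held by programs
real_free=$((free_mem + buffers + cached)) # memory actually available
echo "really used: ${real_used} MB, really available: ${real_free} MB"
```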

Now let's see what happens to memory when I copy some files.

[root@server ~]# cp -r /etc ~/test/
[root@server ~]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          4          0          8        174
-/+ buffers/cache:         62        187
Swap:          511          0        511

After the command finishes, used is 244 MB, free is 4 MB, buffers is 8 MB, and cached is 174 MB: the cache has eaten the memory. Don't be nervous; this is done to make file reads more efficient.

To improve disk access efficiency, Linux makes a number of careful design decisions. In addition to the dentry cache (used by the VFS to speed up the translation of file path names to inodes), it maintains two main caches: the buffer cache and the page cache. The former buffers reads and writes of disk blocks; the latter caches reads and writes of file data. These caches effectively shorten the time spent in I/O system calls such as read, write, and getdents.
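A quick way to see the page cache at work is to watch the Cached line of /proc/meminfo while creating and reading a file. This is a rough sketch, Linux only; the temp file and the 20 MB size are arbitrary choices for illustration:

```shell
# Observe the page cache growing as a file is written and read (Linux only).
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
tmpfile=$(mktemp)
dd if=/dev/zero of="$tmpfile" bs=1M count=20 2>/dev/null  # 20 MB scratch file
cat "$tmpfile" > /dev/null                                # its pages now sit in the page cache
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Cached before: ${before} kB, after: ${after} kB"
rm -f "$tmpfile"
```

A second `cat` of the same file is served from the page cache and never touches the disk.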

Some people say that after a while Linux automatically releases the memory it used. Let's run free again and see whether anything has been released:

[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249        244          5          0          8        174
-/+ buffers/cache:         61        188
Swap:          511          0        511

There is essentially no change. So can we release the memory manually? The answer is yes!

/proc is a virtual file system through which we can communicate with the kernel: reading and writing its files lets us inspect and adjust the current kernel behavior. In particular, we can write to /proc/sys/vm/drop_caches to free the caches. The operation is as follows:

[root@server test]# cat /proc/sys/vm/drop_caches
0

First, note that the default value of /proc/sys/vm/drop_caches is 0.

[root@server test]# sync

Run the sync command manually first. (From the documentation: sync writes all unwritten system buffers to disk, including modified inodes, deferred block I/O, and read-write mapped files. If the system must be halted, run sync first to ensure the integrity of the file system.)

[root@server test]# echo 3 > /proc/sys/vm/drop_caches
[root@server test]# cat /proc/sys/vm/drop_caches
3

This sets the /proc/sys/vm/drop_caches value to 3.

[root@server test]# free -m
             total       used       free     shared    buffers     cached
Mem:           249         66        182          0          0         11
-/+ buffers/cache:         55        194
Swap:          511          0        511

Running free again shows that used is now 66 MB, free is 182 MB, buffers is 0 MB, and cached is 11 MB. The buffers and cache have been effectively released.

The usage of /proc/sys/vm/drop_caches is described below:

/proc/sys/vm/drop_caches (since Linux 2.6.16)
    Writing to this file causes the kernel to drop clean caches,
    dentries and inodes from memory, causing that memory to become
    free.

    To free pagecache, use echo 1 > /proc/sys/vm/drop_caches; to
    free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches;
    to free pagecache, dentries and inodes, use echo 3 >
    /proc/sys/vm/drop_caches.

    Because this is a non-destructive operation and dirty objects
    are not freeable, the user should run sync(8) first.

=========================================================================

The difference between buffer and cache

A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.

More detailed explanation reference: Difference Between Buffer and Cache

As for shared memory, it is primarily used to share data among different processes in a UNIX environment and is one method of inter-process communication. Ordinary applications do not request shared memory, and the author has not verified its effect on the equation above. If you are interested, see: What is Shared Memory?

The difference between cache and buffer:

Cache: a small but high-speed memory located between the CPU and main memory. Because the CPU is much faster than main memory, accessing data directly from memory forces the CPU to wait. The cache stores data the CPU has just used or uses repeatedly, so when the CPU needs that data again it can be fetched directly from the cache, reducing CPU wait time and improving system efficiency. Caches are divided into level-1 (L1) and level-2 (L2) caches. The L1 cache is integrated into the CPU; the L2 cache was originally soldered onto the motherboard but is now also integrated into the CPU. Common L2 cache capacities are 256 KB or 512 KB.

Buffer: an area used to transfer data between devices whose speeds are not matched or whose priorities differ. Buffering reduces the time processes spend waiting on each other, so that while data is being read from a slow device, the operation of the fast device is not interrupted.

Buffer and cache in free (both occupy memory):

buffers: memory used as buffer cache — read/write buffers for block devices.

cached: memory used as page cache — the file system cache. If the cached value is very large, a large number of files are cached. If frequently accessed files can be cached, the disk's read I/O (the bi column in vmstat) will be very small.
==============================================================================================

In short: cache is caching, used for buffering between the CPU and memory; buffer is I/O caching, used for buffering between memory and the hard disk.

Cache was originally used for the CPU cache. The main reason is that the CPU is fast while memory cannot keep up, and some values are used many times, so they are kept in the cache; the main purpose is reuse, and the L1/L2 physical caches are fast.
Buffer is mainly used between disk and memory, originally mainly to protect the hard disk or to reduce the number of network transfers (like a DataSet holding in-memory data). Of course it can also improve speed (by not writing to the hard disk immediately, or not displaying data the instant it is read from the disk) and allow reuse; the original primary purpose was to protect the disk.

In ASP.NET there are OutputCache and data caching, whose main purpose is reuse and speed. OutputCache mainly stores pages after rendering and is generally used when the same HTML is served many times; it is suggested not to use VaryByParam, so that multiple versions are not kept. Data caching holds objects such as DataSet and DataTable.
@Page Buffer="True" uses a buffer: output is held until the buffer is full and then displayed or written out (C file output works the same way; the main purpose is to protect the hard disk), and it can also improve the speed of the next visit. On the client browser side, True means the page is displayed all at once or not at all, with nothing in between, while False means it is displayed piece by piece; the same is true for network output.
The default in C file access is buffered, the same as in ASP.NET: like Response.Write(), output is sent when the buffer is full, reducing the number of network transfers.
<%@ OutputCache Duration="..." VaryByParam="None" %> caches the HTML generated by ASP.NET so that it does not have to be regenerated for the specified period. The same applies to user controls (.ascx, component/fragment caching) and to DataSet data caching.

Both cache and buffer amount to buffering. In translation, cache is better rendered as "high-speed cache" (because its main purpose is to speed up the next access) and buffer as "buffer zone". Both play a buffering role, but their purposes differ slightly; the main thing is to understand them, not to fuss over the wording.

The difference between cache and buffer (in Oracle terms):
1. Buffer refers to buffers.
2. Cache refers to caches: the library cache, the data dictionary cache, and the database buffer cache. The database buffer cache is used to cache data read from the hard disk, reducing disk I/O.
3. The shared pool contains the shared SQL area and the PL/SQL area; the database buffer cache has independent subcaches.
4. A pool, such as the shared pool, is used to store recently executed statements and the like.
5. Cache:
A cache is a smaller, higher-speed component that is used to speed up the access to commonly used data stored in a lower-speed, higher-capacity component.
Database buffer cache:
The database buffer cache is the portion of the SGA that holds copies of data blocks read from data files. All user processes concurrently connected to the instance share access to the database buffer cache.
The buffer cache is read and written in units of blocks.

The cache saves data that has been read; on a re-read, if there is a hit (the required data is found), the disk is not touched, and on a miss, the disk is read. The data is organized by read frequency: the most frequently read content is placed where it is easiest to find, and content that is no longer read is pushed back until it is removed.
Buffering is designed around the way disks read and write: scattered write operations are batched together, reducing disk fragmentation and repeated seeks, thereby improving system performance. Linux has a daemon that periodically flushes the buffered content (that is, writes it to disk); you can also flush the buffers manually with the sync command. An example:

I have an ext2 USB drive here. I cp a 3 MB MP3 onto it, but the drive's light does not blink. After a while (or after manually typing sync), the light starts blinking. Buffers are also flushed when the device is unmounted, which is why unmounting a device sometimes takes a few seconds.
You can adjust the swap policy for the next boot by modifying vm.swappiness in /etc/sysctl.conf. The value ranges from 0 to 100; the larger the number, the more eagerly swap is used. The default is 60; you can try changing it.
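The current value can be read without rebooting; to make a change stick across boots it goes into /etc/sysctl.conf. A minimal sketch (the value 10 below is only an example, not a recommendation):

```shell
# Read the current swap policy: 0..100, higher = swap more eagerly (default 60).
cat /proc/sys/vm/swappiness
# To change it persistently (needs root), append this line to /etc/sysctl.conf:
#   vm.swappiness = 10
# and apply it immediately with:
#   sysctl -w vm.swappiness=10
```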
-----------------------------------------
Both are data in RAM. Put simply, the buffer holds data about to be written to disk, while the cache holds data that has been read from disk.
Buffers are allocated by various processes and used for areas such as input queues. A simple example: a process needs to read in several fields; before all the fields have been read in completely, the process keeps the fields read so far in a buffer.
The cache is often used for disk I/O requests: if multiple processes access the same file, the file is cached to make the next access faster, which improves system performance.
===========================================

Cache release:

To free pagecache:
# sync
# echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches

Note: it is best to run sync before releasing, to prevent loss of data.

Because of the Linux kernel's memory management, you generally do not need to deliberately release the caches already in use: cached content increases file read and write speed.
