Linux Buffer/cache Similarities and differences


Buffers and Cached
1. Similarities and differences
In Linux, when an application needs to read data from a file, the operating system allocates some memory, reads the data from disk into that memory, and then hands the data to the application. When data needs to be written to a file, the operating system first allocates memory to receive the user data, then writes it from memory to disk. However, if a large amount of data has to be read from or written to disk, read/write performance drops sharply, because disk I/O is a slow and resource-intensive operation. For this reason, Linux introduces the buffers and cached mechanism.

Buffers and cached are in-memory stores that hold files, and file-attribute information, that the system has already opened. When the operating system needs to read a file, it first looks in the buffers and cached memory areas; if the data is found there, it is returned to the application directly, and only if it is not found is it read from disk. This is the operating system's caching mechanism, and it greatly improves performance. The contents of buffers and cached, however, are different.

Buffers is used to buffer block-device I/O: it records file system metadata and tracks in-flight pages, while cached is used to buffer the contents of files. Put more plainly: buffers mainly stores things like directory contents, file attributes, and permissions, whereas cached holds the files and programs we have actually opened.
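Both figures can be read directly from /proc/meminfo, which is where free gets its numbers; a minimal sketch:

```shell
#!/bin/bash
# Print the current Buffers and Cached sizes; /proc/meminfo reports both in KB.
# These are the same numbers shown in the "buffers" and "cached" columns of free.
buffers=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
cached=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Buffers: ${buffers} KB"
echo "Cached:  ${cached} KB"
```

Watching these two values while doing metadata-heavy work (find) versus file reads (cat) is exactly the experiment the scripts below perform.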

To verify this conclusion, open a very large file with vi and watch how cached changes, then open the same file with vi again and compare how long the two opens take: isn't the second open noticeably faster than the first? Below is a small script that prints the time taken, and the cached/buffers changes, for a first and second read of a large file (Catalina.logaa, about 2 GB):

#!/bin/bash
# Print the time taken and the cached/buffers growth for two consecutive
# reads of a large file. Run as root (writing drop_caches requires it).
sync; sync
echo 3 > /proc/sys/vm/drop_caches
echo "---------- caches released, memory usage (KB): ----------"
free
# On CentOS 7's procps free, column 6 of the Mem: line is buffers, column 7 is cached.
cached1=$(free | grep Mem: | awk '{print $7}')
buffers1=$(free | grep Mem: | awk '{print $6}')
date1=$(date +%s)
cat Catalina.logaa > /dev/null
date2=$(date +%s)
echo "---------- memory usage after first read of the large file (KB): ----------"
free
cached2=$(free | grep Mem: | awk '{print $7}')
buffers2=$(free | grep Mem: | awk '{print $6}')
interval_1=$(expr ${date2} - ${date1})
cached_increment1=$(expr ${cached2} - ${cached1})
buffers_increment1=$(expr ${buffers2} - ${buffers1})
date3=$(date +%s)
cat Catalina.logaa > /dev/null
date4=$(date +%s)
echo "---------- memory usage after second read of the large file (KB): ----------"
free
cached3=$(free | grep Mem: | awk '{print $7}')
buffers3=$(free | grep Mem: | awk '{print $6}')
interval_2=$(expr ${date4} - ${date3})
cached_increment2=$(expr ${cached3} - ${cached2})
buffers_increment2=$(expr ${buffers3} - ${buffers2})
echo "---------- summary: ----------"
echo "first read:  cached increment: ${cached_increment1} KB"
echo "first read:  buffers increment: ${buffers_increment1} KB"
echo -e "first read:  time: ${interval_1} s\n"
echo "second read: cached increment: ${cached_increment2} KB"
echo "second read: buffers increment: ${buffers_increment2} KB"
echo "second read: time: ${interval_2} s"

The execution result is as follows (free runs a moment before the variables are captured, so the printed figures may differ slightly from the computed increments):


Next, execute the command find / -name "*.conf", see whether the buffers value changes, then repeat the find command and compare how quickly the results come back the second time. bc is needed here to do the floating-point arithmetic and may have to be installed (my system is CentOS 7.0, kernel 4.3.3; I installed bc-1.06.95-13.el7.x86_64):
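The sub-second timing used here relies on date +%s.%N, which prints seconds.nanoseconds since the epoch. As a minimal standalone sketch, any command can be timed this way (awk is used for the floating-point subtraction in this sketch; the script below does the same subtraction with bc):

```shell
#!/bin/bash
# Time an arbitrary command with sub-second resolution.
# date +%s.%N prints seconds.nanoseconds since the epoch.
t1=$(date +%s.%N)
sleep 0.2                      # stand-in for the command being timed
t2=$(date +%s.%N)
# Floating-point subtraction; bc works the same: echo "scale=3; $t2 - $t1" | bc
elapsed=$(awk -v a="$t1" -v b="$t2" 'BEGIN {printf "%.3f", b - a}')
echo "elapsed: ${elapsed} s"
```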

#!/bin/bash
# Print the time taken and the cached/buffers growth for two consecutive
# "find / -name *.conf" runs. Run as root; bc is needed for the
# floating-point time difference.
sync; sync
echo 3 > /proc/sys/vm/drop_caches
echo "---------- caches released, memory usage (KB): ----------"
free
cached1=$(free | grep Mem: | awk '{print $7}')
buffers1=$(free | grep Mem: | awk '{print $6}')
date1=$(date +%s.%N)
find / -name "*.conf" > /dev/null 2>&1
date2=$(date +%s.%N)
echo "---------- memory usage after first query (KB): ----------"
free
cached2=$(free | grep Mem: | awk '{print $7}')
buffers2=$(free | grep Mem: | awk '{print $6}')
interval_1=$(echo "scale=3; ${date2} - ${date1}" | bc)
cached_increment1=$(expr ${cached2} - ${cached1})
buffers_increment1=$(expr ${buffers2} - ${buffers1})
date3=$(date +%s.%N)
find / -name "*.conf" > /dev/null 2>&1
date4=$(date +%s.%N)
echo "---------- memory usage after second query (KB): ----------"
free
cached3=$(free | grep Mem: | awk '{print $7}')
buffers3=$(free | grep Mem: | awk '{print $6}')
interval_2=$(echo "scale=3; ${date4} - ${date3}" | bc)
cached_increment2=$(expr ${cached3} - ${cached2})
buffers_increment2=$(expr ${buffers3} - ${buffers2})
echo "---------- summary: ----------"
echo "first query:  cached increment: ${cached_increment1} KB"
echo "first query:  buffers increment: ${buffers_increment1} KB"
echo -e "first query:  time: ${interval_1} s\n"
echo "second query: cached increment: ${cached_increment2} KB"
echo "second query: buffers increment: ${buffers_increment2} KB"
echo "second query: time: ${interval_2} s"

The result is as follows (the last value should be 0.470702440; bc drops the leading 0):

2. Memory Release
/proc is a virtual file system in Linux that serves as a communication channel with the kernel: reading and writing its files lets you inspect and adjust the kernel's current behavior. We can therefore release memory by writing to /proc/sys/vm/drop_caches. The official documentation for drop_caches says:
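As a small illustration of this interface, every file under /proc/sys corresponds to a kernel tunable and can be read like an ordinary file (vm.swappiness is used here only as a harmless example):

```shell
#!/bin/bash
# Read a kernel tunable through the /proc virtual file system.
# /proc/sys/vm/swappiness corresponds to the sysctl parameter vm.swappiness.
val=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness = ${val}"
# Writing requires root and takes effect immediately, e.g.:
#   echo 10 > /proc/sys/vm/swappiness
```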

Writing to this will cause the kernel to drop clean caches, dentries and
inodes from memory, causing that memory to become free.

    To free pagecache:
        echo 1 > /proc/sys/vm/drop_caches
    To free dentries and inodes:
        echo 2 > /proc/sys/vm/drop_caches
    To free pagecache, dentries and inodes:
        echo 3 > /proc/sys/vm/drop_caches

As this is a non-destructive operation and dirty objects are not freeable,
the user should run `sync' first.

http://www.kernel.org/doc/Documentation/sysctl/vm.txt

# cat /proc/sys/vm/drop_caches
0
The default value is 0; writing 1 empties the page cache, 2 empties the dentry and inode caches, and 3 empties all of them.

# sync
# free -m
             total       used       free     shared    buffers     cached
Mem:           499        323        175          0        ...        188
-/+ buffers/cache:                   416
Swap:         2047          0       2047
# echo 3 > /proc/sys/vm/drop_caches
# free -m                  # the cached value has dropped sharply
             total       used       free     shared    buffers     cached
Mem:           499        ...        415          0          1         17
-/+ buffers/cache:                   434
Swap:         2047          0       2047

