Linux system caching mechanism


1. Caching mechanism

To improve file system performance, the kernel sets aside a portion of physical memory as a buffer cache for system operations and file data. When the kernel receives a read or write request, it first checks whether the requested data is already in the cache; if it is, the data is returned directly, and if not, the kernel accesses the disk through the driver.

Advantages of the caching mechanism: it reduces the number of system calls and lowers the frequency of CPU context switches and disk accesses.

CPU context switch: the CPU gives each process a certain amount of service time. When the time slice runs out, the kernel takes the processor back from the running process, saves that process's current running state, and then loads the next task. This procedure is called a context switch; it is essentially a switch between the currently running process and the process that is about to run.
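As a hedged illustration (vmstat is not mentioned in the original text), the rate of context switches can be observed in the "cs" column of vmstat:

vmstat 1 5    # report system statistics every second, five times; the "cs" column is context switches per second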

2. Check the buffer and memory usage

[root@localhost ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7866       7725        141         19         74       6897
-/+ buffers/cache:        752       7113
Swap:        16382         32      16350

At first glance the output shows 8 GB of total memory, with 7725 MB used and only 141 MB free. Many people read it that way, but because of the caching mechanism those figures do not reflect the actual usage. So how is the real usage calculated?

Free memory = free (141) + buffers (74) + cached (6897) = 7112 MB

Used memory = total (7866) - free memory (7112) = 754 MB

So the real free memory is 7112 MB and the real used memory is 754 MB; this is the actual usage. The "-/+ buffers/cache" line of the output reports essentially the same corrected figures.
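As a hedged illustration of the same arithmetic (assuming the older free output format shown above, where buffers and cached are separate columns; newer versions of free already report an "available" figure):

free -m | awk '/^Mem:/ {realfree = $4 + $6 + $7; print "real free:", realfree, "MB; real used:", $2 - realfree, "MB"}'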

3. The cache is divided into buffers and cached. What is the difference between them?

The kernel adjusts the size of these caches so that the system can keep physical memory usable while still handling the volume of data being read and written. Buffers is used to cache metadata and pages and can be understood as a system cache; it is used, for example, when vi opens a file. Cached is used to cache files and can be understood as a data block cache; for example, if you write a file with dd if=/dev/zero of=/tmp/test count=1 bs=1G, the data is kept in the cache, so the next time you run the same test command the write is noticeably faster.
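A hedged way to observe this with the example command above (the path /tmp/test and the 1 GB size are just the example values from the text):

free -m | grep -i '^mem'                       # note the cached value before writing
dd if=/dev/zero of=/tmp/test count=1 bs=1G     # write a 1 GB test file
free -m | grep -i '^mem'                       # cached grows by roughly the size of the file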

4. Incidentally, what is swap used for?

Swap is the swap partition, which is usually what we call virtual memory: a partition carved out of the hard disk. When physical memory runs short, the kernel releases some long-unused content from the cache (buffers/cache) and temporarily puts it into swap. In other words, swap only comes into use when physical memory plus the cache is no longer enough.
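Whether swap is actually in use can be checked directly (a hedged illustration; these commands are not in the original text):

free -m | grep -i swap      # swap total/used/free in MB
swapon -s                   # list active swap partitions or files and how much of each is used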

5. How do you release the cache memory?

5.1 Change the kernel runtime parameter directly

# Release pagecache

echo 1 > /proc/sys/vm/drop_caches

# Release dentries and inodes

echo 2 > /proc/sys/vm/drop_caches

# Release pagecache, dentries and inodes

echo 3 > /proc/sys/vm/drop_caches
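One point worth adding (not in the original text): drop_caches only frees clean cache pages, so it is common practice to flush dirty pages to disk first:

sync
echo 3 > /proc/sys/vm/drop_caches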

5.2 You can also set the kernel runtime parameter with sysctl

sysctl -w vm.drop_caches=3

Note: both of these methods take effect only temporarily. To make a setting permanent it would have to be added to the sysctl.conf file; in practice, cache clearing is usually done with a script run on a schedule, as in the sketch below.
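A hedged sketch of such a scheduled cleanup; the schedule, file path, and the value written are illustrative assumptions, not from the original:

# /etc/cron.d/drop_caches -- clear the page cache every day at 02:00
0 2 * * * root sync; echo 1 > /proc/sys/vm/drop_caches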

6. In addition, how to view CPU and I/O performance

# View CPU performance

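The original screenshot is not available. Judging from the -P option described below, it most likely showed mpstat from the sysstat package; a hedged equivalent command:

mpstat -P ALL 1 5    # per-CPU utilization, sampled every second, five samples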

# The -P option selects which CPUs are displayed: ALL for all CPUs, or a number to show only specific CPUs.

# View I/O performance

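This screenshot is also missing. Based on the -m option and the %util and await columns described below, it most likely showed iostat from the sysstat package; a hedged equivalent command:

iostat -x -m 1 5     # extended per-device I/O statistics in MB, sampled every second, five samples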

# The -m option displays values in MB; the default is KB.

# %util: when this reaches 100%, the I/O subsystem is saturated.

# await: the time a request waits in the queue, which directly affects read latency.

I/O limit: IOPS (r/s + w/s), usually around 1200. (IOPS means read/write I/O operations per second.)

I/O bandwidth: in sequential read/write mode, the theoretical value for a SAS hard disk is around 300 MB/s, and for an SSD around 600 MB/s.
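A hedged way to sanity-check sequential write bandwidth (the path and size are arbitrary example values; oflag=direct bypasses the page cache so caching does not inflate the result):

dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct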


This article is from the "Penguin" blog. Please keep this source: http://lizhenliang.blog.51cto.com/7876557/1657448
