Linux Memory Monitoring Tools

Source: reposted from the Internet


http://www.opensolution.org.cn/archives/502.html

1. free

This tool displays the system's used and available memory.


Linux normally loads frequently used data into the buffers and cached areas of virtual memory, according to certain algorithms, so that user programs can access system resources faster. In the output of free, buffers holds metadata, while cached holds actual file contents.

 

From the output of free -k we can see:

The system's total physical memory (total) is 4144656K (about 4G).

The used physical memory (the used column of the Mem row) is 3871932K (about 3.8G). Note that this figure includes the system's buffers, 152460K (about 152M), and cached, 2253060K (about 2.2G).

The used column of the -/+ buffers/cache row is 1466412K (about 1.4G), i.e. Mem used (3871932K) - Mem buffers (152460K) - Mem cached (2253060K) = 1466412K (about 1.4G). So the physical memory actually available for allocation (the free column of the -/+ buffers/cache row) is 2678244K (about 2.6G).
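The arithmetic above can be double-checked with a few lines of Python, using the figures quoted from this system's `free -k` output:

```python
# Recompute the "-/+ buffers/cache" row from the Mem row of `free -k`.
# All figures are in kilobytes, taken from the output discussed above.
total   = 4144656   # Mem: total physical memory
used    = 3871932   # Mem: used (includes buffers and cached)
buffers = 152460    # Mem: buffers (metadata)
cached  = 2253060   # Mem: cached (file contents)

# Memory actually consumed by applications:
used_app = used - buffers - cached
# Memory actually available for allocation:
free_app = total - used_app

print(used_app)   # 1466412 (~1.4G)
print(free_app)   # 2678244 (~2.6G)
```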


Shared: the man page says this column should be ignored (man free: "The shared memory column should be ignored; it is obsolete.").

The free column of the Mem row is 274220K (about 274M). This free value is actually bounded from below: it cannot drop beneath min_free_kbytes.

min_free_kbytes is used to compute the pages_min value for each lowmem zone in the system (on 32-bit x86, the zone covering physical memory between 0 and 896MB). (This is used to force the Linux VM to keep a minimum number of kilobytes free. The VM uses this number to compute a pages_min value for each lowmem zone in the system. Each lowmem zone gets a number of reserved free pages based proportionally on its size.)

The formula can be found in mm/page_alloc.c:

min_free_kbytes = sqrt(lowmem_kbytes * 16)

On this system, lowmem is 872656KB:

[root@crm_10 /root]grep LowTotal /proc/meminfo

LowTotal: 872656

min_free_kbytes = sqrt(872656 * 16) ≈ 3737
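The kernel's formula can be evaluated directly, using the LowTotal value shown above:

```python
import math

# min_free_kbytes = sqrt(lowmem_kbytes * 16), per mm/page_alloc.c
lowmem_kbytes = 872656  # LowTotal from /proc/meminfo on the system above

min_free_kbytes = round(math.sqrt(lowmem_kbytes * 16))
print(min_free_kbytes)  # 3737
```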

2. ps, top

These two tools are very similar where memory monitoring is concerned, so they are covered together:

VIRT in top corresponds to VSZ in ps: the total virtual memory used by the task (virtual memory spans both physical memory and the swap partition), including all code, data, and shared libraries, plus pages that have been swapped out to the swap partition. /* The total amount of virtual memory used by the task. It includes all code, data and shared libraries plus pages that have been swapped out. */

 

RES in top corresponds to RSS in ps: the total physical memory used by the task that has not been swapped out to the swap partition. /* resident set size, the non-swapped physical memory that a task has used */

%MEM in top: the ratio of the task's RES to total physical memory. /* Memory usage (RES). A task's currently used share of available physical memory. */
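As a minimal sketch, VSZ and RSS can be pulled out of a captured `ps aux` line and %MEM recomputed from them; the sample line and process below are made up for illustration, and MemTotal reuses the figure from the free section:

```python
# Recompute %MEM = RSS / MemTotal * 100 from a captured `ps aux` line.
# The sample line is illustrative, not from a real host.
sample = "mysql  2103  0.1  3.6 1843204 150944 ?  Sl  10:02  1:23 /usr/sbin/mysqld"

fields = sample.split()
vsz_kb = int(fields[4])   # VSZ: total virtual memory (KB), top's VIRT
rss_kb = int(fields[5])   # RSS: non-swapped physical memory (KB), top's RES

mem_total_kb = 4144656    # MemTotal of the box discussed above
pct_mem = rss_kb / mem_total_kb * 100
print(f"VSZ={vsz_kb}K RSS={rss_kb}K %MEM={pct_mem:.1f}")  # VSZ=1843204K RSS=150944K %MEM=3.6
```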

3. vmstat

The values it displays are similar to those seen with free. As a general rule of thumb, the system is fine as long as the si/so values in the swap columns do not exceed 1024.

Swap

       si: Amount of memory swapped in from disk (/s).

       so: Amount of memory swapped to disk (/s).
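That rule of thumb can be applied mechanically to captured `vmstat` output; the sample output and the 1024 threshold below follow the text above, with column positions assuming the default vmstat layout:

```python
# Flag swap activity in captured `vmstat 1` output: warn whenever si or so
# (columns 7 and 8 in the default layout) exceeds 1024. The sample is
# illustrative, not from a real host.
sample = """\
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 274220 152460 2253060    0    0    12    34  120  210  3  1 95  1  0
 2  1  40960  98304  90112 1048576 2048 3072   400   600  900 1500 20 10 60 10  0
"""

THRESHOLD = 1024
warnings = []
for line in sample.splitlines():
    fields = line.split()
    if len(fields) < 17 or not fields[0].isdigit():
        continue  # skip the two header lines
    si, so = int(fields[6]), int(fields[7])
    if si > THRESHOLD or so > THRESHOLD:
        warnings.append((si, so))

print(warnings)  # [(2048, 3072)]
```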

4. The meminfo information in VFS (/proc/meminfo):

Dirty: the amount of data that has been written to memory but not yet synced to backing storage (disk).

Slab: an allocation scheme introduced because the kernel's page-granularity allocator is a poor fit for calls that need only small amounts of memory; it is built around the concept of object pools.

Vmalloc: an allocation scheme (based on linked lists of pages) provided to make use of physically non-contiguous memory.

CommitLimit: the amount of virtual memory currently available for allocation to programs (CommitLimit is only meaningful when vm.overcommit_memory is set to 2).

CommitLimit: Based on the overcommit ratio ('vm.overcommit_ratio'), this is the total amount of memory currently available to be allocated on the system. This limit is only adhered to if strict overcommit accounting is enabled (mode 2 in 'vm.overcommit_memory'). The CommitLimit is calculated with the following formula:

    CommitLimit = ('vm.overcommit_ratio' * Physical RAM) + Swap

For example, on a system with 1G of physical RAM and 7G of swap with a 'vm.overcommit_ratio' of 30 it would yield a CommitLimit of 7.3G. For more details, see the memory overcommit documentation in vm/overcommit-accounting.
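The kernel-doc example quoted above works out as follows (note that vm.overcommit_ratio is a percentage):

```python
# CommitLimit = (vm.overcommit_ratio / 100) * physical RAM + swap.
# Only enforced under strict accounting (vm.overcommit_memory = 2).
# Figures reproduce the kernel-doc example: 1G RAM, 7G swap, ratio 30.
ram_gb  = 1.0
swap_gb = 7.0
overcommit_ratio = 30  # percent

commit_limit_gb = (overcommit_ratio / 100) * ram_gb + swap_gb
print(commit_limit_gb)  # 7.3
```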

 

Committed_AS: the total virtual memory currently allocated to programs (including memory that has been allocated to processes but not yet used by them).

Committed_AS: The amount of memory presently allocated on the system. The committed memory is a sum of all of the memory which has been allocated by processes, even if it has not been "used" by them as of yet. A process which malloc()'s 1G of memory, but only touches 300M of it will only show up as using 300M of memory even if it has the address space allocated for the entire 1G. This 1G is memory which has been "committed" to by the VM and can be used at any time by the allocating application. With strict overcommit enabled on the system (mode 2 in 'vm.overcommit_memory'), allocations which would exceed the CommitLimit (detailed above) will not be permitted. This is useful if one needs to guarantee that processes will not fail due to lack of memory once that memory has been successfully allocated.

 

 

HugePagesize:

On the x86 architecture, Linux pages memory in 4KB pages by default. Some applications manage memory themselves (databases, Java, and so on), so larger page sizes can work to their advantage (on 32-bit: 4K or 4M, with 2M pages in PAE mode; on 64-bit IA-64: 4K, 8K, 64K, 256K, 1M, 4M, 16M, 256M). Larger pages increase the reach of the entries stored in the TLB (a cache holding the mapping between linear and physical addresses), which reduces the work of translating linear addresses to physical addresses. Huge pages can be tuned via the vm.hugetlb_shm_group and vm.nr_hugepages parameters. For details, see /usr/share/doc/kernel-doc-`uname -r|cut -d- -f1`/Documentation/vm/hugetlbpage.txt (if kernel-doc is installed on your machine).

 

The intent of this file is to give a brief summary of hugetlbpage support in the Linux kernel. This support is built on top of multiple page size support that is provided by most of modern architectures. For example, IA-32 architecture supports 4K and 4M (2M in PAE mode) page sizes, IA-64 architecture supports multiple page sizes 4K, 8K, 64K, 256K, 1M, 4M, 16M, 256M. A TLB is a cache of virtual-to-physical translations. Typically this is a very scarce resource on processor. Operating systems try to make best use of limited number of TLB resources. This optimization is more critical now as bigger and bigger physical memories (several GBs) are more readily available.

 
