Computer Architecture - Memory Optimization


/proc/slabinfo   # slab allocator (kernel object) caches
/proc/buddyinfo  # free pages per order in the buddy allocator
/proc/zoneinfo   # per-zone memory statistics
/proc/meminfo    # overall memory usage summary


[root@host /]# slabtop

 Active / Total Objects (% used)    : 347039 / 361203 (96.1%)
 Active / Total Slabs (% used)      : 24490 / 24490 (100.0%)
 Active / Total Caches (% used)     : 88 / 170 (51.8%)
 Active / Total Size (% used)       : 98059.38K / 99927.38K (98.1%)
 Minimum / Average / Maximum Object : 0.02K / 0.28K / 4096.00K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
115625 115344  99%    0.10K   3125       37     12500K buffer_head
 73880  73437  99%    0.19K   3694       20     14776K dentry
 42184  42180  99%    0.99K  10546        4     42184K ext4_inode_cache
 20827  20384  97%    0.06K    353       59      1412K size-64
 16709  13418  80%    0.05K    217       77       868K anon_vma_chain
 15792  15708  99%    0.03K    141      112       564K size-32
 11267  10323  91%    0.20K    593       19      2372K vm_area_struct
 10806  10689  98%    0.64K   1801        6      7204K proc_inode_cache
  9384   5232  55%    0.04K    102       92       408K anon_vma
  7155   7146  99%    0.07K    135       53       540K selinux_inode_security
  7070   7070 100%    0.55K   1010        7      4040K radix_tree_node
  6444   6443  99%    0.58K   1074        6      4296K inode_cache
  5778   5773  99%    0.14K    214       27       856K sysfs_dir_cache
  3816   3765  98%    0.07K     72       53       288K Acpi-Operand
  2208   2199  99%    0.04K     24       92        96K Acpi-Namespace
  1860   1830  98%    0.12K     62       30       248K size-128
  1440   1177  81%    0.19K     72       20       288K size-192
  1220    699  57%    0.19K     61       20       244K filp
   660    599  90%    1.00K    165        4       660K size-1024
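The CACHE SIZE column above can be aggregated to see where slab memory is going. A minimal sketch with awk, using three sample rows embedded from the output above so it is self-contained (on a live system you could feed it `slabtop -o` output instead; reading /proc/slabinfo directly requires root):

```shell
# Sum the CACHE SIZE column (7th field, "...K") of slabtop-style rows.
awk '{ size = $7; sub(/K$/, "", size); total += size
       printf "%-20s %8sK\n", $8, size }
     END { printf "%-20s %8dK\n", "total", total }' <<'EOF'
115625 115344  99%    0.10K   3125       37     12500K buffer_head
 73880  73437  99%    0.19K   3694       20     14776K dentry
 42184  42180  99%    0.99K  10546        4     42184K ext4_inode_cache
EOF
```

For the sample rows this reports a total of 69460K, matching the sum of the three caches.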



[root@host xx]# cat /proc/meminfo | grep HugePage
AnonHugePages:      2048 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0

1. vi /etc/sysctl.conf
Add:
vm.nr_hugepages = 10

2. sysctl -p
[root@host /]# cat /proc/meminfo | grep Huge
AnonHugePages:      2048 kB
HugePages_Total:      10
HugePages_Free:       10
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
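The figures above multiply out to the actual memory reserved: HugePages_Total × Hugepagesize. A small sketch that computes this from meminfo-style lines (the sample values are embedded from the output above; on a live system, read /proc/meminfo directly):

```shell
# Reserved and free hugepage memory = page counts * Hugepagesize.
awk '/^HugePages_Total/ { total = $2 }
     /^HugePages_Free/  { free  = $2 }
     /^Hugepagesize/    { size  = $2 }
     END { printf "reserved %d kB, free %d kB\n", total*size, free*size }' <<'EOF'
HugePages_Total:      10
HugePages_Free:       10
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
EOF
```

With 10 pages of 2048 kB each, this reports 20480 kB reserved and 20480 kB still free.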

3. Use the huge pages from an application
[root@host /]# mkdir /hugepages
[root@host /]# mount -t hugetlbfs none /hugepages

[root@host /]# dd if=/dev/zero of=/hugepages/a.out bs=1M count=5
(Note that files on hugetlbfs are meant to be accessed via mmap; ordinary read/write may not be supported, so this dd mainly exercises the mount.)

Huge pages:

Hugetlbfs support is built on top of the multiple page size support provided by most modern architectures. Users can use huge page support in the Linux kernel either through the mmap system call or through the standard SysV shared memory system calls (shmget, shmat). Check the current state with: cat /proc/meminfo | grep HugePage
Improving TLB performance:

- The kernel must usually flush TLB entries upon a context switch
- Use free, contiguous physical pages:
  - automatically, via the buddy allocator (/proc/buddyinfo)
  - manually, via hugepages (not pageable)
- Linux supports large pages through the hugepages mechanism, sometimes known as bigpages, largepages, or the hugetlbfs filesystem
- Consequences: a TLB cache hit is more likely, reducing the number of PTE visits
Tuning TLB performance:

Check the size of hugepages:
  x86info -a | grep "Data TLB"
  dmesg
  cat /proc/meminfo

Enable hugepages:
  1. In /etc/sysctl.conf: vm.nr_hugepages = n
  2. As a kernel parameter passed at boot: hugepages=n

Configure hugetlbfs if needed by the application:
  The mmap system call requires that hugetlbfs is mounted:
    mkdir /hugepages
    mount -t hugetlbfs none /hugepages
  The shmat and shmget system calls do not require hugetlbfs.

 

Trace every system call made by a program:
  strace -o /tmp/strace.out -p PID
  grep mmap /tmp/strace.out

Summarize system calls:
  strace -c -p PID
or
  strace -c COMMAND

Other uses of strace:
- Investigate lock contention
- Identify problems caused by improper file permissions
- Pinpoint IO problems
Strategies for using memory

1. Reduce overhead for tiny memory objects: slab cache
   cat /proc/slabinfo

2. Reduce or defer service time for slower subsystems:
   - Filesystem metadata: buffer cache (slab cache)  # caches file metadata
   - Disk IO: page cache  # caches file data
   - Interprocess communication: shared memory
   - Network IO: buffer cache, arp cache, connection tracking

3. Considerations when tuning memory:
   - How should pages be reclaimed to avoid pressure?
   - Larger writes are usually more efficient due to re-sorting

 

Memory parameter settings:

vm.min_free_kbytes:
1. If memory is exhausted completely, the system crashes.
2. The kernel therefore keeps a reserve of free memory; when a process requests an allocation and free memory is insufficient, other pages are swapped out to swap to make enough room for the request.
Tuning vm.min_free_kbytes should only be necessary when an application regularly needs to allocate a large block of memory and then frees that same memory. It may well be the case that the system instead has too little disk bandwidth, too little CPU power, or too little memory to handle its load.

Linux provides the min_free_kbytes parameter to set the threshold at which the system starts reclaiming memory, i.e. to control how much memory is kept free. The higher the value, the earlier the kernel starts reclaiming and the more memory stays free.
http://www.cnblogs.com/itfriend/archive/2011/12/14/2287160.html

Consequences:
- Reduces service time for demand paging
- That memory is not available for other usage
- Can cause pressure on ZONE_NORMAL
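For reference, the kernel chooses its own default for min_free_kbytes as roughly sqrt(lowmem_kbytes * 16) (see init_per_zone_wmark_min in mm/page_alloc.c), clamped to a bounded range. A sketch of the arithmetic, with an assumed 4 GB of low memory:

```shell
# Approximate kernel default: min_free_kbytes ~= sqrt(lowmem_kbytes * 16).
# lowmem = 4194304 kB (4 GB) is an assumed example value.
awk 'BEGIN { lowmem = 4194304; printf "%d\n", sqrt(lowmem * 16) }'
```

For 4 GB of low memory this works out to 8192 kB, i.e. an 8 MB reserve by default.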

 

A Linux server's memory usage exceeded its threshold and triggered an alert.

Troubleshooting:

First, observe system memory usage with the free command:

             total       used       free     shared    buffers     cached
Mem:      24675796   24587144      88652          0     357012    1612488
-/+ buffers/cache:   22617644    2058152
Swap:      2096472     108224    1988248

Total memory is 24675796 kB, of which 22617644 kB is used and only 2058152 kB remains free. Next, sorting top by memory usage (Shift+M) shows that the largest process occupies only 18 GB and every other process is negligible. So where did the remaining memory, roughly 4 GB (22617644 kB minus 18 GB), go? cat /proc/meminfo reveals nearly 4 GB (3688732 kB) of Slab memory:

......
Mapped:          25212 kB
Slab:          3688732 kB
PageTables:      43524 kB
......

Slab holds caches of kernel data structures. slabtop shows how this memory is being used:

    OBJS   ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
13926348 13926348 100%    0.21K 773686       18   3494744K dentry_cache
  334040   262056  78%    0.09K   8351       40     33404K buffer_head
  151040   150537  99%    0.74K  30208        5    120832K ext3_inode_cache

Most of it (about 3.5 GB) is the dentry_cache.

Resolution:

1. Write to /proc/sys/vm/drop_caches to release the cache memory held by Slab (from the official drop_caches documentation):

Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
To free pagecache:                      echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:            echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes: echo 3 > /proc/sys/vm/drop_caches
As this is a non-destructive operation, and dirty objects are not freeable, the user should run "sync" first in order to make sure all cached objects are freed. This tunable was added in 2.6.16.

2. Method 1 requires root privileges. If you are not root but have sudo rights, you can use the sysctl command instead:

$ sync
$ sudo sysctl -w vm.drop_caches=3
$ sudo sysctl -w vm.drop_caches=0   # restore drop_caches

Afterwards, sudo sysctl -a | grep drop_caches confirms that the setting took effect.
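To confirm the diagnosis numerically, the dentry_cache's share of Slab in the case above works out as follows (both figures are taken from the outputs shown in the case):

```shell
# dentry_cache (3494744 kB) as a fraction of total Slab (3688732 kB),
# using the figures from the case study above.
awk 'BEGIN { slab = 3688732; dentry = 3494744
             printf "dentry_cache = %.1f%% of Slab\n", 100 * dentry / slab }'
```

At roughly 95% of all Slab memory, the dentry cache is clearly the component worth dropping (echo 2, dentries and inodes) rather than the page cache.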

 
