Memcache memory allocation policy and performance (usage) status checks

Source: Internet
Author: User
Tags: memcached

Objective

I have been using memcache for a while, but I have never been clear about its internals, such as how it uses memory and what its status looks like after it has been running for some time. I keep checking and then forgetting, so I am collating this article to make it easy to look things up myself. The article does not cover installation or operation; interested readers can look at the articles written earlier or search Google.

1. Parameters
memcached -h (Memcached 1.4.14)

-p <num>      TCP port to listen on, default 11211
-U <num>      UDP port to listen on, default 11211; 0 turns it off
-s <file>     UNIX socket to listen on
-a <mask>     access mask for the UNIX socket, in octal (default: 0700)
-l <addr>     IP address to listen on; if not set, memcached listens on all addresses of this machine
-d            run as a daemon
-u <user>     run as this user; required if memcached is started as root
-m <num>      maximum memory to use, in MB, default 64 MB
-M            disable the LRU policy: return an error when memory is exhausted instead of evicting items
-c <num>      maximum number of simultaneous connections, default 1024
-v            verbose (print errors/warnings while in event loop)
-vv           very verbose (also print client commands/responses)
-vvv          extremely verbose (also print internal state transitions)
-h            help information
-i            print memcached and libevent licenses
-P <file>     save the PID to the specified file
-f <factor>   chunk size growth factor, default 1.25
-n <bytes>    minimum space allocated per item for key + value + flags (item structure), default 48 bytes
-L            try to use large memory pages, which can reduce memory waste and improve performance
-t <num>      number of threads, default 4; since memcached uses non-blocking I/O, more threads do not help much
-R <num>      maximum number of requests handled per event for a single connection, default 20
-C            disable the CAS command (removes the per-item version counter and reduces overhead)
-b <num>      set the backlog queue limit (default: 1024)
-B <proto>    binding protocol: one of ascii, binary, or auto (default)
-I <size>     override the slab page size, default 1 MB, minimum 1k, maximum 128 MB

The commonly used parameters above are the ones to focus on. A normal startup example:

/usr/bin/memcached -m 64 -p 11212 -u nobody -c 2048 -f 1.1 -I 1024 -d -l 10.211.55.9

Connect with telnet:

telnet 10.211.55.9 11212
Trying 10.211.55.9...
Connected to 10.211.55.9.
Escape character is '^]'.

All the settings currently in effect can be viewed with the command: stats settings

2. Understanding the memory storage mechanism of memcached

By default, memcached uses a mechanism called the slab allocator to allocate and manage memory. Before this mechanism appeared, memory was allocated simply by calling malloc and free for every record. That approach, however, leads to memory fragmentation and increases the burden on the operating system's memory manager; in the worst case the operating system ends up slower than the memcached process itself. The slab allocator was born to solve this problem.

The basic principle of the slab allocator is to carve the allocated memory into pages of a predetermined size (1 MB by default, configurable with the -I parameter at startup), split each page into blocks (chunks) of a fixed size, and group chunks of the same size into slab classes. When memory is needed, memcached carves out a new page and assigns it to the slab class that requires it. Once a page has been allocated, it is never reclaimed or reassigned until a restart, which is how the memory fragmentation issue is resolved.

Page

The memory space allocated to a slab class, 1 MB by default. After being assigned to a slab class, the page is split into chunks according to that class's chunk size.

Chunk

The memory space used to cache records.

Slab Class

A group of chunks of a specific size.

Instead of lumping data of every size together, memcached pre-divides the data space into a series of slab classes, each responsible for a range of data sizes. Based on the size of the data it receives, memcached selects the slab class that fits it best. Each slab class keeps a list of its free chunks; memcached picks a chunk from that list and caches the data in it.

Each slab class only stores data that is larger than the previous class's chunk size and no larger than its own. For example, a 100-byte string will be stored in slab class 2 (88-112 bytes). The ranges the slab classes are responsible for are unequal: by default the chunk size of each class is 1.25 times that of the previous one, and this growth ratio can be changed with the -f parameter.

The slab allocator solved the original memory fragmentation problem, but the new mechanism also brought new problems to memcached. A chunk is where memcached actually stores cached data, and its size is fixed by the slab class that manages it. All chunks within a slab class are the same size; for example, the chunk size of slab class 1 is 88 bytes and that of slab class 2 is 112 bytes. Because memory is handed out in these fixed lengths, it cannot be used with full efficiency: caching 100 bytes of data in a 128-byte chunk wastes the remaining 28 bytes. Also note that a chunk does not hold only the value of the cached object; it also holds the key, the expire time, flags and other details. So when you set a 1-byte item, it takes far more than 1 byte of space to store.

memcached lets you specify a growth factor at startup (with the -f option) to control, to some extent, the difference between slab classes. The default value is 1.25.
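To make the growth factor concrete, below is a minimal Python sketch (not code from memcached itself) that reproduces the chunk-size progression and picks the slab class for a given item size. The 96-byte first chunk, 8-byte alignment and 1 MB page size are assumptions chosen to roughly match the memcached-tool output shown later in this article; a real instance may start differently depending on -n and -I.

# Rough model of how memcached derives slab-class chunk sizes from the growth factor (-f).
def chunk_sizes(first_chunk=96, factor=1.25, page_size=1024 * 1024, align=8):
    """Return the chunk size of each slab class, rounded up to the alignment."""
    sizes = []
    size = first_chunk
    while size < page_size:
        sizes.append(size)
        size = int(size * factor)
        size = (size + align - 1) // align * align
    return sizes

def slab_class_for(item_size, sizes):
    """Pick the first slab class whose chunk can hold the whole item."""
    for class_id, size in enumerate(sizes, start=1):
        if item_size <= size:
            return class_id, size
    return None   # bigger than the largest chunk: cannot be cached

sizes = chunk_sizes()
print(sizes[:6])                   # [96, 120, 152, 192, 240, 304]
print(slab_class_for(130, sizes))  # a 130-byte item (key + value + overhead) -> (3, 152)

With the default factor of 1.25 this yields roughly 40 slab classes; a larger -f gives fewer, coarser classes with more waste per chunk, while a smaller -f gives more classes and finer granularity.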

The slab memory allocation process is as follows:

memcached specifies its maximum memory with the -m parameter at startup, but it does not claim all of that memory the moment it starts; the memory is assigned to the slab classes gradually. When a new piece of data needs to be stored, memcached first selects a suitable slab class and then checks whether that class has a free chunk. If it does, the data is stored there directly. If it does not, the slab class requests memory in page units: regardless of the data size, a 1 MB page is assigned to that slab class (the page is never recycled or reassigned and belongs to that class permanently). After obtaining the page, the class slices the page's memory into chunks of its chunk size, turning it into an array of chunks, and picks one of them to store the data. If no free page is available, LRU eviction takes place within that slab class, not across the whole memcache instance.
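The storage path just described can be summarised in a short Python model. This is only an illustration of the logic in the paragraph above, not memcached's actual implementation; the SlabClass type, the store function and the memory bookkeeping are invented for the sketch.

# Simplified model of the storage path (illustrative only).
class SlabClass:
    """One slab class: a fixed chunk size plus its own free list and its own LRU."""
    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.free_chunks = []   # chunks freed by deletes/expiry, or sliced from new pages
        self.lru = []           # chunks in use, ordered oldest -> newest

def store(item, slab, memory_used, memory_limit, page_size=1024 * 1024):
    """Place a serialized item into its slab class, mirroring the three cases in the text."""
    if slab.free_chunks:                               # 1. a free chunk exists: reuse it
        chunk = slab.free_chunks.pop()
    elif memory_used + page_size <= memory_limit:      # 2. otherwise claim a whole 1 MB page;
        memory_used += page_size                       #    the page belongs to this class forever
        slab.free_chunks = [bytearray(slab.chunk_size)
                            for _ in range(page_size // slab.chunk_size)]
        chunk = slab.free_chunks.pop()
    else:                                              # 3. memory exhausted: evict by LRU within
        chunk = slab.lru.pop(0)                        #    THIS slab class only, never globally
    chunk[:len(item)] = item                           # copy the item into the chunk
    slab.lru.append(chunk)                             # the new item is now the most recently used
    return memory_used

slab = SlabClass(chunk_size=112)
used = store(b"a 100-byte value...", slab, memory_used=0, memory_limit=64 * 1024 * 1024)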

The above describes the memory allocation strategy of Memcache, and the following shows how to view the usage status of Memcache.

3. Memcache status and performance view

① Hit ratio: the stats command

Run the stats command and interpret the output as follows.

get_hits is the number of reads that hit the cache, and get_misses is the number of failed reads, i.e. attempts to read cache data that does not exist. That is:

hit rate = get_hits / (get_hits + get_misses)

The higher the hit rate, the better the cache is working. In practice, though, this figure is not a pure measure of data hits: sometimes a get is issued just to check whether a key exists, in which case a miss is the correct result. The ratio is also a cumulative value over every request since memcached started, so it cannot reflect what happened in a particular time window; to troubleshoot memcached performance problems you need more detailed numbers. Still, a high hit rate generally reflects healthy use of memcached, and a sudden drop in the hit rate can reveal that a large amount of cache has been lost.
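Because the counters are cumulative, a quick way to look at the hit rate over a time window is to sample stats twice and diff the counters. Below is a minimal Python sketch under the assumptions of this article's examples (instance at 10.211.55.9:11212; the 60-second interval is an arbitrary choice).

import socket
import time

def read_stats(host, port):
    """Send the plain 'stats' command over TCP and return the counters as a dict."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"stats\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            part = s.recv(4096)
            if not part:
                break
            data += part
    stats = {}
    for line in data.decode().splitlines():
        if line.startswith("STAT "):
            _, key, value = line.split(" ", 2)
            stats[key] = value
    return stats

def hit_rate(hits, misses):
    total = hits + misses
    return hits / total if total else 0.0

before = read_stats("10.211.55.9", 11212)
time.sleep(60)                                 # sample again after one minute
after = read_stats("10.211.55.9", 11212)

print("cumulative hit rate: %.4f"
      % hit_rate(int(after["get_hits"]), int(after["get_misses"])))
print("last-minute hit rate: %.4f"
      % hit_rate(int(after["get_hits"]) - int(before["get_hits"]),
                 int(after["get_misses"]) - int(before["get_misses"])))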

② Observing the items in each slab class: the stats items command

Main parameter descriptions:

outofmemory - the number of times the slab class was unable to allocate space for a new item. This means you are running with -M, or an eviction failed
number - the total number of items currently stored in this slab class
age - the age, in seconds, of the oldest item in this slab class
evicted - the number of times an unexpired item had to be evicted from the LRU
evicted_time - the number of seconds since the most recently evicted item was last accessed; 0 means an item that was still being accessed was just evicted. Use this to judge how recently used the evicted data was
evicted_nonzero - the number of times an item that had an explicit (non-zero) expiration time set had to be evicted from the LRU before it expired

Because of memcached's memory allocation policy, once the total memory used by memcached reaches the configured maximum, the pages each slab class can use are fixed. If more data then needs to be stored, memcached falls back on the LRU policy to evict data and make room. The LRU policy is not applied across all slab classes, only to the slab class the new data belongs to: for example, if a new item should go into slab class 3, LRU eviction happens only within slab class 3. These evictions can be observed with stats items.

Pay attention to evicted_time: a large eviction count does not by itself mean memcached is overloaded. Sometimes a cache is written with an expiration time of 0, so the items never expire on their own; if memory fills up while new data keeps arriving, items that have not been used for a long time will be evicted even though they have not expired, and that can be perfectly acceptable. Convert evicted_time into a concrete duration and check whether it is within what you can accept. For example, if you consider two days in the cache acceptable and the most recently evicted item had gone more than three days without being accessed, the pressure on this slab class is actually acceptable. But if the most recently evicted item had been accessed only 20 seconds before being evicted, there is no doubt that the slab class is overloaded.

From the stats items output above, you can read the current status of slab class 1 of this Memcache instance:

It holds 305,816 items; the oldest item has been stored for 21,529 seconds; 95,336,839 unexpired items have been evicted by the LRU, 95,312,220 of which had an explicit expiration time set; evicted_time is 0, meaning the most recently evicted item had been accessed just before it was evicted; and outofmemory is 0, because the instance was started without the -M parameter.
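Following the guidance above, a small script can flag the slab classes that are genuinely under pressure: those with evictions whose most recently evicted item had only been idle for a short time. A Python sketch under the same assumptions as before (host, port and the two-day threshold are arbitrary example values):

import socket

def stat_lines(host, port, command):
    """Send one ASCII stats command and return the STAT lines of the reply."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(command.encode() + b"\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            part = s.recv(4096)
            if not part:
                break
            data += part
    return [l for l in data.decode().splitlines() if l.startswith("STAT ")]

def items_by_slab(host, port):
    """Group the 'stats items' counters by slab class id."""
    slabs = {}
    for line in stat_lines(host, port, "stats items"):
        _, key, value = line.split(" ", 2)        # e.g. STAT items:29:evicted_time 0
        _, slab_id, name = key.split(":", 2)
        slabs.setdefault(int(slab_id), {})[name] = int(value)
    return slabs

ACCEPTABLE_IDLE = 2 * 24 * 3600   # evicting items idle for 2+ days is considered acceptable

for slab_id, s in sorted(items_by_slab("10.211.55.9", 11212).items()):
    if s.get("evicted", 0) and s.get("evicted_time", 0) < ACCEPTABLE_IDLE:
        print("slab %d: %d evictions, last victim idle only %ds -> under pressure"
              % (slab_id, s["evicted"], s["evicted_time"]))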

③ Observing each slab class: the stats slabs command

If stats items reveals an abnormal slab class, you can then use stats slabs to check whether that class has a memory allocation problem.

Main parameter descriptions:

Property name - Description
chunk_size - the size of each chunk in this slab class
chunks_per_page - the number of chunks each page can hold
total_pages - the total number of pages assigned to this slab class; with the default 1 MB page size this gives the size of the slab class
total_chunks - the maximum number of chunks this slab class can hold; should equal chunks_per_page * total_pages
used_chunks - the number of chunks already occupied
free_chunks - the number of chunks freed by deleted or expired data but not yet reused
free_chunks_end - the number of newly allocated chunks that have not been used yet

Note total_pages here: it is the total number of pages currently allocated to the slab class, and if the default page size has not been changed it is also the total amount of data (in MB) the slab class can cache. If evictions in a slab class are severe, make sure its page count is not too small. There is also a formula:

total_chunks = used_chunks + free_chunks + free_chunks_end

In addition, stats slabs reports two more properties:

Property name - Description
active_slabs - the total number of active slab classes
total_malloced - the total amount of memory allocated, in bytes. This shows how much memory memcached has actually claimed; once it reaches the configured limit (compare with maxbytes in stats settings), no new pages will be assigned.
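To check the formula above and see how much memory each slab class has actually claimed, stats slabs can be parsed the same way. A self-contained Python sketch under the same assumptions (host and port from the earlier examples); newer memcached versions may no longer report free_chunks_end, so it is treated as 0 when absent:

import socket

def stat_lines(host, port, command):
    """Send one ASCII stats command and return the STAT lines of the reply."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(command.encode() + b"\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            part = s.recv(4096)
            if not part:
                break
            data += part
    return [l for l in data.decode().splitlines() if l.startswith("STAT ")]

slabs = {}
for line in stat_lines("10.211.55.9", 11212, "stats slabs"):
    _, key, value = line.split(" ", 2)
    if ":" not in key:                 # skip the global lines: active_slabs, total_malloced
        continue
    slab_id, name = key.split(":", 1)
    slabs.setdefault(int(slab_id), {})[name] = int(value)

for slab_id, s in sorted(slabs.items()):
    accounted = s["used_chunks"] + s["free_chunks"] + s.get("free_chunks_end", 0)
    print("slab %2d: %4d pages (~%d MB with the default page size), chunk %6d B, %8d/%8d chunks used, formula %s"
          % (slab_id, s["total_pages"], s["total_pages"],
             s["chunk_size"], s["used_chunks"], s["total_chunks"],
             "ok" if s["total_chunks"] == accounted else "off"))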

④ Statistics on object counts by size: the stats sizes command

Note: this command locks the instance and pauses request processing while it runs. It shows the number of items grouped by size; for example, you can see how many items fall into the 96-byte size that corresponds to slab class 1.

⑤ Viewing and exporting all keys: the stats cachedump command

Everyone wants a way to see which keys are in the cache, similar to the keys command in Redis. Memcache can do this, but it takes two steps.

Step one: list the items.

stats items  -- command
...
STAT items:29:number 228
STAT items:29:age 34935
...
END

Step two: fetch the keys by slab class id. The id above is 29; add a second parameter for how many entries to list, where 0 lists everything.

stats cachedump 29 0  -- command
ITEM 26457202 [49440 b; 1467262309 s]
...
ITEM 30017977 [45992 b; 1467425702 s]
ITEM 26634739 [48405 b; 1467437677 s]
END
-- 228 keys in total; a key's value can then be fetched with: get 26634739

How do you export the keys? This can be done with echo and nc:

echo "stats cachedump 0" NC 10.211.55.9 11212 >/home/zhoujy/memcache.log

When exporting, note that the cachedump command returns at most about 2 MB of data per call; this limit is hard-coded in the memcached source and can only be changed by modifying the code before compiling.
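To dump the keys of every slab class in one go (rather than running one cachedump per id by hand), the two steps can be scripted. A Python sketch under the same assumptions as the earlier examples; cachedump is a debug command subject to the 2 MB per-call limit mentioned above, so the dump is best-effort rather than a complete key list, and the output file name is an arbitrary choice.

import socket

def command_lines(host, port, command):
    """Run one ASCII command and return the response lines, without the trailing END."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(command.encode() + b"\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            part = s.recv(4096)
            if not part:
                break
            data += part
    return data.decode().splitlines()[:-1]

host, port = "10.211.55.9", 11212

# 1. find the slab class ids that currently hold items
slab_ids = set()
for line in command_lines(host, port, "stats items"):
    slab_ids.add(int(line.split(":")[1]))          # STAT items:29:number 228 -> 29

# 2. dump the keys of each class (0 = no explicit limit, still capped at ~2 MB per call)
with open("memcache_keys.log", "w") as out:
    for slab_id in sorted(slab_ids):
        for line in command_lines(host, port, "stats cachedump %d 0" % slab_id):
            key = line.split()[1]                  # ITEM 26634739 [48405 b; 1467437677 s]
            out.write("%d %s\n" % (slab_id, key))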

⑥ Another monitoring tool: memcached-tool, a tool written in Perl (memcache_tool.pl). The beginning of the script:

#!/usr/bin/perl
#
# memcached-tool:
#   stats/management tool for memcached.
#
# Author:
#   Brad Fitzpatrick <[email protected]>
#
# Contributor:
#   Andrey Niakhaichyk <[email protected]>
#
# License:
#   public domain.  I give up all rights to this
#   tool.  modify and copy at will.
#

use strict;
use IO::Socket::INET;

my $addr = shift;
my $mode = shift || "display";
my ($from, $to);

if ($mode eq "display") {
    undef $mode if @ARGV;
} elsif ($mode eq "move") {
    $from = shift;
    $to   = shift;
    undef $mode if $from < 6 || $from > 17;
    undef $mode if $to   < 6 || $to   > 17;
    print STDERR "ERROR: parameters out of range\n\n" unless $mode;
} elsif ($mode eq 'dump') {
    ;
} elsif ($mode eq 'stats') {
    ;
} elsif ($mode eq 'settings') {
    ;
} elsif ($mode eq 'sizes') {
    ;
} else {
    undef $mode;
}

undef $mode if @ARGV;

die "Usage: memcached-tool
./memcached-tool 10.211.55.9:11212  -- execution

  #  Item_Size  Max_age   Pages   Count   Full?    Evicted Evict_Time  OOM
  1      96B    20157s           305816     yes   95431913          0    0
  2     120B    16049s      40   349520     yes  117041737          0    0
  3     152B    17574s           269022     yes   92679465          0    0
  4     192B    18157s           234823     yes   78892650          0    0
  5     240B    18722s           227188     yes   72908841          0    0
  6     304B    17971s           251777     yes   85556469          0    0
  7     384B    17881s      81   221130     yes   75596858          0    0
  8     480B    17760s           152880     yes   53553607          0    0
  9     600B    18167s           101326     yes   34647962          0    0
 10     752B    18518s            72488     yes   24813707          0    0
 11     944B    18903s            57720     yes   16707430          0    0
 12     1.2K    20475s            38940     yes   11592923          0    0
 13     1.4K    21220s            25488     yes    8232326          0    0
 14     1.8K    22710s            19740     yes    6232766          0    0
 15     2.3K    22027s            14883     yes    4952017          0    0
 16     2.8K    23139s            11913     yes    3822663          0    0
 17     3.5K    23495s             8928     yes    2817520          0    0
 18     4.4K    22611s             6670     yes    2168871          0    0
 19     5.5K    23652s      29     5336     yes    1636656          0    0
 20     6.9K    21245s             3822     yes    1334189          0    0
 21     8.7K    22794s             2596     yes     783620          0    0
 22    10.8K    22443s             1786     yes     514953          0    0
 23    13.6K    21385s             1350     yes     368016          0    0
 24    16.9K    23782s              960     yes     254782          0    0
 25    21.2K    23897s              672     yes     183793          0    0
 26    26.5K    27847s      13      494     yes     117535          0    0
 27    33.1K    27497s              420     yes      83966          0    0
 28    41.4K    28246s      14      336     yes      63703          0    0
 29    51.7K    33636s              228     yes      24239          0    0

Explanation of the columns:

Column - Meaning
# - slab class number
Item_Size - chunk size
Max_age - age of the oldest record in the LRU
Pages - number of pages assigned to the slab class
Count - number of records (items/keys) in the slab class
Full? - whether the slab class has run out of free chunks
Evicted - number of times an unexpired item was evicted from the LRU
Evict_Time - the evicted_time stat: seconds since the most recently evicted item was last accessed (0 means a still-active item was just evicted)
OOM - number of out-of-memory errors (only non-zero when running with the -M parameter)
4. Summary

Many of the problems we have run into when using memcached in practice came from not understanding its memory allocation mechanism, and I hope this article gives everyone a preliminary understanding of how memcached allocates memory. Although NoSQL products such as Redis have replaced memcache in many systems, plenty of projects still depend on memcache, so it is still worth learning in order to solve problems. New content will be added from time to time.

