Let's take a look at what is called memory fragmentation:
1. If you use the C library's malloc/free directly to request and release memory from the operating system,
2. then over many cycles of allocation and release, tiny memory fragments form that can no longer be used.
3. This phenomenon of memory that is idle but unusable is known as memory fragmentation.
Therefore, to mitigate the waste caused by memory fragmentation, memcached manages its memory with a slab allocator.
Let's take a look at the principle of the slab allocator:
1. memcached divides its memory into a number of slab classes (each slab page is 1 MB).
2. Each page is cut into chunks of a fixed size; different slab classes use different chunk sizes.
3. When content needs to be saved, its size is checked and a suitable slab class is chosen for it.
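The page-and-chunk layout above can be sketched in a few lines. This is an illustrative calculation, not memcached source code; the 122-byte chunk size is taken from the example later in this article.

```python
# Illustrative sketch: how one 1 MB slab page is carved into
# fixed-size chunks for a single slab class.
PAGE_SIZE = 1024 * 1024  # each slab class grows in 1 MB pages


def chunks_per_page(chunk_size: int) -> int:
    """Number of fixed-size chunks that fit in one 1 MB page;
    the remainder at the end of the page is simply unused."""
    return PAGE_SIZE // chunk_size


# e.g. a slab class with 122-byte chunks:
print(chunks_per_page(122))  # 8594 chunks per page
```

The leftover bytes at the tail of each page (here 1048576 - 8594 * 122 = 108 bytes) are too small for another chunk and go unused.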
Let's take a look at the official diagram, which clearly shows how memcached partitions its memory:
How does the system choose the right chunk?
1. memcached selects the chunk group (slab class) whose chunk size best fits the size of the data received.
2. memcached keeps a list of free chunks in each slab class; it picks a free chunk from that list and caches the data in it.
Looking at the diagram above: when you cache a 100-byte piece of data, memcached stores it in the smallest chunk that can hold it.
Warning:
If you have 100 bytes of content to save, but every chunk in the 122-byte slab class is full,
memcached will NOT look for a larger class such as 144 bytes to store it.
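The best-fit selection described above can be sketched as follows. The chunk-size list here is hypothetical (chosen to match the 122/144 example in the text), not memcached's actual default sizes:

```python
# Hypothetical slab-class chunk sizes, mirroring the 122/144 example.
CHUNK_SIZES = [96, 122, 144, 184]


def pick_slab_class(item_size: int):
    """Return the smallest chunk size that can hold the item.

    Note: real memcached does NOT fall back to a larger class when the
    best-fit class is full; it evicts an item within that class instead.
    """
    for size in CHUNK_SIZES:
        if item_size <= size:
            return size
    return None  # larger than the biggest chunk: cannot be stored


print(pick_slab_class(100))  # 122
```

A 100-byte item lands in the 122-byte class, never in 144, even when 122 is full.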
Memory waste caused by fixed-size chunks:
Because the slab allocator hands out chunks of fixed sizes, a given item usually does not fill its chunk exactly, so some space is wasted.
For example, when we cache 100 bytes of data in a 122-byte chunk, the remaining 22 bytes are wasted.
This chunk waste cannot be eliminated completely; it can only be alleviated.
Developers can measure the typical lengths of the items their site caches and configure reasonable chunk sizes for the slab classes.
Unfortunately, we cannot set the chunk sizes directly, but we can use a parameter to adjust how fast the chunk size grows from one slab class to the next: the growth factor.
Growth factor tuning:
At startup, memcached can specify the growth factor with the -f option, which controls, to some extent, the size differences between slab classes. The default value is 1.25. However, before this option existed, the factor was fixed at 2, the so-called "powers of 2" strategy.
Let's run memcached with growth factors of 2 and 1.25 and compare the effect:
memcached -f 2 -vvv
slab class   1: chunk size    128 perslab  8192
slab class   2: chunk size    256 perslab  4096
slab class   3: chunk size    512 perslab  2048
slab class   4: chunk size   1024 perslab  1024
As you can see, starting from the 128-byte class, each class's chunk size is double the previous one.
Now look at the output with -f 1.25:
memcached -f 1.25 -vvv
slab class   1: chunk size     88 perslab 11915
slab class   2: chunk size    112 perslab  9362
slab class   3: chunk size    144 perslab  7281
slab class   4: chunk size    184 perslab  5698
By contrast, with f=2 the chunk size grows very quickly from one slab class to the next, and in some cases that wastes a lot of memory.
Therefore, we should carefully measure the sizes of our cached items and choose a reasonable growth factor.
Note: with f=1.25, you can see from the output that the size ratio between some adjacent slab classes is not exactly 1.25. These apparent rounding errors are deliberate: they keep the chunk sizes byte-aligned.
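The alignment behavior in the note can be sketched like this. This is an assumption-based model of memcached's size computation (each size is the previous size times the factor, rounded up to 8-byte alignment), not its actual source:

```python
ALIGN = 8  # assumed chunk alignment in bytes


def chunk_sizes(first: int, factor: float, n: int):
    """Generate n chunk sizes starting from `first`, growing by
    `factor` and rounding each size up to 8-byte alignment.
    The rounding is why adjacent ratios deviate from the factor."""
    sizes, size = [], first
    for _ in range(n):
        if size % ALIGN:
            size += ALIGN - size % ALIGN  # round up to multiple of 8
        sizes.append(size)
        size = int(size * factor)
    return sizes


print(chunk_sizes(88, 1.25, 4))  # [88, 112, 144, 184]
```

With first=88 and factor=1.25 this reproduces the 88/112/144/184 sequence from the output above: for example, 88 * 1.25 = 110, which is rounded up to 112.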
memcached's lazy deletion of expired data:
1. When a value expires, it is not removed from memory, so when you run stats, curr_items still counts it.
2. Its chunk is treated as a free chunk that a new value may occupy.
3. On get, memcached checks whether the value has expired; if so, it returns nothing, and curr_items is decremented.
In other words, expiration only hides the data from the user; the memory is not reclaimed the moment the value expires. This is called lazy expiration. The benefit: it saves the CPU time and cost of scanning for expired items.
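The three steps above can be sketched as a minimal cache. This is a hypothetical illustration of lazy expiration, not memcached's implementation:

```python
import time


class LazyCache:
    """Toy cache: expired entries stay in memory until a get() notices."""

    def __init__(self):
        self._store = {}   # key -> (value, expire_at)
        self.curr_items = 0

    def set(self, key, value, ttl):
        self._store[key] = (value, time.time() + ttl)
        self.curr_items += 1

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expire_at = entry
        if time.time() >= expire_at:
            # expiry is only detected here, on access (lazy expiration);
            # no background scan ever runs
            del self._store[key]
            self.curr_items -= 1
            return None
        return value
```

Until someone calls get() on an expired key, the entry still counts toward curr_items, exactly as described above.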
The LRU eviction mechanism used by memcached:
Take the 122-byte chunk example again: if all the 122-byte chunks are full and a new value (say 120 bytes long) arrives, who gets squeezed out?
1. memcached uses the LRU eviction mechanism here.
2. (In operating-system memory management, FIFO and LRU are common eviction policies.)
3. LRU: Least Recently Used
4. FIFO: First In, First Out
Principle: each time a chunk is accessed, a counter is updated, so memcached can determine which item was least recently used, and that is the one that gets kicked out.
Note: even a key stored with a permanent (never-expire) lifetime can be kicked out this way! In other words, "permanent" data can still be evicted.
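The eviction within one full slab class can be sketched with an ordered map. This is a hypothetical illustration of the LRU policy, not memcached's actual data structures:

```python
from collections import OrderedDict


class LRUSlab:
    """Toy model of one slab class: a fixed number of chunks,
    evicting the least recently used item when all are taken."""

    def __init__(self, capacity):
        self.capacity = capacity       # number of chunks in the class
        self._items = OrderedDict()    # least recently used first

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)   # mark as most recently used
        return self._items[key]

    def set(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        elif len(self._items) >= self.capacity:
            # class is full: evict the least recently used item,
            # even if it was stored with no expiry
            self._items.popitem(last=False)
        self._items[key] = value
```

For example, in a two-chunk class holding "a" and "b", reading "a" and then storing "c" evicts "b", the least recently used item.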
Some parameter limits in memcached:
1. Key length: at most 250 bytes (the binary protocol supports up to 65,536 bytes).
2. Value limit: 1 MB; for typical content such as news lists, this is enough.
3. Memory limit: on a 32-bit system, at most about 2 GB per instance.
4. If you have 30 GB of data to cache, do not run it as a single 30 GB instance (don't put all your eggs in one basket); it is generally recommended to run multiple instances, on different machines or on different ports of the same machine.
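Spreading keys over several instances is usually done on the client side. A minimal sketch using simple modulo hashing follows; the instance addresses are made up, and real clients often use consistent hashing instead so that fewer keys move when a node is added or removed:

```python
import hashlib

# Hypothetical instance list: different machines, or different
# ports on the same machine.
INSTANCES = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]


def pick_instance(key: str) -> str:
    """Map a key to one instance deterministically via modulo hashing."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return INSTANCES[int(digest, 16) % len(INSTANCES)]
```

The same key always maps to the same instance, so gets and sets for it go to one place while the total data spreads across all instances.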
Memory management and deletion mechanism of MC