Memcache storage mechanism and instruction summary

Source: Internet
Author: User
Tags: delete key, memcached, memory usage

1. Memcache Basic Introduction

Memcached is a high-performance distributed memory cache server. Its general purpose is to reduce the number of database accesses by caching query results, thereby improving the speed and scalability of dynamic web applications.

Diagram of how Memcache operates:

Characteristics of Memcache

Memcached, as a high-speed distributed cache server, has the following characteristics.

1. Simple protocol based on C/S architecture

The memcached server and client communicate using a simple text-line-based protocol rather than a complex format such as XML. As a result, you can store and retrieve data on memcached simply by using Telnet.
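Because the protocol is plain text lines, its framing is easy to sketch. The helper names below are illustrative, not part of any client library; they only build the byte strings a client would send over the socket:

```python
# Sketch of memcached's text protocol framing (helper names are assumptions).
# A real "set" request looks like: "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n"
def build_set(key, value, flags=0, exptime=0):
    data = value.encode()
    header = f"set {key} {flags} {exptime} {len(data)}\r\n".encode()
    return header + data + b"\r\n"

def build_get(key):
    return f"get {key}\r\n".encode()

if __name__ == "__main__":
    print(build_set("mykey", "hello"))
    print(build_get("mykey"))
```

Sending `build_set(...)` over a TCP socket to port 11211 and reading the `STORED` reply is essentially all a minimal client needs to do.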

2. Event handling based on libevent

libevent is a library that wraps event-handling mechanisms, such as Linux's epoll and the kqueue of BSD-family operating systems, behind a unified interface. It maintains O(1) performance even as the number of connections to the server increases. Because memcached uses libevent, it achieves high performance on Linux, BSD, Solaris, and other operating systems.

3. Built-in memory storage

To improve performance, data saved in memcached is stored in memcached's built-in in-memory storage. Because the data exists only in memory, restarting memcached or the operating system causes all data to disappear. In addition, once memory usage reaches the configured limit, unused cache entries are automatically evicted according to the LRU (Least Recently Used) algorithm. memcached itself is designed as a cache server, so permanent data persistence was not a major design concern.

4. Distributed, but the servers do not communicate with each other

Although memcached is a "distributed" cache server, there is no distributed functionality on the server side: memcached instances do not communicate with one another to share information. So how is the data distributed? That depends entirely on the client implementation.
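A minimal sketch of what "distribution is implemented by the client" means, with illustrative server addresses and simple modulo hashing (real clients typically use consistent hashing, so fewer keys are remapped when a node is added or removed):

```python
# Client-side distribution sketch: the client, not the servers, decides
# which memcached node owns a key. Server addresses are illustrative.
import zlib

SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def server_for(key, servers=SERVERS):
    # Simple modulo hashing over a CRC32 of the key; deterministic, so every
    # client that hashes the same way agrees on which node holds the key.
    h = zlib.crc32(key.encode()) & 0xFFFFFFFF
    return servers[h % len(servers)]
```

The servers never talk to each other; as long as all clients use the same hash, reads and writes for a key always land on the same node.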

2. Understanding Memcache Memory Storage

2.1 Storage mechanism

Memcache uses a slab allocator to store data. This mechanism reuses allocated memory, which solves the memory-fragmentation problem. Before this mechanism was introduced, memory was allocated simply by calling malloc and free for every record. That approach causes memory fragmentation and burdens the operating system's memory manager; in the worst case, the operating system can become slower than the memcached process itself.

2.2 Slab Allocator fundamentals

1. According to predetermined sizes, allocated memory is divided into pages (1 MB per page by default), each page is divided into fixed-size blocks (chunks), and chunks of the same size are grouped together (a chunk collection, or slab class);

2. When storing data, memcached looks for the chunk class closest to the size of the value and stores the data there;

3. Once memory has been allocated as a page, it is never reclaimed or reassigned before a restart; this is what resolves memory fragmentation. (Allocated memory is not freed, but reused.)
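The page-to-chunk arithmetic in step 1 can be sketched as follows (`slice_page` is a hypothetical helper, not memcached code):

```python
# Sketch of slab page slicing: a 1 MB page is cut into equal-size chunks
# for one slab class; leftover bytes at the end of the page go unused.
PAGE_SIZE = 1024 * 1024  # default page size: 1 MB

def slice_page(chunk_size, page_size=PAGE_SIZE):
    n_chunks = page_size // chunk_size   # how many chunks fit in the page
    leftover = page_size % chunk_size    # tail bytes too small for a chunk
    return n_chunks, leftover
```

For example, a page sliced into 96-byte chunks yields 10922 chunks with 64 bytes left over at the end of the page.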

2.3 Understanding four terms

(The image analysis below may help in understanding these.)

Slab

Represents the maximum size of data that can be stored; it is essentially just a size-range definition (in layman's terms, a bucket for data up to a certain size). By default, the sizes of two adjacent slabs grow by a factor of 1.25. For example, slab 1 holds up to 96 bytes and slab 2 up to 120 bytes.

Page

The unit of memory space allocated to a slab, 1 MB by default. A page is cut into chunks according to the slab's chunk size.

Chunk

The memory space in which a cached record is stored.

Slab Class

A collection of chunks of a specific size.

2.4 Slab memory allocation process

memcached specifies its maximum memory with the -m parameter at startup, but it does not grab all of that memory immediately; memory is assigned to the slabs gradually. When a new item is to be stored, memcached first selects a suitable slab, then checks whether that slab has an idle chunk. If it does, the item is stored there directly. If not, the slab requests memory in page units: regardless of the item's size, a 1 MB page is assigned to that slab (the page is never reclaimed or reassigned, and belongs to that slab from then on). After obtaining the page, the slab slices the page's memory into chunks of its chunk size, producing an array of chunks, and one chunk from that array is chosen to store the data. If no free page is available, LRU eviction happens within that slab class, not across the entire memcache.

Image analysis diagram (my drawing is not very professional, ha):

2.5 The specific Memcache storage process

memcached does not store data of all sizes together. Instead, the data space is pre-divided into a series of slabs, and each slab is responsible only for data within a certain size range. Based on the size of the data it receives, memcached selects the slab that best fits it. If that slab still has a list of idle chunks, a chunk is selected from the list and the data is cached in it; if not, a new page (1 MB) is requested. (See the image above.)

Detailed analysis: from the above we understand the role of a slab. The growth factor between slab classes is 1.25 by default. So why are some adjacent sizes not exactly 1.25 times apart? The answer is rounding: you can start memcached with an integer -f value to test an integer growth factor and observe the effect (explained in detail below).

For example, a slab class whose chunks are 112 bytes stores values that are larger than 88 bytes (the previous class) and at most 112 bytes.
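This size-range rule can be sketched with a hypothetical chunk-size list matching the 88/112-byte example above (`choose_chunk` is an illustrative helper, not memcached code):

```python
# Each value goes into the smallest chunk that still fits it, so a class
# effectively covers the range (previous_size, own_size].
def choose_chunk(value_len, chunk_sizes):
    return next(c for c in sorted(chunk_sizes) if c >= value_len)

CHUNKS = [88, 112, 144]  # illustrative class sizes
```

Values of 89 to 112 bytes all land in the 112-byte class; a value of exactly 88 bytes still fits in the 88-byte class.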

2.6 Slab Allocator shortcomings

The slab allocator solved the original memory-fragmentation problem, but the new mechanism also brought a new problem to memcached.

The problem is that allocated memory cannot be used fully, because chunks come in fixed lengths. For example, caching 100 bytes of data in a 128-byte chunk wastes the remaining 28 bytes.

2.7 Tuning with the -f growth factor

The growth factor is the multiplier between two adjacent chunk sizes. memcache's default for this parameter is 1.25, but let's first test with an integer factor of 2 and observe the effect.

From the output we can see that chunk sizes grow by a factor of 2.

Now let's look at the effect of -f 1.25.

Why can't a 1.25 growth factor guarantee that every pair of adjacent chunk sizes differs by exactly 1.25 times?

Because these small deviations are introduced deliberately to keep chunk sizes byte-aligned.

Comparing the two outputs shows that with a factor of 1.25 the gaps between classes are much smaller than with a factor of 2, which makes it more suitable for caching records of a few hundred bytes.

Therefore, when using memcached, it is best to estimate the expected average length of your data and adjust the growth factor to get the most appropriate settings.
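As a rough sketch of why the factor matters, the hypothetical helper below counts how many slab classes each factor produces below 1 KB; a factor of 2 gives only a few coarse steps, while 1.25 gives much finer ones (real memcached also rounds sizes for alignment, so its exact list differs slightly):

```python
# Enumerate slab class sizes for a given growth factor (illustrative model).
def classes_under(limit, first=96, factor=1.25):
    sizes, size = [], first
    while size <= limit:
        sizes.append(size)
        size = int(size * factor)  # truncation stands in for alignment rounding
    return sizes

coarse = classes_under(1024, factor=2)     # 96, 192, 384, 768
fine = classes_under(1024, factor=1.25)    # 96, 120, 150, 187, ...
```

With factor 2 a 97-byte value must sit in a 192-byte chunk (almost 50% waste); with factor 1.25 it fits in a 120-byte chunk.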

3. Memcache deletion mechanism

From the above we know that allocated memory is never released and recycled. When a record times out, the client can no longer see it, and its storage space becomes available for reuse.

3.1 Lazy expiration

memcached does not internally monitor whether records have expired. Instead, it looks at a record's timestamp at get time and checks whether the record has expired. This technique is called lazy expiration. As a result, memcached spends no CPU time on expiration monitoring.
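The lazy-expiration idea can be sketched in a few lines (`LazyCache` is illustrative, not memcached's implementation; the clock is injectable so the behavior is testable):

```python
# Lazy expiration sketch: expiry is only checked when a key is read.
# No background thread ever scans for stale records.
import time

class LazyCache:
    def __init__(self, clock=time.time):
        self.clock = clock
        self.data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=0):
        expires_at = self.clock() + ttl if ttl else None
        self.data[key] = (value, expires_at)

    def get(self, key):
        item = self.data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and self.clock() >= expires_at:
            del self.data[key]   # reclaimed only now, at read time
            return None
        return value
```

Until someone calls `get`, an expired record keeps sitting in memory; that is exactly the trade-off that saves CPU time.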

3.2 LRU deletion

memcached preferentially reuses the space of records that have timed out. Even so, there may not be enough space when appending a new record; in that case space is freed using the Least Recently Used (LRU) mechanism. As the name implies, this mechanism deletes the "least recently used" records. So when memcached runs out of memory (that is, when it cannot obtain new space from the slab class), it searches for records that have not been used recently and gives their space to the new record. From a practical caching perspective, this model is ideal.
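A minimal sketch of LRU eviction using Python's OrderedDict (illustrative only; as noted above, real memcached applies LRU per slab class, not globally):

```python
# LRU cache sketch: reads refresh recency, and when the cache is full the
# least recently used entry is evicted to make room.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # evict least recently used
        self.data[key] = value
```

Reading a key protects it from eviction for a while, which is why rarely-read records are the first to go.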

In some cases, however, the LRU mechanism can cause trouble. memcached can be started with the "-M" parameter to disable LRU.

Note that at startup the lowercase "-m" option specifies the maximum memory size; if no value is given, the default of 64 MB is used.

When started with "-M", memcached returns an error once memory is exhausted. That said, memcached is a cache, not persistent storage, so using LRU is recommended.

4. Memcache startup parameters

(The parameters shown in bold in the original are the more commonly used ones.)

-p <num>

TCP listening port (default: 11211)

-U <num>

UDP listening port (default: 11211; 0 disables UDP)

-d

Run in daemon mode

-u <username>

Specify the user to run as

-m <num>

Maximum memory to use, in megabytes (default: 64 MB)

-c <num>

Maximum number of simultaneous connections (default: 1024)

-v

Output warnings and error messages

-vv

Also print client requests and responses

-h

Print help information

-l <ip>

Bind to the given address (by default, any IP address can connect)

-P <file>

Save the PID to <file>

-i

Print memcached and libevent license information

-M

Disable the LRU policy; return an error when memory runs out

-f <factor>

Chunk-size growth factor (default: 1.25)

-n <bytes>

Minimum space allocated per item (key + flags + value; default: 48 bytes)

-L

Enable large memory pages to reduce memory waste and improve performance

-I <size>

Override the slab page size (default: 1 MB; range 1 KB to 128 MB)

-t <num>

Number of worker threads (default: 4). Since memcached uses non-blocking I/O, adding many more threads has little effect

-R <num>

Maximum number of requests handled per event per connection (default: 20)

-C

Disable the CAS command (suppresses the version counter, reducing per-item overhead)

-b <num>

Set the connection backlog queue limit (default: 1024)

-B <proto>

Binding protocol: one of ascii, binary, or auto (default)

-s <file>

UNIX socket path to listen on

-a <mask>

Access mask for the UNIX socket, in octal (default: 0700)

5. Memcache instruction summary

Each entry below gives the instruction, a description, and an example.

get <key>

Returns the value stored at the key.

Example: get mykey

set <key> <flags> <exptime> <bytes>

Adds the key if it does not exist, updates it if it does; returns STORED.

Example: set mykey 0 60 5

add <key> <flags> <exptime> <bytes>

Adds a key-value pair only if the key does not already exist; returns STORED / NOT_STORED.

Example: add mykey 0 60 5

replace <key> <flags> <exptime> <bytes>

Replaces the value of an existing key; returns STORED on success, NOT_STORED if the key does not exist.

Example: replace mykey 0 60 5

append <key> <flags> <exptime> <bytes>

Appends data to the existing value; returns STORED on success, NOT_STORED on failure.

Example: append mykey 0 60 5

prepend <key> <flags> <exptime> <bytes>

Prepends data to the existing value; returns STORED on success, NOT_STORED on failure.

Example: prepend mykey 0 60 5

incr <key> <num>

Increments the numeric value stored at the key by num and returns the new value; fails if the stored value is not numeric.

Example: incr mykey 1

decr <key> <num>

Same as above, but decrements.

Example: decr mykey 1

delete <key>

Deletes a key-value pair; returns DELETED on success, NOT_FOUND on failure.

Example: delete mykey

flush_all [timeout]

Invalidates all key-value pairs (after timeout seconds, if given). Items are not actually deleted, so memcache still occupies the memory.

Example: flush_all 20

version

Returns the version number.

Example: version

verbosity

Sets the logging level.

Example: verbosity

quit

Closes the connection.

Example: quit

stats

Returns general Memcache statistics.

Example: stats

stats slabs

Returns information about each slab created while Memcache has been running.

Example: stats slabs

stats items

Returns the number of items in each slab, and the age of the oldest item in seconds.

Example: stats items

stats malloc

Shows memory allocation data.

Example: stats malloc

stats detail [on|off|dump]

on: enables detailed operation recording; off: disables it; dump: displays the detailed record (counts of get, set, hit, and del for each key).

Examples: stats detail on / stats detail off / stats detail dump

stats cachedump <slab_id> <limit_num>

Shows the first limit_num keys in slab slab_id.

Example: stats cachedump 1 2

stats reset

Clears the statistics.

Example: stats reset

stats settings

Shows configuration settings.

Example: stats settings

stats sizes

Shows the number of items for each fixed chunk size.

Example: stats sizes

Note: the identifier (<flags>) is an arbitrary unsigned integer (written in decimal) that is stored along with the data and returned together with get.

PS: I have been thinking a lot lately about my future direction and feeling a little lost, unable to study properly. I need to adjust my mindset as soon as possible and not be impatient; haste makes waste.

Resources:

1. "Memcached principle and usage in detail", by heiyeluren (night passer-by), http://blog.csdn.net/heiyeshuwu

2. "A Complete Analysis of memcached", by Masahiro Nagano and Toru Maesaka, translated by Charlee

3. "Memcache memory allocation policy and performance (usage) status check", by jyzhou
