Memcached: a Distributed Memory Object Caching System

Source: Internet
Author: User
Tags: memcached, stats, perl, script

What is memcached?

Memcached is software developed by Brad Fitzpatrick of Danga Interactive for LiveJournal. It has become a key component for improving the scalability of many web services, including Mixi, Hatena, Facebook, Vox, and LiveJournal.

Many web applications save their data in an RDBMS, from which the application server reads it for display in the browser. However, as the data volume grows and access becomes concentrated, the burden on the RDBMS increases, database response deteriorates, and the website's display latency rises.

This is where memcached comes in. Memcached is a high-performance distributed memory cache server. Its typical purpose is to reduce the number of database accesses by caching database query results, thereby speeding up dynamic web applications and improving scalability.


Figure 1 General Purpose of memcached

In essence it is not very complicated: it is a database cache system implemented in software.


Memcached features

As a high-speed distributed cache server, memcached has the following features.
Simple Protocol
Libevent-based event processing
Built-in memory storage
Memcached servers do not communicate with each other (distribution is handled by clients)

Simple Protocol
Communication between the memcached server and its clients uses a simple line-based text protocol rather than complex formats such as XML. Because of this, you can store and retrieve data on memcached using telnet. Here is an example.

$ telnet localhost 11211
Connected to localhost.localdomain ( ).
Escape character is '^]'.
set foo 0 0 3   (save command)
bar             (data)
STORED          (result)
get foo         (get command)
VALUE foo 0 3   (response header)
bar             (data)
END             (end of response)

The protocol is documented in protocol.txt, included in the memcached source code. You can also refer to the following URL.
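Because the protocol is plain text, building and parsing its lines is straightforward. The following sketch (in Python rather than the article's Perl, purely illustrative and not part of any real client library; the helper names are made up) shows the framing of a set request and a VALUE response line:

```python
def build_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    """Build a memcached text-protocol 'set' request: a CRLF-terminated
    command line followed by the data block and a trailing CRLF."""
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def parse_value_line(line: bytes) -> tuple[str, int, int]:
    """Parse a 'VALUE <key> <flags> <bytes>' response line into its fields."""
    parts = line.decode().split()
    assert parts[0] == "VALUE"
    return parts[1], int(parts[2]), int(parts[3])
```

Feeding these helpers the example session above reproduces its lines exactly, which is what makes the protocol easy to debug over telnet.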



Libevent-based event processing

Libevent is a program library that wraps event-handling mechanisms such as Linux's epoll and BSD's kqueue behind a unified interface, providing O(1) performance even as the number of connections to the server grows. Memcached uses this libevent library to achieve high performance on Linux, BSD, Solaris, and other operating systems. For more on event handling, see Dan Kegel's "The C10K problem".

Libevent: /~ Provos/libevent/

The C10K problem:


Built-in memory storage

To improve performance, data stored in memcached is kept in memcached's built-in in-memory storage. Because the data exists only in memory, restarting memcached or the operating system makes all data disappear. In addition, once memory usage reaches the configured limit, unused entries are automatically evicted according to the LRU (Least Recently Used) algorithm. Memcached itself is a server designed for caching, so data permanence is deliberately not considered. Memory storage is covered in more detail later in this series.
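The eviction policy can be illustrated with a toy LRU cache (a Python sketch, far simpler than memcached's real slab-based LRU implementation; the class and method names are invented):

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache that evicts the least recently used entry when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as recently used
        return self.items[key]

    def set(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the oldest entry
```

For example, with capacity 2, storing a and b, touching a, then storing c evicts b, because a was used more recently.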


Memcached servers do not communicate with each other

Although memcached is called a "distributed" cache server, the server itself has no distribution functionality: memcached servers do not communicate with each other to share information. How, then, is distribution implemented? It depends entirely on the client. This series will also cover memcached's distribution in a later article.

Figure 2 distributed memcached
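Since the servers never coordinate, each client must map keys to servers by itself, and every client must compute the same mapping. A minimal sketch of such client-side selection (Python, illustrative only; real clients typically use CRC32-based hashing or, better, consistent hashing to survive server changes):

```python
import zlib

def pick_server(key: str, servers: list[str]) -> str:
    """Map a key to one server deterministically.

    This simple modulo scheme means adding or removing a server remaps
    most keys; consistent hashing reduces that churn, at the cost of
    more complexity on the client side.
    """
    return servers[zlib.crc32(key.encode()) % len(servers)]
```

As long as every client uses the same server list and hash, a key always lands on the same server, which is all the "distribution" memcached needs.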

Next, we will briefly introduce how to use memcached.


Install memcached

Memcached is easy to install.

Memcached supports many platforms, including:
Solaris (memcached 1.2.5 or later)
Mac OS X
It can also be installed on Windows. Here, Fedora Core 8 is used for the description.


Install memcached

To run memcached, the libevent library must be installed first. A ready-made RPM package exists for Fedora 8 and can be installed with the yum command.

$ sudo yum install libevent-devel

The memcached source code can be downloaded from the memcached website. The latest version as of this writing is 1.2.5. Fedora 8 also ships a memcached RPM, but its version is old; since building from source is not difficult, the RPM is not used here.

Download memcached:

The memcached installation is the same as that of common applications. You can use configure, make, and make install.

$ wget
$ tar zxf memcached-1.2.5.tar.gz
$ cd memcached-1.2.5
$ ./configure
$ make
$ sudo make install
By default, memcached is installed in /usr/local/bin.


Start memcached

Enter the following command from the terminal to start memcached.

$ /usr/local/bin/memcached -p 11211 -m 64m -vv
slab class   1: chunk size     88 perslab 11915
slab class   2: chunk size    112 perslab  9362
slab class   3: chunk size    144 perslab  7281
(...)
slab class  38: chunk size 391224 perslab     2
slab class  39: chunk size 489032 perslab     2
<23 server listening
<24 send buffer was 110592, now 268435456
<24 server listening (udp)
<24 server listening (udp)
<24 server listening (udp)
<24 server listening (udp)

The debugging information shown here tells us that memcached started in the foreground, listening on TCP port 11211, with a maximum memory usage of 64 MB. Most of the debugging output concerns the storage layout.

To start memcached as a background daemon, simply run:

$ /usr/local/bin/memcached -p 11211 -m 64m -d

The memcached startup options used here are as follows.

Option description

-p the TCP port to listen on; the default is 11211

-m maximum memory size in MB; the default is 64 MB

-vv start in very verbose mode, printing debugging information and errors to the console

-d start as a daemon in the background

The above four are the most commonly used startup options; there are many others, which can be displayed with the -h option. Many options change memcached's behavior, so reading through the list is recommended.


Connect with a client

Clients for memcached have been implemented in many languages, including Perl and PHP. The languages listed on the memcached website alone include:

Perl, PHP, Python, Ruby, C#, C/C++, and Lua.

Memcached client API:

Here we introduce how to connect to memcached through Cache::Memcached, the Perl library used by Mixi.

Using Cache::Memcached

Perl memcached clients on CPAN include:

Cache::Memcached

Cache::Memcached::Fast

Cache::Memcached::libmemcached

among other modules. Cache::Memcached is the work of Brad Fitzpatrick, the creator of memcached, and is probably the most widely used memcached client module.


Connecting to memcached with Cache::Memcached

The source code below is an example of connecting to the memcached instance just started, using Cache::Memcached.

#!/usr/bin/perl

use strict;
use warnings;
use Cache::Memcached;

my $key     = "foo";
my $value   = "bar";
my $expires = 3600; # 1 hour

my $memcached = Cache::Memcached->new({
    servers            => ["127.0.0.1:11211"],
    compress_threshold => 10_000
});

$memcached->add($key, $value, $expires);

my $ret = $memcached->get($key);
print "$ret\n";

Here, the memcached server's IP address and an option are passed to Cache::Memcached->new to create an instance. Commonly used Cache::Memcached options are as follows.

Option description

servers specifies memcached servers and ports as an array reference

compress_threshold size threshold above which values are compressed before saving

namespace specifies a prefix prepended to every key

In addition, Cache::Memcached can serialize complex Perl data via the Storable module, so hashes, arrays, and objects can be stored in memcached directly.


Save data

There are three methods for saving data to memcached:

add

replace

set

They are used in the same way:

my $add     = $memcached->add('key', 'value', 'expires');

my $replace = $memcached->replace('key', 'value', 'expires');

my $set     = $memcached->set('key', 'value', 'expires');

An expiration time (in seconds) can be specified when saving data to memcached. If no expiration is specified, memcached keeps the data until it is evicted by the LRU algorithm. The three methods differ as follows:

Method description

add saves only when no data with the same key exists

replace saves only when data with the same key already exists

set always saves, unlike add and replace
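The three save methods differ only in their precondition on the key. As a rough sketch (in Python rather than the article's Perl; the ToyStore class is purely illustrative and not a real client API), their semantics can be modeled as:

```python
class ToyStore:
    """Toy model of memcached's add/replace/set save semantics."""
    def __init__(self):
        self.data = {}

    def add(self, key, value):
        """Save only if the key does not yet exist."""
        if key in self.data:
            return False
        self.data[key] = value
        return True

    def replace(self, key, value):
        """Save only if the key already exists."""
        if key not in self.data:
            return False
        self.data[key] = value
        return True

    def set(self, key, value):
        """Save unconditionally."""
        self.data[key] = value
        return True
```

The boolean return mirrors the STORED/NOT_STORED responses a real server would send.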

Get Data

You can use the get and get_multi methods to obtain data.

my $val = $memcached->get('key');

my $val = $memcached->get_multi('key1', 'key2', 'key3', 'key4', 'key5');

get_multi retrieves multiple records at once: it fetches multiple keys in a single round trip, which can be dozens of times faster than calling get in a loop.

Delete data

Data is deleted with the delete method, which has a unique feature.

$memcached->delete('key', 'blocking time (seconds)');

This deletes the data for the key given by the first parameter. The second parameter specifies a time during which new data cannot be saved under the same key; this can be used to prevent the cache from being repopulated with incomplete data. Note, however, that set ignores the block and stores data as usual.

Add and subtract operations

A specific key on memcached can be used as a counter.

my $ret = $memcached->incr('key');

$memcached->add('key', 0) unless defined $ret;

Increment and decrement are atomic operations, but if no initial value exists, the key is not automatically initialized to 0. You should therefore check for an error and initialize the key if necessary. Also note that the server does not check for overflow past 2^32.
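The check-then-initialize idiom above can be sketched as follows (a Python toy model, not the real client; here incr imitates the server's behaviour of failing on a missing key and wrapping at 2^32, and the retry after initialization is an assumed variant of the Perl idiom):

```python
class ToyCounter:
    """Toy model of memcached's incr semantics."""
    def __init__(self):
        self.data = {}

    def incr(self, key, delta=1):
        """Fail (return None) if the key is missing; wrap at 2**32."""
        if key not in self.data:
            return None
        self.data[key] = (self.data[key] + delta) % 2**32
        return self.data[key]

    def add(self, key, value):
        """Initialize only if the key is absent, like memcached's add."""
        self.data.setdefault(key, value)

def bump(store, key):
    """Increment a counter, initializing it to 0 on first use."""
    ret = store.incr(key)
    if ret is None:          # no initial value: initialize, then retry
        store.add(key, 0)
        ret = store.incr(key)
    return ret
```

Using add (rather than set) for initialization matters: if another client initialized the key concurrently, add fails harmlessly instead of resetting the count.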



This article has briefly introduced memcached, its installation, and the use of the Perl client Cache::Memcached. As you can see, memcached is easy to use.

Next, we will explain memcached's internal structure. Understanding how memcached works internally shows how it can be used to speed up web applications.



Understanding memcached memory storage:
Memcached deletion mechanism and development direction:


Memcached Comprehensive Analysis - 2. Understanding memcached's memory storage: the Slab Allocation mechanism, which organizes memory for reuse

By default, recent versions of memcached use a mechanism called the slab allocator to allocate and manage memory. Before this mechanism existed, memory was allocated by simply calling malloc and free for every record. That approach, however, causes memory fragmentation and increases the load on the operating system's memory manager; in the worst case, the operating system becomes slower than the memcached process itself. The slab allocator was born to solve this problem.

Next, let's look at how the slab allocator works. Below is the goal of the slab allocator as stated in the memcached documentation:

The primary goal of the slabs subsystem in memcached was to eliminate memory fragmentation issues totally by using fixed-size memory chunks coming from a few predetermined size classes.

In other words, the basic idea of the slab allocator is to divide allocated memory into blocks of predetermined sizes, completely avoiding the memory fragmentation problem.

The principle of slab allocation is quite simple: divide the allocated memory into chunks of various sizes, and group chunks of the same size together (Figure 1).

Figure 1 Structure of slab allocation

In addition, Slab allocator can reuse allocated memory. That is to say, the allocated memory is not released, but reused.

Slab allocation terminology


Page

Memory area allocated to a slab. The default is 1 MB; after being assigned to a slab, it is split into chunks according to the slab's chunk size.

Chunk

Memory area used to cache a record.

Slab class

A group of chunks of a particular size.

Principle of caching records in Slab

The following describes how memcached selects a slab for data sent by a client and caches it in a chunk.

Memcached chooses the slab class whose chunk size best fits the size of the received data (Figure 2). It keeps a list of free chunks for each slab class, picks a chunk from that list, and caches the data in it.

Figure 2 method for selecting a group for storing records
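Slab selection amounts to finding the smallest chunk size that fits the item. A sketch of this lookup (Python, illustrative only; the chunk size list is the f=1.25 progression shown later in this article, and the function name is invented):

```python
import bisect

# Chunk sizes of the first 10 slab classes (default growth factor 1.25),
# taken from the verbose output shown later in this article.
CHUNK_SIZES = [88, 112, 144, 184, 232, 296, 376, 472, 592, 744]

def pick_slab_class(item_size: int) -> int:
    """Return the 1-based slab class with the smallest chunk that fits."""
    i = bisect.bisect_left(CHUNK_SIZES, item_size)
    if i == len(CHUNK_SIZES):
        raise ValueError("item too large for any slab class")
    return i + 1
```

Because the size list is sorted, a binary search finds the class; real memcached precomputes this mapping at startup.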

The slab allocator has disadvantages as well as advantages. The following describes its shortcomings.

Disadvantages of Slab allocator

Slab allocator solves the original memory fragmentation problem, but the new mechanism also brings new problems to memcached.

The problem is that because memory is handed out in fixed-length chunks, allocated memory cannot always be used efficiently. For example, caching 100 bytes of data in a 128-byte chunk wastes the remaining 28 bytes (Figure 3).

Figure 3 usage of chunk Space

There is no perfect solution to this problem, but the documentation records a fairly effective approach.

The most efficient way to reduce the waste is to use a list of size classes that closely matches (if that's at all possible) common sizes of objects that the clients of this particular installation of memcached are likely to store.

That is, if the common sizes of the data the clients will send are known in advance, or if only data of similar sizes will be cached, using a list of size classes tailored to those sizes can reduce waste.

Unfortunately, such specialized tuning is not yet possible; we can only look forward to it in future versions. We can, however, adjust the spacing between slab class sizes. The growth factor option is described next.

Use growth factor for Tuning

Memcached controls the spacing between slab chunk sizes, to some extent, with a growth factor specified at startup (the -f option). The default value is 1.25; before this option appeared, the factor was fixed at 2, the so-called "powers of 2" policy.

Let's start memcached in verbose mode with the old setting:


$ memcached -f 2 -vv

The verbose output after startup looks like this:

slab class  1: chunk size    128 perslab 8192
slab class  2: chunk size    256 perslab 4096
slab class  3: chunk size    512 perslab 2048
slab class  4: chunk size   1024 perslab 1024
slab class  5: chunk size   2048 perslab  512
slab class  6: chunk size   4096 perslab  256
slab class  7: chunk size   8192 perslab  128
slab class  8: chunk size  16384 perslab   64
slab class  9: chunk size  32768 perslab   32
slab class 10: chunk size  65536 perslab   16
slab class 11: chunk size 131072 perslab    8
slab class 12: chunk size 262144 perslab    4
slab class 13: chunk size 524288 perslab    2

As shown, starting from the 128-byte group, each group's chunk size is double the previous one. The problem with this setting is that the gaps between slab sizes are large, so memory can be badly wasted. This is why the growth factor option was added two years ago.

Now look at the output with the current default setting (f = 1.25); for space reasons, only the first 10 groups are shown:

slab class  1: chunk size  88 perslab 11915
slab class  2: chunk size 112 perslab  9362
slab class  3: chunk size 144 perslab  7281
slab class  4: chunk size 184 perslab  5698
slab class  5: chunk size 232 perslab  4519
slab class  6: chunk size 296 perslab  3542
slab class  7: chunk size 376 perslab  2788
slab class  8: chunk size 472 perslab  2221
slab class  9: chunk size 592 perslab  1771
slab class 10: chunk size 744 perslab  1409

The gaps between groups are much smaller than with a factor of 2, making this setting more suitable for caching records of a few hundred bytes. You may also notice that the sizes deviate slightly from an exact 1.25 progression; these deviations are intentional, to keep chunk sizes byte-aligned.
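The progression of chunk sizes is a simple geometric series. The sketch below (Python, illustrative only; real memcached additionally rounds sizes up for byte alignment, which is why its f=1.25 output deviates from a pure progression) reproduces the factor-2 series above and shows the kind of waste a large factor causes:

```python
def chunk_sizes(first: int, factor: float, n: int) -> list[int]:
    """Generate n chunk sizes starting at `first`, each `factor` times
    the previous one (ignoring memcached's alignment rounding)."""
    sizes, size = [], float(first)
    for _ in range(n):
        sizes.append(int(size))
        size *= factor
    return sizes

# With factor 2, a 1025-byte item just misses the 1024-byte chunk and
# must occupy a 2048-byte chunk, wasting 1023 bytes.
wasted = 2048 - 1025
```

With factor 1.25 the same item lands in a much closer size class, which is the whole point of the smaller default factor.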

When introducing memcached into a product, or when deploying it with the default values, it is best to estimate the expected average data size and adjust the growth factor to the most appropriate setting. Memory is a precious resource, and wasting it is a shame.

Next, we introduce how to view information such as slab utilization using memcached's stats command.

View the internal status of memcached

Memcached has a command named stats that reports various internal statistics. It can be run in many ways; telnet is the simplest:


$ telnet <hostname> <port>

For more information, see protocol.txt in the memcached software package.


$ telnet localhost 11211
Trying ::1...
Connected to localhost.
Escape character is '^]'.
stats
STAT pid 481
STAT uptime 16574
STAT time 1213687612
STAT version 1.2.5
STAT pointer_size 32
STAT rusage_user 0.102297
STAT rusage_system 0.214317
STAT curr_items 0
STAT total_items 0
STAT bytes 0
STAT curr_connections 6
STAT total_connections 8
STAT connection_structures 7
STAT cmd_get 0
STAT cmd_set 0
STAT get_hits 0
STAT get_misses 0
STAT evictions 0
STAT bytes_read 20
STAT bytes_written 465
STAT limit_maxbytes 67108864
STAT threads 4
END
quit

In addition, installing libmemcached, a client library for C/C++, also installs the memstat command. It is easy to use and can obtain the same information as telnet from multiple servers at once, with fewer steps.


$ memstat --servers=server1,server2,server3,...

Libmemcached can be obtained from the following address:

    • Http://
View slabs usage

memcached-tool, a Perl script written by Brad, makes it easy to check slab usage (it formats memcached's statistics into an easy-to-read form). The script can be obtained from the following address:

    • Http://

The usage is extremely simple:


$ memcached-tool <hostname>:<port> <option>

You do not need to specify an option to view slabs usage. Use the following command:


$ memcached-tool <hostname>:<port>

The obtained information is as follows:

 #  item_size  max_age    1MB_pages  count     full?
 1  104 B      1394292 s  1215       12249628  yes
 2  136 B      1456795 s    52         400919  yes
 3  176 B      1339587 s    33         196567  yes
 4  224 B      1360926 s   109         510221  yes
 5  280 B      1570071 s    49         183452  yes
 6  352 B      1592051 s    77         229197  yes
 7  440 B      1517732 s    66         157183  yes
 8  552 B      1460821 s    62         117697  yes
 9  696 B      1521917 s   143         215308  yes
10  872 B      1695035 s   205         246162  yes
11  1.1 kB     1681650 s   233         221968  yes
12  1.3 kB     1603363 s   241         183621  yes
13  1.7 kB     1634218 s    94          57197  yes
14  2.1 kB     1695038 s    75          36488  yes
15  2.6 kB     1747075 s    65                 yes
16  3.3 kB     1760661 s    78          24167  yes

The meaning of each column is:

Column     Description
#          slab class number
item_size  chunk size
max_age    age of the oldest record in the LRU
1MB_pages  number of pages allocated to the slab
count      number of records in the slab
full?      whether the slab has run out of free chunks

The information obtained from this script is very convenient for tuning and is strongly recommended.



