Memcached is slow at storing a single key when the value is too large, and plain memcached is not suitable for every scenario.

Source: Internet
Author: User
Tags: CAS, CPU usage, virtual private server

Today someone asked me: how does memcached perform when storing large values -- 10KB, 100KB, 1MB?
I replied: not well; it gets very slow.
They asked: why?
I couldn't answer, so I looked up some material.

Things to pay attention to when using memcached:

1. Basic settings of memcached
1) Start the memcached server

# /usr/local/bin/memcached -d -m 10 -u root -l 192.168.0.200 -p 12000 -c 256 -P /tmp/memcached.pid

The -d option starts memcached as a daemon.
-m is the amount of memory allocated to memcached, in megabytes; 10MB here.
-u is the user memcached runs as; root here.
-l is the IP address to listen on; if the machine has multiple addresses, specify one -- here the server address 192.168.0.200.
-p is the TCP port memcached listens on; 12000 here, preferably a port above 1024.
-c is the maximum number of concurrent connections; the default is 1024, set to 256 here -- tune it according to your server's load.
-P is the path of the memcached PID file; saved here as /tmp/memcached.pid.

2. Business scenarios where memcached applies

1) If the site serves dynamic pages with heavy traffic, the load on the database will be high. Because most database requests are reads, memcached can significantly reduce the database load.

2) If the database server's load is low but its CPU usage is high, you can cache computed results (computed objects) and rendered page templates (rendered templates).

3) Use memcached to cache session data and temporary data to reduce write operations on the database.

4) Cache small files that are accessed frequently.

5) Cache the results of web services (services in the generic HTTP sense, not the SOAP "Web Services" stack; translator's note) or RSS feeds.

3. Scenarios where memcached is not suitable

1) The cached object is larger than 1MB

Memcached itself was not designed to handle large media files or to stream huge binary blobs.
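Since oversized values are simply rejected (or silently degrade performance), it helps to check value size before attempting a store. Below is a minimal sketch; `fits_in_memcached` is a hypothetical helper, not part of any client library, and 1MB is memcached's default item-size limit.

```python
# Hypothetical pre-flight check: reject values over memcached's
# default 1 MB item limit before attempting to store them.
MAX_ITEM_SIZE = 1024 * 1024  # memcached's default item size limit

def fits_in_memcached(value: bytes, max_size: int = MAX_ITEM_SIZE) -> bool:
    """Return True if the serialized value fits within the item limit."""
    return len(value) <= max_size

print(fits_in_memcached(b"x" * 1000))               # small value: True
print(fits_in_memcached(b"x" * (2 * 1024 * 1024)))  # 2 MB value: False
```

In a real application the check would run on the serialized bytes (after pickling/JSON encoding), since that is what actually gets stored.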

2) The key is longer than 250 characters

3) The hosting environment does not allow running the memcached service

If the application is hosted on a low-end virtual private server, virtualization technologies such as VMware are not a good fit for running memcached. Memcached needs to take over and control large regions of memory; if the memory memcached manages is swapped out by the OS or hypervisor, its performance suffers.

4) The application runs in an insecure environment

Memcached does not provide any security mechanism; anyone who can reach the port can access it, for example via telnet. If your application runs on a shared system, pay attention to security.

5) The application needs its data persisted, or needs a real database

4. You cannot traverse all items in memcached

Traversal is relatively slow and blocks other operations (slow compared with memcached's other commands). All of memcached's non-debug commands -- add, set, get, flush, and so on -- execute in constant time regardless of how much data the cache holds. A command that traverses all items, by contrast, takes longer as the data volume grows, and other commands block while waiting for it to finish.

5. The maximum length of a key that memcached can accept is 250 characters

The maximum length of a key that memcached can accept is 250 characters. Note that 250 is a limit enforced inside the memcached server. If your client supports "key prefixes" or similar features, the original key you pass in may be longer, as long as the final key the client actually sends to the server (after prefixing or hashing) stays within 250 characters. Shorter keys are recommended anyway: they save memory and bandwidth.
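One common way clients stay within the server limit is to hash overly long keys. The sketch below is illustrative, not from any particular client library; `normalize_key` is a hypothetical helper name.

```python
import hashlib

MAX_KEY_LEN = 250  # server-side limit on key length

def normalize_key(key: str, max_len: int = MAX_KEY_LEN) -> str:
    """Return the key unchanged if it is short enough; otherwise
    replace it with a SHA-1 hex digest so the key that actually
    reaches the server always fits within the limit."""
    if len(key) <= max_len:
        return key
    return hashlib.sha1(key.encode("utf-8")).hexdigest()  # 40 hex chars

print(normalize_key("user:42:profile"))   # short key passes through
print(len(normalize_key("x" * 300)))      # long key becomes a 40-char digest
```

The trade-off is that hashed keys are no longer human-readable in debugging tools, so hashing is usually applied only when the key exceeds the limit, as above.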

6. The size of a single item is limited to 1MB

This is a consequence of the memory allocator's slab algorithm.

A detailed answer:

1) Memcached's memory storage engine uses slabs to manage memory. Memory is first divided into equal-size slabs (pages); each slab is then divided into chunks of equal size, and the chunk size differs from one slab class to another. Chunk sizes start from a minimum value and grow by a fixed factor until they reach the maximum. For example, with a minimum of 400B, a maximum of 1MB, and a growth factor of 1.2, the chunk sizes of successive slab classes are:

slab1: 400B; slab2: 480B; slab3: 576B; and so on. The larger the chunk, the larger the gap between it and the previous class, so the larger the maximum value, the lower the memory utilization. Memcached must pre-allocate memory for each slab class, so a smaller factor combined with a larger maximum value requires giving memcached more memory.
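The geometric progression above can be sketched in a few lines. This is a simplified model using the article's example numbers (real memcached additionally aligns chunk sizes to 8-byte boundaries); `chunk_sizes` is a hypothetical function name.

```python
def chunk_sizes(minimum: int = 400, factor: float = 1.2,
                maximum: int = 1024 * 1024) -> list[int]:
    """Reproduce the chunk-size progression described above: each
    slab class's chunk size is the previous size times the growth
    factor, until the maximum item size is reached."""
    sizes = []
    size = minimum
    while size < maximum:
        sizes.append(size)
        size = int(size * factor)
    sizes.append(maximum)  # the final class holds the largest items
    return sizes

sizes = chunk_sizes()
print(sizes[:3])   # [400, 480, 576] -- matches the article's example
print(len(sizes))  # number of slab classes this configuration yields
```

Running this makes the utilization trade-off concrete: a smaller factor yields more classes (less wasted space per item but more pre-allocated slabs), while a larger factor yields fewer classes with bigger gaps between them.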

2) Do not try to store large values in memcached, such as entire huge web pages, because loading and unpacking large data into memory takes a long time and overall performance suffers. If you really need to store items larger than 1MB, you can change the value of POWER_BLOCK in slabs.c and recompile memcached, or switch to the (inefficient) malloc/free allocator. Alternatives such as a database or MogileFS can also replace memcached for this use.

7. How does memcached's memory allocator work? Why not use malloc/free? Why use slabs?

This is actually a compile-time option. The internal slab allocator is used by default, and it is indeed the right choice. In its earliest versions, memcached used only malloc/free to manage memory, but this approach did not interact well with the OS memory manager: repeated malloc/free caused memory fragmentation, and the OS ended up spending significant time searching for contiguous memory blocks to satisfy malloc requests instead of running the memcached process. The slab allocator was created to solve this: memory is allocated up front, divided into chunks, and the chunks are reused. Because memory is divided into slab classes of different chunk sizes, some memory is wasted whenever an item's size does not exactly match the chunk of the slab class chosen to store it.

8. What restrictions apply to an item's expiration time?

The maximum relative expiration time is 30 days. Memcached interprets an incoming expiration value of up to 30 days (2,592,000 seconds) as an interval from now; a larger value is interpreted as an absolute Unix timestamp. Once that point in time is reached, memcached marks the item as invalid. This is a simple but somewhat obscure mechanism.
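The 30-day cutoff can be modeled directly. The sketch below mimics how the server interprets the expiration field; `interpret_exptime` is a hypothetical name, not a real memcached API.

```python
THIRTY_DAYS = 60 * 60 * 24 * 30  # 2,592,000 seconds

def interpret_exptime(exptime: int, now: int) -> int:
    """Mimic the server's interpretation of the expiration field:
    values up to 30 days are relative offsets from 'now'; larger
    values are treated as absolute Unix timestamps."""
    if exptime == 0:
        return 0                  # 0 means "never expire" (until evicted)
    if exptime <= THIRTY_DAYS:
        return now + exptime      # relative interval
    return exptime                # absolute timestamp, passed through

now = 1_700_000_000
print(interpret_exptime(3600, now) - now)     # one hour from now: 3600
print(interpret_exptime(1_700_003_600, now))  # absolute timestamp unchanged
```

The obscure part is the boundary: setting an expiration of, say, 31 days as a relative interval silently becomes a timestamp in 1970, which is already in the past, so the item expires immediately.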

9. What is the binary protocol, and does it deserve attention?

The binary protocol attempts to provide a more efficient and reliable protocol for both ends, reducing the CPU time the client and server spend parsing the protocol. According to Facebook's tests, parsing the ASCII protocol is the most CPU-intensive operation in memcached.

10. Is memcached atomic?

Each individual command sent to memcached is completely atomic. If you issue a set and a get against the same item concurrently, they will not corrupt each other; they are serialized and executed one after the other. Even in multithreaded mode, every single command is atomic. A sequence of commands, however, is not atomic: if you fetch an item with get, modify it, and then set it back, the system does not guarantee that the item was not modified by another process in between (process here does not necessarily mean an OS process). Memcached 1.2.5 and later provide the gets and cas commands to solve this. When you query an item with gets, memcached returns a unique token identifying the item's current value. When you want to overwrite the item and write it back, you send that token along with the cas command. If the token stored in memcached still matches the one you supplied, the write succeeds; if another process modified the item in the meantime, the stored token will have changed and the write fails.
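The gets/cas handshake can be illustrated with a toy in-memory store. This is a simulation of the semantics only, not a memcached client; `MockCache` and its methods are hypothetical names.

```python
import itertools

class MockCache:
    """Toy in-memory store mimicking memcached's gets/cas semantics:
    every write bumps a per-key version token, and cas() succeeds
    only when the caller's token still matches the stored one."""

    def __init__(self):
        self._data = {}                 # key -> (value, token)
        self._tokens = itertools.count(1)

    def set(self, key, value):
        self._data[key] = (value, next(self._tokens))

    def gets(self, key):
        return self._data[key]          # (value, token), like gets

    def cas(self, key, value, token):
        if self._data[key][1] != token:
            return False                # item changed since our gets
        self.set(key, value)
        return True

cache = MockCache()
cache.set("counter", 1)
value, token = cache.gets("counter")
cache.set("counter", 99)                        # a competing writer sneaks in
print(cache.cas("counter", value + 1, token))   # False: our token is stale
```

On a cas failure, the usual client strategy is to re-issue gets, redo the modification against the fresh value, and retry.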
