Memcached Working principle


http://hzp.iteye.com/blog/1872664

The atomic unit Memcached operates on is the (key, value) pair (hereafter "KV pair"). Each key is converted into a hash key by a hashing algorithm, which makes lookup and comparison fast and spreads keys as evenly as possible. Memcached in fact uses two levels of hashing: the first level (computed by the client) chooses a server, and the second level is the large hash table each server maintains internally.

Memcached has two core components: the server (MS) and the client (MC). For each memcached query, the MC first computes the key's hash value to determine which MS holds the KV pair. Once the MS is determined, the client sends the query request directly to that MS, which looks up the exact data. Because servers never communicate with each other and no multicast protocol is used, memcached's interaction imposes minimal load on the network.

For example, consider the following scenario with three MCs, X, Y, and Z, and three MSs, A, B, and C:

Set a KV pair
X wants to set key="foo", value="Seattle"
X gets the MS list and hashes the key; the hash value determines which MS will store the KV pair
B is selected
X connects to B; B receives the request and stores (key="foo", value="Seattle")

Get a KV pair
Z wants the value of key="foo"
Z computes the hash with the same hash algorithm and determines that key="foo" lives on B
Z connects to B and gets value="Seattle" from B
Any later request for the value of key="foo" from X, Y, or Z will likewise be sent to B
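The routing in the steps above can be sketched in a few lines. This is a minimal modulo-based scheme for illustration only; real memcached clients typically use consistent hashing (e.g. ketama) so that adding or removing a server remaps only a fraction of the keys. The server names and the choice of MD5 are assumptions, not part of any particular client library.

```python
import hashlib

def pick_server(key, servers):
    """Map a key to exactly one server by hashing the key (simple modulo scheme)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["A", "B", "C"]
target = pick_server("foo", servers)
# Every client that uses the same hash algorithm and the same server list
# picks the same server, so "foo" always lives on exactly one MS --
# no server-to-server communication is needed.
assert pick_server("foo", servers) == target
```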

Memcached Server (MS)

Memory allocation

By default, MS allocates memory with a built-in component called the slab allocator, rather than the standard C malloc/free. The main purpose of the slab allocator is to avoid memory fragmentation: with malloc/free, the operating system would spend more and more time finding logically contiguous blocks of memory that are actually scattered. Using the slab allocator, MS allocates large chunks of memory up front and reuses them continuously. Of course, because chunks come in a fixed set of sizes, some memory is wasted whenever an item's size does not exactly match the chunk size.
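The size-class idea behind slab allocation can be sketched as follows: chunk sizes grow geometrically, and an item goes into the smallest chunk that fits it. The minimum chunk size and the 1.25 growth factor here are illustrative defaults, not a claim about any specific memcached build.

```python
def build_slab_classes(min_chunk=80, growth=1.25, max_chunk=1024 * 1024):
    """Chunk sizes grow geometrically up to the maximum item size."""
    sizes, size = [], min_chunk
    while size < max_chunk:
        sizes.append(size)
        size = int(size * growth)
    sizes.append(max_chunk)
    return sizes

def chunk_for(item_size, classes):
    """An item is stored in the smallest chunk that fits it; the gap
    between chunk size and item size is the wasted memory the text mentions."""
    for c in classes:
        if item_size <= c:
            return c
    raise ValueError("item exceeds the maximum chunk size")

classes = build_slab_classes()
waste = chunk_for(300, classes) - 300  # a 300-byte item wastes a few bytes
```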

MS also imposes limits on keys and data: a key cannot exceed 250 bytes, and a value cannot exceed the chunk size limit of 1 MB.
Because the hash algorithm used by MC does not take the memory size of each MS into account, in theory MC assigns KV pairs to every MS with equal probability. If the MSs have different amounts of memory, this can lower overall memory utilization. One workaround is to take the greatest common divisor of the MSs' memory sizes and start, on each MS, as many instances of that capacity as its memory allows. This is equivalent to running many sub-MSs of equal capacity, which improves overall memory utilization.
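The GCD-based sizing above can be worked through concretely. This is a sketch of the arithmetic only; the example memory sizes are assumptions.

```python
from functools import reduce
from math import gcd

def plan_instances(memory_sizes_mb):
    """Split each server's memory into equal-capacity sub-instances whose
    size is the GCD of all servers' memory sizes, so the uniform hash
    fills every bucket at the same rate."""
    unit = reduce(gcd, memory_sizes_mb)
    return unit, [mb // unit for mb in memory_sizes_mb]

# Three servers with 1 GB, 2 GB, and 3 GB of cache memory:
unit, counts = plan_instances([1024, 2048, 3072])
# unit is 1024 MB: run one instance on the first server, two on the
# second, three on the third -- six equal-sized sub-MSs in total.
```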

Caching policies

When the MS hash table is full, newly inserted data replaces old data. The eviction strategy is LRU (least recently used), combined with a per-KV-pair expiration time. The expiration time of a stored KV pair is set by the application through the MC and passed to MS as a parameter.

MS uses a lazy expiration strategy: it does not run an extra process to monitor expired KV pairs and delete them in real time. Instead, removal happens only when new data is inserted and there is no free space left to accommodate it.
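The two policies above, LRU eviction plus lazy expiration, can be modeled in a small toy cache. This is a conceptual sketch, not memcached's actual implementation (which evicts within slab classes); in this sketch an expired item is also discarded when it is read, which is one common way lazy expiration is realized.

```python
import time
from collections import OrderedDict

class LazyLRUCache:
    """Toy model of memcached eviction: per-item expiry is checked only on
    access, and LRU eviction happens only when an insert finds no free space."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # key -> (value, expires_at); 0 = no expiry

    def get(self, key):
        entry = self.items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at and time.time() >= expires_at:
            del self.items[key]        # lazy expiration: removed on access
            return None
        self.items.move_to_end(key)    # mark as most recently used
        return value

    def set(self, key, value, ttl=0):
        if key in self.items:
            del self.items[key]
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict the least recently used item
        self.items[key] = (value, time.time() + ttl if ttl else 0)
```

Reading a key with `get` refreshes its LRU position, so frequently requested pairs survive eviction longer, which is exactly the behavior the replacement strategy aims for.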

Memcached Client (MC)

Memcached clients are available in a variety of languages, including Java, C, PHP, .NET, and more.
You can choose the client that best fits the needs of your project.

Cached Web Application Architecture
With caching in place, we can add a cache layer between the traditional app layer and the DB layer. Each app server binds an MC; data is read from MS first, and only on a cache miss is it read from the DB layer. When data is updated, the app not only sends the UPDATE SQL to the DB layer but also pushes the updated data to the MC, which updates the data in MS.
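The read and update paths just described amount to the cache-aside pattern plus a write-to-cache step. A minimal sketch, where `DictCache` is a stand-in for an MC/MS pair and a plain dict stands in for the DB layer:

```python
class DictCache:
    """Stand-in for an MC talking to an MS; only get/set are modeled."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

def read(key, cache, db):
    """Try the cache first; on a miss, fall back to the DB layer and
    populate the cache so the next read is a hit."""
    value = cache.get(key)
    if value is None:
        value = db.get(key)
        if value is not None:
            cache.set(key, value)
    return value

def update(key, value, cache, db):
    """On update, write the DB layer and push the new value to the cache."""
    db[key] = value          # stands in for the UPDATE SQL sent to the DB
    cache.set(key, value)    # keep MS in sync via the MC
```

Pushing the new value on update (rather than merely invalidating the cached entry) is what the article describes; invalidation is a common alternative when updates are frequent and reads are rare.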

If, in the future, our database can communicate with MS directly, update tasks can be handled uniformly by the DB layer: every time the database updates data, it automatically updates the data in MS, further reducing the logic complexity of the app layer.

However, every cache miss still bothers the database. To minimize the load on the database, we can deploy database replication and let the slave databases handle read operations, while the master database is responsible for only three things: 1. updating data; 2. synchronizing the slave databases; 3. updating the cache.

These cached web architectures have proven effective in real-world applications, significantly reducing the load on the database while improving web performance. They can also be adapted to the specific application environment to achieve optimal performance under different hardware conditions.

