A brief discussion of the basic properties of the memcached server


Memcached's memory allocation algorithm:

1. Traditional memory management allocates memory with malloc and reclaims it with free. This approach is prone to memory fragmentation and reduces the efficiency of the operating system's memory management.
2. Memcached uses the slab allocation mechanism to allocate and manage memory. It divides the allocated memory into chunks of predetermined sizes and groups chunks of the same size into slab classes. When data is stored, the size of the key/value is matched against the slab classes and the item is placed in the smallest chunk that fits. Some space is still wasted, because an item rarely fills its chunk exactly.
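To make the size-class matching concrete, here is a minimal Python sketch. The base chunk size of 96 bytes and the growth factor of 1.25 are illustrative assumptions (1.25 matches the default -f factor listed later); real memcached slab classes are rounded and aligned somewhat differently.

    # Sketch of slab size classes and chunk matching; base size and factor are
    # illustrative assumptions, not exact memcached internals.
    def slab_chunk_sizes(base=96, factor=1.25, max_size=1024 * 1024):
        """Chunk sizes of the successive slab classes."""
        sizes, size = [], float(base)
        while size <= max_size:
            sizes.append(int(size))
            size *= factor
        return sizes

    def pick_chunk(item_size, sizes):
        """Store the item in the smallest chunk that fits; the leftover space is wasted."""
        for chunk in sizes:
            if item_size <= chunk:
                return chunk
        raise ValueError("item larger than the largest chunk")

    sizes = slab_chunk_sizes()
    chunk = pick_chunk(200, sizes)  # with these assumptions, a 200-byte item lands in a 234-byte chunk
    print(chunk, "byte chunk,", chunk - 200, "bytes wasted")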

Memcached's caching strategy:

1. Memcached's caching strategy is LRU (least recently used) eviction combined with an expiration policy. When the memory allocated to memcached is exhausted, expired items are replaced first, followed by the least recently used items. (Memcached uses a lazy expiration policy: it does not actively monitor whether a key/value has expired; instead, when a key is fetched, it checks the record's timestamp to see whether the item has expired. This reduces the load on the server.)
2. When storing an item in memcached, you can specify how long it should be cached before it expires; by default, items never expire (see the client sketch after this list).
3. All data is stored in memory, so access is faster than from a hard disk. When memory is full, the LRU algorithm automatically evicts unused cache entries; memcached does not address data durability or disaster recovery.
4. If the memcached service is restarted, or the machine that hosts it is restarted, all data is lost.
5. The client can destroy cached data explicitly with delete/flush.
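As a concrete illustration of expiration and explicit deletion, here is a minimal Python sketch using the third-party pymemcache client (the choice of library is an assumption; any memcached client offers equivalent set/get/delete/flush operations). It assumes a memcached instance listening on localhost:11211.

    # Minimal sketch with pymemcache; assumes memcached is running on localhost:11211.
    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    client.set("greeting", "hello", expire=60)  # expires after 60 seconds
    client.set("counter", "1")                  # no expire given: never expires
    print(client.get("greeting"))               # b'hello' (None once expired or evicted)

    client.delete("greeting")                   # remove a single key
    client.flush_all()                          # destroy all cached data at once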


Memcached's distributed algorithm:

1. Memcached servers do not communicate with each other; each accesses its own data independently and shares no information. The server itself has no distributed functionality; distributed deployment depends on the memcached client. When a key/value is stored in or read from a memcached cluster, the client program uses an algorithm to calculate which server it belongs to, and then stores or reads the key/value on that server. In other words, data access takes two steps: first select the server, then access the data on it.
2. There are two common algorithms for choosing the server: distribution calculated from the remainder (modulo), and distribution calculated on a hash ring (consistent hashing); both are sketched in the example after this list.
2.1 Remainder calculation: first compute an integer hash of the key, divide it by the number of servers, and use the remainder to decide which server to access. This method is simple and efficient, but when memcached servers are added or removed, almost all keys map to different servers and the cache is effectively invalidated.
2.2 Consistent hashing: first compute the hash value of each memcached server and place the servers on a circle that spans 0 to 2^32. Then compute the hash value of the key to be stored in the same way and map it onto the circle. Starting from the key's position, search clockwise and save the data to the first server found. If no server is found before passing 2^32, the data is saved to the first memcached server. When a memcached server is added, only the keys between the new server and the first existing server counter-clockwise from it on the circle are affected.
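The two selection schemes can be sketched in a few lines of Python. This is illustrative only (the server addresses are hypothetical and MD5 is used as the integer hash; real clients may use CRC32 or other hash functions), not the code of any particular memcached client.

    # Sketch of the two server-selection schemes; not real client code.
    import hashlib
    from bisect import bisect

    SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]  # hypothetical

    def int_hash(s):
        """Map a string to an integer in [0, 2**32)."""
        return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2 ** 32)

    # 2.1 Remainder calculation: key hash modulo the number of servers.
    def pick_server_modulo(key):
        return SERVERS[int_hash(key) % len(SERVERS)]

    # 2.2 Consistent hashing: place servers on a 0..2**32 circle, then walk
    # clockwise from the key's position to the first server found.
    RING = sorted((int_hash(s), s) for s in SERVERS)

    def pick_server_consistent(key):
        idx = bisect(RING, (int_hash(key), ""))  # first server at or after the key
        if idx == len(RING):                     # nothing before 2**32 ends: wrap to the first server
            idx = 0
        return RING[idx][1]

    for k in ["user:1001", "session:abc", "page:/index"]:
        print(k, "->", pick_server_modulo(k), "|", pick_server_consistent(k))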

Startup instructions: memcached -p 11211 -d -u root -P /tmp/memcached.pid

-p  use TCP; the default port is 11211
-d  start as a daemon in the background
-u  run as the specified user (root here); memcached will not start as root unless the user is specified explicitly
-P  the file in which to store the process PID; note this is an uppercase "P", unlike the lowercase -p port option
-l  followed by an IP address, manually specifies the address to listen on; by default memcached listens on all IP addresses
-m  followed by the amount of memory to allocate, in megabytes; the default is 64 MB
-c  the maximum number of concurrent connections; the default is 1024
-f  the chunk size growth factor; the default is 1.25
-M  return an error when memory is exhausted instead of evicting items, i.e. disable LRU eviction
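Putting these options together, a fuller startup command could look like the following (the listen address and memory size are placeholder values to adjust for your environment):

    memcached -d -u root -p 11211 -l 127.0.0.1 -m 64 -c 1024 -f 1.25 -P /tmp/memcached.pid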

Understanding how memcached works on the server helps avoid simple mistakes during research and development, such as how to design a multi-memcached deployment, how to retrieve key/value data, and how to manage the data life cycle.
