Memory management and deletion mechanism of memcached


Introduction to the memory management and removal mechanism of memcached

Note: Memcached's maximum value size is 1 MB; data larger than 1 MB cannot be stored (short of modifying the memcached source code).

Note: Memory fragmentation is always present; it can only be minimized, never eliminated.

1. What is memory fragmentation?

As a memory-based cache system runs, the continuous allocation and freeing of memory leaves behind small gaps that can no longer be used. This phenomenon is called memory fragmentation: small pieces of space the operating system cannot make use of.

Note: Memory fragmentation always exists and cannot be eliminated; a good allocation algorithm can only minimize it.

Note: Disks fragment in the same way.

2. How does memcached solve it?

Memcached manages its memory with a slab allocator:

- Chunk (small block): the smallest unit, the "warehouse" where an item's data is actually stored.
- Slab (page): a 1 MB block of memory cut into chunks of one fixed size (all chunks within a slab are the same size).
- Slab class: the group of slabs that share the same chunk size.

Note: Since a chunk must fit inside a 1 MB slab, a single chunk is at most 1 MB, which is why memcached's maximum value size is 1 MB.

3. How does memcached choose the right chunk size?

Memcached stores each item in the smallest chunk class that can hold it. Question: if the 122-byte slab class is full and a 100-byte item arrives, where is it stored?

A: It will not go into the 144-byte class; it still goes into a 122-byte chunk, one freed up by the LRU algorithm.

LRU algorithm: the Least Recently Used principle. The item that has gone unused for the longest time is evicted, and the new data takes its place.

Fixed-size chunks do waste memory, however: storing 100 bytes in a 122-byte chunk wastes 22 bytes.
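The "smallest chunk class that fits" rule above can be sketched as follows. The class sizes here are illustrative (chosen to include the 122-byte and 144-byte classes from the example), not memcached's real table:

```python
# Sketch: memcached places an item in the smallest chunk class that can
# hold it. CHUNK_SIZES below is a hypothetical, illustrative class table.
from bisect import bisect_left

CHUNK_SIZES = [96, 122, 144, 170, 200]  # bytes, ascending

def class_for(item_size: int) -> int:
    """Return the chunk size used for an item of item_size bytes."""
    i = bisect_left(CHUNK_SIZES, item_size)
    if i == len(CHUNK_SIZES):
        raise ValueError("item larger than the biggest chunk class")
    return CHUNK_SIZES[i]

print(class_for(100))  # → 122: stored in a 122-byte chunk, wasting 22 bytes
```

Note how a 100-byte item lands in the 122-byte class even though 22 bytes go unused; that waste is the price of fixed-size chunks.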

4. Tuning the growth factor

At startup, memcached organizes its slab classes by size; the growth factor can be specified with -f.

The default factor is 1.25: the ratio between adjacent chunk sizes is the growth factor. You can adjust it to match your site's data.

Every application needs a different minimum chunk size, so this parameter lets the slab classes fit your own data more closely and adapts the system to your business.
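A sketch of how the growth factor produces the slab classes. The base size of 48 bytes (-n), the factor of 1.25 (-f), and the 8-byte rounding are stated as assumptions; real memcached also reserves space for the item header, so its actual class table differs:

```python
# Sketch of deriving slab (chunk) classes from a base size and a growth
# factor. Assumed defaults: base 48 bytes (-n), factor 1.25 (-f),
# 1 MB slab pages; the real memcached table differs in detail.
PAGE_SIZE = 1024 * 1024  # each slab page is 1 MB

def chunk_classes(base=48, factor=1.25, page=PAGE_SIZE):
    """Return the chunk size of each slab class, smallest first."""
    sizes = []
    size = base
    while size < page / 2:           # stop once chunks approach the page size
        sizes.append(size)
        size = int(size * factor)    # grow by the factor...
        size += (8 - size % 8) % 8   # ...and round up to an 8-byte boundary
    sizes.append(page)               # largest class: one chunk per 1 MB page
    return sizes

if __name__ == "__main__":
    for i, s in enumerate(chunk_classes(), start=1):
        print(f"slab class {i:2d}: chunk size {s} bytes")
```

Running it shows why the factor matters: a small factor gives finely spaced classes (less waste per item, more classes), a large factor gives coarse spacing.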

5. Memcached's lazy deletion

Memcached does not internally monitor whether records have expired. Instead, it looks at a record's timestamp at get time and checks whether it has expired. This behavior is called lazy expiration; the benefit is that memcached spends no CPU time monitoring for expired items.

For example, after set('name', 'Asion', 0, 3600) the record expires 3,600 seconds later, but it is not deleted automatically. Only when a get finds it expired is the record deleted, freeing the space for new data.
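Lazy expiration can be sketched with a toy in-process cache (an assumption for illustration, not memcached's actual implementation): expiry is checked only inside get(), never by a background scan.

```python
# A minimal sketch of lazy expiration: entries carry an expiry time and
# are only deleted when a get() notices they have expired.
import time

class LazyCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=0):
        expires_at = time.time() + ttl if ttl else None
        self._store[key] = (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.time() >= expires_at:
            del self._store[key]   # deleted only now, on access
            return None
        return value

cache = LazyCache()
cache.set("name", "Asion", ttl=3600)
print(cache.get("name"))  # → Asion (the TTL has not elapsed yet)
```

An expired entry keeps occupying its slot until some get() touches it; that is exactly the trade-off lazy expiration makes to avoid a background scan.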

6. Memcached's LRU algorithm

Memcached first reuses the space of records that have timed out. Even so, there may not be enough room when a new record is appended; in that case, space is reclaimed with the Least Recently Used (LRU) mechanism.

As the name implies, this mechanism deletes the "least recently used" record. So when memcached's memory runs low (when a new chunk cannot be obtained from the slab class), it searches for records that have gone unused the longest and reassigns their space to the new record. From the practical standpoint of a cache, this model is ideal.

Question: when memcached's data space (64 MB by default) is already full, can more data still be stored?

A: Yes. Expired data is deleted first; if nothing has expired, the least active data is evicted to make room for the new data.

For example, in the 122-byte slab class above: when it is full and a 100-byte item arrives, memcached reclaims a chunk using the LRU algorithm (with FIFO as the basic fallback).
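The eviction mechanism described above can be sketched with an LRU cache whose capacity is a fixed number of items (an assumption for illustration; memcached actually evicts per slab class, chunk by chunk):

```python
# A minimal LRU-eviction sketch: when the cache is full, the least
# recently used entry is removed to make room for the new one.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # oldest entries first

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)        # mark as most recently used
        return self._store[key]

    def set(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

Usage: with capacity 2, setting "a" and "b", touching "a", then setting "c" evicts "b", because "b" is the entry that has gone unused the longest.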

7. Memcached startup parameters

Note: if you accidentally press Ctrl+S (which freezes the terminal, e.g. in vim), press Ctrl+Q to resume.

-p  TCP port to listen on (default 11211)

-l  IP address to bind to; by default memcached listens on all interfaces

-d  run as a daemon (on the Windows build: -d start | restart | stop | shutdown to manage the service, -d install / -d uninstall to register or remove it)

-u  user to run as (only valid when started as root)

-m  maximum memory to use for items, in megabytes; default 64 MB

Note: on a 32-bit system a single process is limited to about 2 GB; on a 64-bit system there is no such limit.

-M  return an error when memory is exhausted instead of evicting items

-c  maximum number of simultaneous connections; default 1024

-f  chunk size growth factor; default 1.25

-n  minimum space allocated for key+value+flags; default 48 bytes

-h  display help

-v  output warnings and error messages

-vv  very verbose: also print clients' requests and responses

-i  print license information for memcached and libevent

Advanced Features

Distributed Memcache Configuration

What is distributed caching?

A: Because a single memcached instance has limited capacity (a single server's memory is finite), multiple memcached servers can be combined to provide caching. This is called a distributed memcached cache system.

How is it implemented?

A: The distribution is implemented on the client side (that is, in PHP: the PHP program decides which of the distributed servers stores each piece of data). Before saving, the client applies an algorithm to the key to choose a memcached server; when fetching, it applies the same algorithm to find the data on the corresponding server.

Distributed algorithms

  1. Modulo (remainder) algorithm

    Hash the key, take the result modulo the number of servers, and save the value on the memcached server matching the remainder. A typical hash function is crc32(key) % 3 (for three servers).

    crc32() turns a string into a 32-bit integer.

    Cons: when a server goes down, or a server is added, nearly all of the cached data becomes unreachable, because the divisor has changed. Informally, the hit rate (data found / total requests) drops to roughly 1/n, where n is the number of servers.
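The modulo scheme and its weakness can be sketched as follows. The server addresses are hypothetical; crc32 comes from Python's standard library:

```python
# Sketch of client-side modulo sharding: crc32(key) % number_of_servers.
# The server addresses below are hypothetical examples.
from zlib import crc32

SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def pick_server(key: str) -> str:
    """Map a key to one server via crc32(key) % len(SERVERS)."""
    return SERVERS[crc32(key.encode("utf-8")) % len(SERVERS)]

# The downside: change the server count and most keys remap.
keys = [f"key{i}" for i in range(1000)]
moved = sum(crc32(k.encode()) % 3 != crc32(k.encode()) % 4 for k in keys)
print(f"{moved} of {len(keys)} keys move when going from 3 to 4 servers")
```

With a uniform hash, about 75% of keys change servers when the count goes from 3 to 4; all of those become cache misses at once, which is what sets up the avalanche described next.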

    ?

    The problem is raised: when the memcache down, the cache data fails, and this time the MySQL pressure will surge,

    At this time, MySQL will be down, and then restart Mysql,mysql will be down again in a short period of time, then, a little delay (the cache has been re-established a part ), and downtime. As time went on, MySQL basically stabilized and the cache system was built successfully.

    Because the cache data does not exist, all requests are turned to MySQL to provide, this phenomenon is called memcache avalanche phenomenon .

    ?

    Overview:

    ?

    Real:

  2. Consistent hashing

    1. Imagine a ring of values from 0 to 2^32.
    2. Map each server's IP onto a point on the ring with a hash function (e.g. crc32); that point is the server's location.
    3. Hash each data key with the same function, then walk clockwise around the ring starting from 0; the value is saved on the first server whose position is not smaller than the key's.

    Benefit: when a server goes down, the impact is minimal; only the data on that server is affected.
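The three steps above can be sketched as a minimal hash ring. The server addresses are hypothetical, and virtual nodes (which real clients add, many per server, to even out the load) are omitted:

```python
# A minimal consistent-hashing sketch (no virtual nodes). Servers sit
# on a 0..2**32 ring at crc32(address); a key goes to the first server
# clockwise from crc32(key), wrapping around at the end of the ring.
from bisect import bisect_left
from zlib import crc32

class HashRing:
    def __init__(self, servers):
        self._ring = sorted((crc32(s.encode()), s) for s in servers)
        self._points = [point for point, _ in self._ring]

    def pick(self, key: str) -> str:
        h = crc32(key.encode())
        # first server at a point not smaller than h; wrap to index 0
        i = bisect_left(self._points, h) % len(self._ring)
        return self._ring[i][1]

servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
ring = HashRing(servers)
print(ring.pick("name"))
```

The payoff: if one server is removed from the ring, only the keys that lived on that server move (to the next server clockwise); every other key still maps to the same server, so most of the cache survives.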

    Cache avalanche phenomenon

    When one memcached node's cached data becomes unavailable, the hit rate of the other memcached nodes drops, and every piece of data missing from the cache is queried from the MySQL database. Within a short period this puts enormous pressure on the MySQL server and can bring it down; this is called the cache avalanche phenomenon.

    Why does an avalanche happen?

  1. An unsuitable algorithm, such as the modulo algorithm, invalidates a large share of the cache at once, which causes an avalanche.

    Solution: use the consistent hashing algorithm.

  2. If all entries are cached with the same TTL, the whole cache expires at the same moment, which also causes an avalanche.

    Solution: set the cache time to a random value within a range (e.g. 3-9 hours).

    How to make memcached highly available

  1. Use repcached, short for "replication cached", a memcached high-availability technology developed in Japan that replicates the cache between nodes.

  2. memcachedb is a distributed, key-value persistent storage system developed at Sina. It is not a cache component, but a reliable, fast, persistent storage engine based on object access. Its protocol is (mostly) compatible with memcache, so many memcached clients can connect to it. Memcachedb uses Berkeley DB as its persistent storage component, so many of Berkeley DB's features are supported.

    Extensions

    How do I add an extension to PHP under Linux (e.g. Redis)? Describe the general steps.

    A:

    1. Download the extension's source code
    2. Upload it to /usr/local/src/
    3. Unpack it and enter the directory
    4. Run phpize by its absolute path: /usr/local/php/bin/phpize
    5. Run configure: ./configure --with-php-config=/usr/local/php/bin/php-config
    6. make && make install
    7. This produces a .so file in the build's output directory
    8. Edit the php.ini file
    9. Add extension_dir = <directory> and extension=<name>.so
    10. Restart Apache
    11. Add a phpinfo() page
    12. Check it in the browser

    How is memcached's security handled?

    A:

    Because memcached's own design is extremely minimal, it has no permission system. Why no permissions? It only provides caching, and stays simple for that reason. Instead:

    1. Bind it to an intranet IP such as 192.168.1.110 that is unreachable from the outside
    2. Write firewall rules that only allow packets from the IPs you specify and drop everything else

    What if there are too many files when sessions are stored as files?

    Generally, once there are more than 65,535 session files, session access becomes unusually slow, which makes PHP code execution slow. How can this be solved?

    A:

    Layered processing: create subdirectories named a-z, each containing subdirectories a-z, and spread the session files across them

    Use memcache: since a single memcached instance's capacity is limited, use distributed memcached to store sessions
