Memcached Study Summary

Source: Internet
Author: User
Tags: memcached

One. Introduction
1. Libevent-based event handling
Libevent is a cross-platform event-notification library that wraps the native event-handling mechanisms of operating systems such as Windows, Linux, BSD, and Solaris.
The wrapped interfaces include poll, select (Windows), epoll (Linux), kqueue (BSD), and /dev/poll (Solaris).
Memcached uses libevent to handle concurrent network connections, which keeps response times fast even under heavy concurrency.
libevent: http://www.monkey.org/~provos/libevent/

2. In-memory storage
To maximize performance, memcached keeps all saved data in its own built-in in-memory storage. Because the data exists only in memory, restarting memcached or the operating system discards all of it. In addition, once usage reaches the configured memory limit, unused cache entries are automatically evicted according to the LRU (Least Recently Used) algorithm.

Data storage mode: Slab Allocation
(Slab allocation structure diagram omitted.)

The basic idea of the slab allocator is to carve allocated memory into blocks (chunks) of predetermined sizes and to group chunks of the same size together, which eliminates memory fragmentation entirely. However, because memory is handed out in fixed-length chunks, allocated memory cannot be used with perfect efficiency. For example, caching 100 bytes of data in a 128-byte chunk wastes the remaining 28 bytes.
Page: the memory block allocated to a slab, 1 MB by default. After being assigned to a slab class, the page is cut into chunks of that class's size.
Chunk: the memory space used to cache a record.
Slab class: a group of chunks of a particular size.
Based on the size of the data it receives, memcached selects the slab class whose chunk size best fits the data.
Memcached keeps a list of free chunks for each slab class, picks a chunk from that list, and caches the data in it.
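The class-selection step above can be sketched in a few lines of C. This is a hypothetical simplification: the minimum chunk size (96 bytes) and the default growth factor (1.25) mirror memcached's defaults, but the function and variable names are invented for illustration.

```c
#include <stddef.h>

#define MAX_SLAB_CLASSES 64

/* Chunk sizes grow geometrically (memcached's default factor is 1.25),
 * starting from a minimum chunk size (96 bytes here). */
static size_t chunk_sizes[MAX_SLAB_CLASSES];
static int num_classes = 0;

static void init_slab_classes(size_t min_chunk, double factor, size_t page_size) {
    size_t size = min_chunk;
    while (num_classes < MAX_SLAB_CLASSES && size <= page_size / 2) {
        chunk_sizes[num_classes++] = size;
        size = (size_t)(size * factor);
    }
}

/* Return the index of the smallest slab class whose chunk fits the item,
 * or -1 if the item is too large to cache at all. */
static int slab_class_for(size_t item_size) {
    for (int i = 0; i < num_classes; i++)
        if (chunk_sizes[i] >= item_size)
            return i;
    return -1;
}
```

With these defaults, a 100-byte item lands in the 120-byte class and wastes 20 bytes, illustrating the same trade-off as the 128-byte example above.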

Data expiration mode: Lazy expiration + LRU
Lazy expiration
Memcached does not internally monitor whether records have expired. Instead, it looks at a record's timestamp at get time and checks whether the record has passed its expiry. This technique is called lazy expiration, and it means memcached spends no CPU time monitoring for expired records.
LRU
Memcached preferentially reuses the space of records that have timed out. Even so, when a new record must be stored and no free space exists, space is allocated via the Least Recently Used (LRU) mechanism: when memcached runs out of memory (that is, a new chunk cannot be obtained from the slab class), it finds the record that has been used least recently and hands its space to the new record.
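The lazy-expiration check described above can be sketched as follows. The struct and function names are hypothetical (real memcached items carry far more fields), but the core rule is the same: nothing scans for stale records, and an expired item is simply treated as a miss at fetch time.

```c
#include <stddef.h>
#include <time.h>

/* A stripped-down, hypothetical item. */
typedef struct {
    time_t exptime;  /* absolute expiry time; 0 means "never expires" */
} item;

/* Lazy expiration: the check only ever happens on access. */
static int item_is_expired(const item *it, time_t now) {
    return it->exptime != 0 && it->exptime <= now;
}

/* A get that treats expired items as a miss (NULL). */
static item *do_get(item *it, time_t now) {
    if (it == NULL || item_is_expired(it, now))
        return NULL;  /* expired: as if it had never been stored */
    return it;
}
```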

http://www.ttlsa.com/memcache/memcached-description/

Two. Processing flow

1. Memcached drives its entire request flow with an event-driven state machine. Every TCP/UDP connection has an associated state; the possible values are:

/**
 * Possible states of a connection.
 */
enum conn_states {
    conn_listening,  /**< the socket which listens for connections */
    conn_new_cmd,    /**< Prepare connection for next command */
    conn_waiting,    /**< waiting for a readable socket */
    conn_read,       /**< reading in a command line */
    conn_parse_cmd,  /**< try to parse a command from the input buffer */
    conn_write,      /**< writing out a simple response */
    conn_nread,      /**< reading in a fixed number of bytes */
    conn_swallow,    /**< swallowing unnecessary bytes w/o storing */
    conn_closing,    /**< closing this connection */
    conn_mwrite,     /**< writing out many items sequentially */
    conn_closed,     /**< connection is closed */
    conn_max_state   /**< Max state value (used for assertion) */
};

2. To handle a client command, the connection's state is set to conn_parse_cmd so that the command can be parsed. The following (binary-protocol) commands are supported:

/**
 * Definition of the different command opcodes.
 * See section 3.3 Command Opcodes
 */
typedef enum {
    PROTOCOL_BINARY_CMD_GET        = 0x00,
    PROTOCOL_BINARY_CMD_SET        = 0x01,
    PROTOCOL_BINARY_CMD_ADD        = 0x02,
    PROTOCOL_BINARY_CMD_REPLACE    = 0x03,
    PROTOCOL_BINARY_CMD_DELETE     = 0x04,
    PROTOCOL_BINARY_CMD_INCREMENT  = 0x05,
    PROTOCOL_BINARY_CMD_DECREMENT  = 0x06,
    PROTOCOL_BINARY_CMD_QUIT       = 0x07,
    PROTOCOL_BINARY_CMD_FLUSH      = 0x08,
    PROTOCOL_BINARY_CMD_GETQ       = 0x09,
    PROTOCOL_BINARY_CMD_NOOP       = 0x0a,
    PROTOCOL_BINARY_CMD_VERSION    = 0x0b,
    PROTOCOL_BINARY_CMD_GETK       = 0x0c,
    PROTOCOL_BINARY_CMD_GETKQ      = 0x0d,
    PROTOCOL_BINARY_CMD_APPEND     = 0x0e,
    PROTOCOL_BINARY_CMD_PREPEND    = 0x0f,
    PROTOCOL_BINARY_CMD_STAT       = 0x10,
    PROTOCOL_BINARY_CMD_SETQ       = 0x11,
    PROTOCOL_BINARY_CMD_ADDQ       = 0x12,
    PROTOCOL_BINARY_CMD_REPLACEQ   = 0x13,
    PROTOCOL_BINARY_CMD_DELETEQ    = 0x14,
    PROTOCOL_BINARY_CMD_INCREMENTQ = 0x15,
    PROTOCOL_BINARY_CMD_DECREMENTQ = 0x16,
    PROTOCOL_BINARY_CMD_QUITQ      = 0x17,
    PROTOCOL_BINARY_CMD_FLUSHQ     = 0x18,
    PROTOCOL_BINARY_CMD_APPENDQ    = 0x19,
    PROTOCOL_BINARY_CMD_PREPENDQ   = 0x1a,
    PROTOCOL_BINARY_CMD_TOUCH      = 0x1c,
    PROTOCOL_BINARY_CMD_GAT        = 0x1d,
    PROTOCOL_BINARY_CMD_GATQ       = 0x1e,
    PROTOCOL_BINARY_CMD_GATK       = 0x23,
    PROTOCOL_BINARY_CMD_GATKQ      = 0x24,

    PROTOCOL_BINARY_CMD_SASL_LIST_MECHS = 0x20,
    PROTOCOL_BINARY_CMD_SASL_AUTH       = 0x21,
    PROTOCOL_BINARY_CMD_SASL_STEP       = 0x22,

    /* These commands are used for range operations and exist within
     * this header for use in other projects. Range operations are
     * not expected to be implemented in the memcached server itself.
     */
    PROTOCOL_BINARY_CMD_RGET      = 0x30,
    PROTOCOL_BINARY_CMD_RSET      = 0x31,
    PROTOCOL_BINARY_CMD_RSETQ     = 0x32,
    PROTOCOL_BINARY_CMD_RAPPEND   = 0x33,
    PROTOCOL_BINARY_CMD_RAPPENDQ  = 0x34,
    PROTOCOL_BINARY_CMD_RPREPEND  = 0x35,
    PROTOCOL_BINARY_CMD_RPREPENDQ = 0x36,
    PROTOCOL_BINARY_CMD_RDELETE   = 0x37,
    PROTOCOL_BINARY_CMD_RDELETEQ  = 0x38,
    PROTOCOL_BINARY_CMD_RINCR     = 0x39,
    PROTOCOL_BINARY_CMD_RINCRQ    = 0x3a,
    PROTOCOL_BINARY_CMD_RDECR     = 0x3b,
    PROTOCOL_BINARY_CMD_RDECRQ    = 0x3c
    /* End Range Operations */
} protocol_binary_command;

3. The memcached processing framework:
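The event-driven processing flow can be sketched as a small state-transition function. This is a heavily simplified, hypothetical model of the read path only; in real memcached the transitions are driven inside drive_machine() in memcached.c, and many states (conn_nread, conn_swallow, conn_mwrite, ...) are omitted here.

```c
/* Connection states, mirroring the conn_states enum above (abbreviated). */
enum conn_states {
    conn_listening, conn_new_cmd, conn_waiting, conn_read,
    conn_parse_cmd, conn_write, conn_closing
};

/* A simplified transition function for the command read/response cycle. */
static enum conn_states next_state(enum conn_states s, int have_full_command) {
    switch (s) {
    case conn_new_cmd:   return conn_waiting;    /* arm the read event */
    case conn_waiting:   return conn_read;       /* socket became readable */
    case conn_read:      return conn_parse_cmd;  /* bytes are buffered */
    case conn_parse_cmd: /* full command parsed? reply; else wait for more */
        return have_full_command ? conn_write : conn_waiting;
    case conn_write:     return conn_new_cmd;    /* response sent, loop */
    default:             return conn_closing;
    }
}
```

Each libevent callback advances the connection through this loop until the socket would block, then returns to the event loop.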

Three. Comparison of Redis, Memcached, and MongoDB

1) Performance
All three offer relatively high performance; performance is not a bottleneck for us.
In general, the TPS of Redis and Memcache is about the same, and higher than MongoDB's.

2) Convenience of data operations
Memcache's data structures are simple.
Redis's are richer: for data manipulation Redis is better and needs fewer network IO round trips.
MongoDB supports rich data expression and indexes; it is the most similar to a relational database, and its query language is very rich.

3) Memory space and data volume
Redis added its own VM feature after the 2.0 release, breaking the limits of physical memory; an expiry time can be set per key (as in Memcache).
Memcache's maximum usable memory can be adjusted; it evicts with the LRU algorithm.
MongoDB is suited to large-scale data storage; it relies on the operating system's virtual memory for memory management and is very memory-hungry, so it should not be deployed together with other services.

4) Availability (single point of failure)
Regarding single points of failure:
Redis relies on the client for distributed reads and writes. Its master-slave replication requires the slave to pull a full snapshot each time it reconnects to the master; there is no incremental replication, which hurts performance and efficiency.
The single-point problem is therefore complicated: automatic sharding is not supported, so the client program must implement a consistent-hashing mechanism.
An alternative is to do your own active replication (storing multiple copies) instead of using Redis's built-in replication, or to switch to incremental replication (which you must implement yourself), trading consistency against performance.

Memcache itself has no data-redundancy mechanism, and does not need one; for fault tolerance it relies on mature hashing or ring (consistent-hashing) algorithms to limit the churn caused by a single node's failure.
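The consistent-hashing ring mentioned above can be sketched as follows. This is a toy illustration, not a real client: the FNV-1a hash, the virtual-node count, and all names are assumptions (production clients such as libketama use an MD5-based ring). The point is that when a node fails, only the keys that hashed to its arcs move, instead of nearly all keys as with naive modulo hashing.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy FNV-1a string hash (illustrative only). */
static uint32_t fnv1a(const char *s) {
    uint32_t h = 2166136261u;
    for (; *s; s++) { h ^= (uint8_t)*s; h *= 16777619u; }
    return h;
}

#define NPOINTS 128  /* virtual nodes per server (hypothetical count) */

typedef struct { uint32_t point; int server; } ring_entry;

static ring_entry ring[4 * NPOINTS];
static int ring_len = 0;

/* Place each server at many pseudo-random points on a 32-bit ring. */
static void ring_add(int server, const char *name) {
    char buf[64];
    for (int i = 0; i < NPOINTS; i++) {
        snprintf(buf, sizeof buf, "%s-%d", name, i);
        ring[ring_len].point = fnv1a(buf);
        ring[ring_len].server = server;
        ring_len++;
    }
}

/* Walk clockwise from the key's hash to the first server point,
 * wrapping to the lowest point on the ring if necessary. */
static int ring_lookup(const char *key) {
    uint32_t h = fnv1a(key);
    int best = -1, lowest = -1;
    uint32_t best_point = 0, lowest_point = UINT32_MAX;
    for (int i = 0; i < ring_len; i++) {
        if (ring[i].point >= h && (best == -1 || ring[i].point < best_point)) {
            best = ring[i].server; best_point = ring[i].point;
        }
        if (ring[i].point < lowest_point) {
            lowest = ring[i].server; lowest_point = ring[i].point;
        }
    }
    return best != -1 ? best : lowest;
}
```

The lookup is deterministic, so every client that shares the ring configuration routes a given key to the same memcached node.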

MongoDB supports master-slave and replica sets (internally using a Paxos-style election algorithm with automatic failure recovery) as well as auto-sharding, hiding failover and partitioning from the client.

5) Reliability (persistence)

For data persistence and data recovery:
Redis supports it (snapshots, AOF): it relies on snapshots for persistence, and AOF raises reliability at some cost in performance.
Memcache does not support persistence; it is usually used purely as a cache to improve performance.
MongoDB has supported reliable persistence since the 1.8 release via journaling (a binlog-style write-ahead log).

6) Data consistency (transaction support)
Memcache ensures consistency in concurrent scenarios with CAS (check-and-set).
Redis's transaction support is weak: it only guarantees that the operations in a transaction execute consecutively.
MongoDB does not support transactions.
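Memcache's CAS mechanism mentioned above can be sketched with a single-item toy store. The struct and function names are hypothetical; what matches the real protocol is the rule: every successful store assigns a new unique cas id, and a cas write succeeds only if the caller presents the id it observed with "gets".

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    char value[64];
    uint64_t cas_id;   /* bumped on every successful store */
} cas_item;

static uint64_t next_cas = 1;

/* Unconditional store ("set"): always succeeds, assigns a fresh cas id. */
static uint64_t store_set(cas_item *it, const char *v) {
    strncpy(it->value, v, sizeof it->value - 1);
    it->value[sizeof it->value - 1] = '\0';
    it->cas_id = next_cas++;
    return it->cas_id;
}

/* Conditional store ("cas"): returns 1 on success, 0 on EXISTS,
 * i.e. when someone else wrote the item since our "gets". */
static int store_cas(cas_item *it, const char *v, uint64_t observed) {
    if (it->cas_id != observed)
        return 0;  /* lost the race: value changed in between */
    store_set(it, v);
    return 1;
}
```

Two clients that both read cas id N can each attempt a cas write, but only the first succeeds; the second gets EXISTS and must re-read and retry.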

7) Data analysis
MongoDB has built-in data-analysis functionality (MapReduce); the others do not support it.

8) Application scenarios
Redis: more complex operations and computation on smaller data volumes, with high performance.
Memcache: reducing database load and improving performance in dynamic systems; used as a cache (suited to workloads with many reads and few writes; for large data volumes, sharding can be used).
MongoDB: mainly solves the problem of access efficiency for massive data.

http://www.blogjava.net/paulwong/archive/2013/09/06/403746.html

