The memcache feature set is very simple: it only supports set, get, and delete, and it can only store plain key-value data, not lists. Of course, you can serialize a list and save it to memcache, but then you run into concurrency problems: the data would have to be locked on every save, insert, or removal, and it is hard to guarantee consistency under high concurrency.
However, memcache has an increment operation, which adds a value (1 by default) to the value stored under a key. This operation is atomic, so we can use it to maintain an auto-increment id that guarantees each item a unique slot, plus two pointers that track the start and end positions. This gives us a simple but workable queue.
The code is as follows:
QueKeyPrefix = "MEMQUE _ {$ key} _"; $ this-> startKey = "MEMQUE_SK _ {$ key }"; $ this-> endKey = "MEMQUE_EK _ {$ key}";}/*** get list * first get start and end pointer, then get the data ** @ return array */public function getList () {$ startP = $ this-> memcache-> get ($ this-> startKey ); $ endP = $ this-> memcache-> get ($ this-> endKey); empty ($ startP) & $ startP = 0; empty ($ endP) & $ endP = 0; $ arr = array (); for ($ I = $ startP; $ I <$ endP; ++ $ I) {$ key = $ This-> queKeyPrefix. $ I; $ arr [] = $ this-> memcache-> get ($ key);} return $ arr;}/*** insert queue * move the pointer after the end, get an auto-increment id * and save the value to the position specified by the pointer ** @ return void */public function in ($ value) {$ index = $ this-> memcache-> increment ($ this-> endKey); $ key = $ this-> queKeyPrefix. $ index; $ this-> memcache-> set ($ key, $ value);}/*** Team-out * is very simple, after getting the start value, start the pointer and move it back * // open source code phprm.com * @ return mixed */public function out () {$ resul T = $ this-> memcache-> get ($ this-> startKey); $ this-> memcache-> increment ($ this-> startKey ); return $ result ;}?>
About memcached
Memory storage (slab allocator): memcached stores data with a slab allocator, that is, memory is divided into chunks. When the service starts, memory is pre-divided into chunks of different sizes, and incoming data is stored in a chunk of a suitable size. Earlier versions allocated memory directly for each item, which led to memory fragmentation and other problems.
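To see the slab classes a running server has created, the memcache extension can read server statistics; a minimal sketch, assuming a server on 127.0.0.1:11211:

<?php
// Sketch: inspect slab classes on a memcached server (host/port are assumptions).
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

// getExtendedStats('slabs') returns, per server, the list of slab classes
// together with their chunk sizes and usage counters.
$slabs = $memcache->getExtendedStats('slabs');
print_r($slabs);
?>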
Data expiration and deletion mechanism
Memcached does not actively delete data when it expires; expired data simply becomes inaccessible, and the space it occupies is reused.
Memcached uses lazy expiration: instead of actively scanning items to see whether they have expired, it checks an item's expiry only when that item is accessed. When memory needs to be reclaimed, the eviction algorithm is LRU (Least Recently Used), which preferentially removes the data that has been used least recently.
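From the client's point of view this looks as follows; a minimal sketch assuming a local server, where an expired item is simply no longer readable:

<?php
// Sketch: expired items just become unreadable (assumes memcached on 127.0.0.1:11211).
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

$memcache->set('token', 'abc123', 0, 2); // expire after 2 seconds
var_dump($memcache->get('token'));       // "abc123"

sleep(3);
var_dump($memcache->get('token'));       // false: the item has expired
?>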
Distributed mechanism of memcached
Although memcached is a distributed cache, memcached itself does not implement any distribution mechanism; the distributed functionality is implemented mainly by the client.
The program registers multiple memcached servers with the client (the memcache extension) through addServer(). Before accessing data, the client uses a hash algorithm to determine which node stores the data and then accesses that node. When one of the memcached servers fails, or a new server is added, the hash algorithm maps keys to different nodes, and data is accessed from the new server.
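A minimal sketch of this client-side distribution (the host names below are placeholders):

<?php
// Sketch: client-side distribution. The host names are placeholders.
$memcache = new Memcache();
$memcache->addServer('cache-node1.example.com', 11211);
$memcache->addServer('cache-node2.example.com', 11211);

// The extension hashes the key to pick one of the registered servers,
// stores the value there, and reads it back from the same server.
$memcache->set('user:42', 'some data');
var_dump($memcache->get('user:42'));
?>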