1. memcached is a memory-based key-value cache: it does not use the disk as a buffer, but stores data entirely in real physical memory.
2. memcached must be told at startup how much memory it may allocate, e.g.: `memcached -d -m <memory size in MB> -l <IP address> -p <port>` (`-d` runs it as a daemon, `-m` sets the memory size, `-l` the listen address, `-p` the port).
3. memcached organizes data as a single flat index: all items are independent of one another (unlike traditional relational data), and each item is uniquely indexed by its key. Do not approach the cache with relational thinking.
4. memcached stores data using a hash computed over the key, so a lookup has time complexity O(1).
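A minimal sketch of the hash-based lookup described above, using a Python dict (itself a hash table) as a stand-in for the cache server: the key is hashed to locate the bucket directly, so the cost does not grow with the number of stored items.

```python
# Stand-in for the cache: Python's dict is a hash table, so get/set are O(1)
# on average, analogous to how memcached locates an item by hashing its key.
cache = {}

def cache_set(key: str, value: object) -> None:
    cache[key] = value  # hash(key) -> bucket, constant time

def cache_get(key: str):
    return cache.get(key)  # same hash computation, constant time; None on miss

cache_set("user:42", {"name": "alice"})
print(cache_get("user:42"))
```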
5. memcached uses an LRU algorithm to evict cached items, and also lets the user set an expiration time per item; which to rely on should be decided by testing.
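The LRU eviction policy can be sketched with an `OrderedDict`; this is an illustration of the policy, not memcached's actual slab-aware implementation. When the cache is full, the least recently used item is dropped first.

```python
from collections import OrderedDict

# Minimal LRU cache sketch: illustrates the eviction order memcached uses
# when memory runs out (least recently used item goes first).
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as most recently used
        return self.items[key]

    def set(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.set("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```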
6. memcached uses the libevent library to implement its network concurrency model, supporting cache operations under heavy concurrent user load.
7. memcached clients can use object serialization to turn objects into binary data and transfer them across the network.
8. memcached can be combined with the JSON format: represent in-memory objects as JSON and cache them as strings.
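A sketch of the JSON approach from point 8, assuming a string-only cache interface like memcached's; the dict `store` stands in for the cache server, and the key names are illustrative.

```python
import json
from typing import Optional

store = {}  # stand-in for the cache server (string values only)

def cache_object(key: str, obj: dict) -> None:
    store[key] = json.dumps(obj)  # serialize the object to a JSON string

def fetch_object(key: str) -> Optional[dict]:
    raw = store.get(key)
    return json.loads(raw) if raw is not None else None  # deserialize on read

cache_object("product:7", {"name": "kettle", "price": 19.9})
print(fetch_object("product:7"))
```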
9. The memcached client API can save objects directly to the cache and fetch them directly from it, hiding the details of the conversion.
10. One of the core questions when using memcached is: what can be cached? Anything that does not have to be real-time.
11. Do not use locks on memcached to prevent threads from racing; use the atomic increment operations it provides instead.
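The point about atomic increments can be sketched without a live server: `atomic_incr` below mirrors the semantics of memcached's `incr` command, where the server applies the increment in one indivisible step, so concurrent clients never interleave a stale read with a write. The lock here stands in for the server's internal atomicity, not for client-side locking.

```python
import threading

store = {"hits": 0}
server_lock = threading.Lock()  # stand-in for the server's internal atomicity

def atomic_incr(key: str, delta: int = 1) -> int:
    # One indivisible read-modify-write, like memcached's `incr`.
    with server_lock:
        store[key] += delta
        return store[key]

# Four concurrent clients, 1000 increments each: no updates are lost.
threads = [threading.Thread(target=lambda: [atomic_incr("hits") for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store["hits"])  # 4000
```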
12. Among memcached's stats, total_items, bytes, get_hits, bytes_read, bytes_written, and limit_maxbytes are the most commonly consulted reference values.
13. From memcached's stats you can derive space usage, cache hit rate, and I/O traffic.
14. When partitioning memcached cache data, it is best not to partition by business data type, but with a business-independent partitioning algorithm; otherwise load imbalance is likely.
15. After memcached is scaled out, cached data must be distributed across nodes on the client side by a "cache connector" based on a hashing algorithm.
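A sketch of such a client-side "cache connector": the key is hashed and mapped to a server, a business-independent scheme as point 14 recommends. The server addresses are hypothetical; simple modulo hashing is shown for clarity, while production connectors often use consistent hashing so that adding a node remaps fewer keys.

```python
import hashlib

# Hypothetical server list for illustration.
SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def pick_server(key: str) -> str:
    # Hash the key, then map the digest onto the server list (modulo).
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# The same key always maps to the same server, regardless of its content type.
print(pick_server("user:42") == pick_server("user:42"))
```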
16. memcached data is, in principle, derived from an underlying persistent resource, so the cache can be purged when it is scaled out, provided the necessary data has already been persisted.
17. A cache is not a persistence facility; do not treat data in the cache with a persistence mindset. It should already have a corresponding persistent resource.
18. Data in the cache should preferably be either temporary (needing no persistence, used only for process control) or backed by a corresponding persistent resource (so it can be rebuilt when necessary).
19. memcached allocates memory in blocks ("slab allocation"); keys are limited to 250 bytes and values to 1 MB.
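A small sketch of validating items against the limits stated above before caching them. The limit values come from the source text (they are memcached's defaults; the function names are illustrative).

```python
MAX_KEY_BYTES = 250          # default memcached key limit
MAX_VALUE_BYTES = 1024 * 1024  # default memcached value limit (1 MB)

def validate_item(key: str, value: bytes) -> None:
    # Reject items that the server would refuse anyway.
    if len(key.encode("utf-8")) > MAX_KEY_BYTES:
        raise ValueError("key exceeds 250 bytes")
    if len(value) > MAX_VALUE_BYTES:
        raise ValueError("value exceeds 1 MB")

validate_item("user:42", b"small payload")  # fine
try:
    validate_item("k" * 300, b"")
except ValueError as e:
    print(e)
```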
20. memcached deletes expired items lazily: it does not run an extra process to monitor expired key-value pairs and delete them in real time. Cleanup happens only when new data is inserted and there is no free space left to hold it.
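Lazy expiration can be sketched as follows: each item carries an expiry timestamp, nothing scans for expired items in the background, and an expired item is only discarded when it is next touched. The dict `store` stands in for the cache server.

```python
import time

store = {}  # key -> (value, absolute expiry time)

def set_with_ttl(key: str, value, ttl_seconds: float) -> None:
    store[key] = (value, time.monotonic() + ttl_seconds)

def lazy_get(key: str):
    entry = store.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:
        del store[key]  # expired: cleaned up only now, on access
        return None
    return value

set_with_ttl("session:9", "token", ttl_seconds=0.05)
print(lazy_get("session:9"))   # still valid
time.sleep(0.1)
print(lazy_get("session:9"))   # None -- removed lazily on this access
```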
21. Once data changes in the database, the corresponding data in the cache must be updated promptly so the application reads consistent data. Alternatively, a timer can record the expiration time of cached data and trigger a refresh event when it fires, but there is always a delay during which the application may read stale (dirty) data from the cache; this is also known as the "dog hole" problem.
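The update-on-write approach from point 21 can be sketched as a write-through pattern: the write updates the database and the cache in the same operation, rather than waiting for a timed expiry. Both stores are stand-in dicts here; in practice `db` would be the database and `cache` a memcached client.

```python
db = {}     # stand-in for the database
cache = {}  # stand-in for the memcached tier

def update_user(user_id: int, data: dict) -> None:
    key = f"user:{user_id}"
    db[key] = data     # 1. update the database
    cache[key] = data  # 2. update the cache in the same operation

def read_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]  # cache hit: no stale-read window
    value = db[key]        # cache miss: fall back to the database
    cache[key] = value
    return value

update_user(1, {"name": "bob"})
print(read_user(1))  # consistent with the database
```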
22. When data is lost on a memcached server (MS), the application can still get it from the database. It is more prudent, however, to provide additional memcached servers to back up the cache when some servers stop working properly, so that the database is not overloaded by the resulting cache misses.
23. With caching in place, a cache layer can sit between the traditional application layer and the DB layer. Each app server is bound to a memcached client (MC); data is first read from the memcached servers, and only on a miss from the DB layer. When data is updated, in addition to sending the update SQL to the DB layer, the updated data is handed to the MC, which updates it on the MS.
24. To minimize the load on the database, deploy database replication: the slave database handles read operations, while the master database is only ever responsible for three things: 1. updating data; 2. synchronizing the slave databases; 3. updating the cache.
25. Because keys are hashed across different servers, cleaning up a whole class of keys in bulk is cumbersome. memcached itself is one large hash table with no key-retrieval capability, so it has no idea how many keys of a given class are stored or on which servers they live, even though this kind of operation is common in practice.
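One widely used workaround for the limitation in point 25 (not described in the source text, so treat this as a supplementary sketch) is namespace versioning: embed a version number in every key of the class, and bump the version to invalidate all of them at once; the orphaned old entries simply age out via LRU. The dict `store` stands in for the cache, and all names are illustrative.

```python
store = {}  # stand-in for the cache server

def class_version(cls: str) -> int:
    # The version itself lives in the cache under a well-known key.
    return store.setdefault(f"version:{cls}", 1)

def make_key(cls: str, key: str) -> str:
    # Every key of the class embeds the current version.
    return f"{cls}:v{class_version(cls)}:{key}"

def invalidate_class(cls: str) -> None:
    # Bumping the version orphans every old key of the class at once;
    # no enumeration or per-key delete is needed.
    store[f"version:{cls}"] = class_version(cls) + 1

store[make_key("product", "7")] = "kettle"
print(make_key("product", "7") in store)   # the entry is visible
invalidate_class("product")
print(make_key("product", "7") in store)   # new version, old entry orphaned
```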
Getting Started with Distributed Caching --- Memcached