Problems
For highly concurrent, high-traffic web applications, database access has always been a headache of a bottleneck. When your architecture is still built on a single database and the peak connection count of the pool has reached 500, your application is not far from the edge of collapse. Many small-site developers focus at first on product requirements and neglect the overall performance and scalability of the application, only to discover one day that the site has crashed under heavy traffic, when it is too late to cry. So we must plan ahead: before the database goes on strike, we should do everything we can to lighten its burden. That is the topic of this article.
As we all know, when a request arrives, the web server hands it to the app server, and the app processes it and fetches the relevant data from the DB; but DB access is quite expensive. In particular, when the same data is requested every time, making the database repeat this costly work is simply wasteful. If the database could speak, it would surely complain: "You have asked so many times already, can't you remember it?" Exactly. If the app saves the data in memory after fetching it the first time, next time it can read it straight from memory instead of bothering the database again. Doesn't that lighten the database's load? What's more, reading data from memory is much faster than reading it from the database's disk, so application performance improves as well.
Therefore, we can add a cache layer between the web/app layer and the DB layer, with two goals: 1. reduce the database read load; 2. improve data read speed. Note that the cache medium is memory, and a single server's memory is limited, unlike hard disks, which can reach TB scale. So it is worth considering a distributed cache layer, which makes it much easier to break through the memory limit of a single machine and adds flexibility.
Introduction to memcached
Memcached is an open-source distributed cache system. Many large web applications, such as Facebook, YouTube, Wikipedia, and Yahoo, use memcached to serve hundreds of millions of page views every day. By integrating a cache layer into their web architecture, these applications gain performance while greatly reducing the load on their databases.
For details about memcached, refer to its official website [1]. Here I will briefly introduce how memcached works:
Memcached handles every (key, value) pair (hereafter, a KV pair). Each key is converted by a hash algorithm into a hash key, which makes lookup and comparison cheap and spreads the keys out as evenly as possible. At the same time, memcached uses a two-level hash: the first level decides which server a key belongs to, and the second is the hash table maintained inside each server.
Memcached consists of two core components: the server (MS) and the client (MC). In a memcached query, the MC first computes the hash of the key to determine which MS the KV pair lives on; once the MS is determined, the client sends the query to that MS to look up the actual data. Because the servers do not talk to each other and there is no multicast protocol between them, the network impact of a memcached query is minimal.
For example, consider the following scenario with three MCs, X, Y, and Z, and three MSs, A, B, and C:
Setting a KV pair
X wants to set key = "foo", value = "Seattle"
X obtains the list of MSs and hashes the key; the hash value determines which MS the KV pair belongs to
B is selected
X connects to B; B receives the request and stores (key = "foo", value = "Seattle")
Getting a KV pair
Z wants the value of key = "foo"
Z computes the same hash with the same algorithm and determines that key = "foo" lives on B
Z connects to B and obtains value = "Seattle" from B
Any later request from X, Y, or Z for the value of key = "foo" will be sent to B
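In code, the client-side routing looks roughly like the Python sketch below. The server addresses and the simple modulo mapping are illustrative; real clients use more refined schemes, but the point is that every MC runs the same algorithm and therefore agrees on where each key lives.

    import hashlib

    # The three servers play the roles of A, B, and C above.
    servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

    def pick_server(key: str) -> str:
        # Hash the key, then map the hash onto the server list.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return servers[h % len(servers)]

    # X, Y, and Z all compute the same thing, so every request for
    # key = "foo" goes to the same MS.
    print(pick_server("foo"))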
Memcached server (MS)
Memory Allocation
By default, an MS manages memory with a built-in slab allocator rather than the standard malloc/free of the C runtime. The main purpose of the slab allocator is to avoid memory fragmentation, which would otherwise force the operating system to spend more time hunting for memory blocks that are logically contiguous but physically scattered. With the slab allocator, the MS allocates memory in large pages and reuses them over and over. Of course, because slab chunks come in fixed sizes, memory can still be wasted when an item's size does not match the chunk size.
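To see where the waste comes from, here is a small sketch that mimics slab-class sizing with memcached's default growth factor of 1.25; the 96-byte starting chunk is an assumption for illustration.

    # Each slab class holds chunks ~1.25x the size of the previous class.
    size = 96.0                        # assumed smallest chunk, in bytes
    classes = []
    while size <= 1024 * 1024:         # items are capped at 1 MB
        classes.append(int(size))
        size *= 1.25                   # memcached's default growth factor (-f)

    def wasted_bytes(item_size: int) -> int:
        # An item goes into the smallest chunk that fits; the rest is wasted.
        chunk = min(c for c in classes if c >= item_size)
        return chunk - item_size

    print(wasted_bytes(100))           # a 100-byte item lands in a 120-byte chunk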
At the same time, the MS imposes limits on both keys and data: a key may not exceed 250 bytes, and an item may not exceed the chunk size limit of 1 MB.
Because the hash algorithm used by the MC does not take the memory size of each MS into account, the MC in theory distributes KV pairs evenly across all MSs. If the MSs have different amounts of memory, memory utilization suffers. One workaround is to take the greatest common divisor of the MSs' memory sizes and run, on each machine, n instances with capacity equal to that GCD. This is equivalent to having many sub-MSs of identical capacity, which improves overall memory utilization.
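A quick sketch of this GCD-based sizing, assuming three MSs with 2 GB, 3 GB, and 4 GB of memory (the figures are illustrative):

    import math
    from functools import reduce

    memory_gb = [2, 3, 4]                      # per-server memory
    gcd = reduce(math.gcd, memory_gb)          # greatest common divisor: 1 GB
    instances = [m // gcd for m in memory_gb]  # run 2, 3, and 4 one-GB instances
    print(gcd, instances)                      # -> 1 [2, 3, 4]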
Cache Policy
When the hash table of an MS is full, newly inserted data replaces old data. The eviction policy is LRU (Least Recently Used), combined with a validity period on each KV pair. The validity period is set by the app on the MC side and passed to the MS as a parameter.
At the same time, the MS uses lazy expiration: it does not run an extra process to watch for and delete expired KV pairs in real time. Instead, expired entries are reclaimed only when needed, that is, when new data is inserted and no free space is left to hold it.
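From the client's point of view, the validity period is just a parameter on the set call. A minimal sketch using the pymemcache library (the server address and key are illustrative):

    from pymemcache.client.base import Client

    mc = Client(("127.0.0.1", 11211))
    # The app attaches the validity period to the KV pair; the MS treats
    # the entry as expired 60 seconds from now.
    mc.set("user:42", "alice", expire=60)
    print(mc.get("user:42"))   # b'alice' within 60 seconds; None afterwards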
Caching Database Queries
Currently one of the most popular uses of memcached is caching database queries. Here is a simple example:
The app needs to fetch the user data for userid = xxx, with a query along the lines of:
"Select * from users where userid = xxx"
The app first asks the cache for the data stored under "user:userid" (the key scheme can be defined and agreed upon in advance). If it is there, the app returns the data directly; if not, the app reads it from the database and calls the cache's add function to put the data into the cache.
When the data needs to be updated, the app calls the cache's update function to keep the database and the cache in sync.
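A sketch of this read path in Python, again with pymemcache; query_db is a hypothetical stand-in for whatever database access layer you use:

    import json
    from pymemcache.client.base import Client

    mc = Client(("127.0.0.1", 11211))

    def query_db(sql, *args):
        # Hypothetical stand-in for the real database access layer.
        raise NotImplementedError

    def get_user(userid):
        key = "user:%d" % userid               # the agreed-upon key scheme
        cached = mc.get(key)
        if cached is not None:                 # cache hit: return directly
            return json.loads(cached)
        row = query_db("SELECT * FROM users WHERE userid = %s", userid)
        mc.add(key, json.dumps(row))           # cache miss: load from DB, then cache
        return row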
The example above also shows that once the data in the database changes, we must update the cached copy promptly so that the app reads correct, up-to-date data. Of course, we could simply record an expiration time on each cache entry and let its lapse trigger a refresh, but there is always a window of delay during which the app may read stale data from the cache; this is the well-known dog-pile problem. (I will discuss this issue in a future article.)
Data redundancy and fault prevention
By design, memcached provides no data redundancy. It is meant to be a large-scale, high-performance cache layer, and adding redundancy would only bring design complexity and extra overhead.
If the data on one MS is lost, the app can still fetch it from the database. It is more prudent, however, to provision extra MSs so that the cache keeps working when some of them fail; otherwise the database may be overloaded the moment the app can no longer get data from the cache.
To reduce the impact of an MS failure, one option is a hot-standby scheme: replace the failed MS with a standby machine that takes over the failed MS's IP address, and let the data be reloaded over time.
Another way is to increase the number of MS nodes and have the MC monitor each node's state in real time. If a node stops responding for too long, the MC removes it from its list of available servers and re-hashes keys onto the remaining nodes. Of course, this creates its own problem: a key that used to be stored on B may now map to C. This scheme therefore has its own weaknesses, and it is best combined with the hot-standby scheme to minimize the impact of failures.
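The weakness is easy to demonstrate. Reusing the illustrative modulo routing from the earlier sketch, dropping one node out of three remaps roughly two thirds of the keys:

    import hashlib

    def pick(key, servers):
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return servers[h % len(servers)]

    keys = ["user:%d" % i for i in range(1000)]
    before = {k: pick(k, ["A", "B", "C"]) for k in keys}
    after = {k: pick(k, ["A", "C"]) for k in keys}   # B dropped from the list
    moved = sum(1 for k in keys if before[k] != after[k])
    print(moved)   # roughly two thirds of the keys now live on a different MS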
Memcached client (MC)
Memcached clients are available in many languages, including Java, C, PHP, and .NET; for details, see the memcached API page [2].
You can select a suitable client for integration based on your project needs.
Cache-based Web application architecture
With cache support, we can add a cache layer between the traditional app layer and the DB layer. Each app server can bind an MC, and every read is attempted from the MS first; only when the data is not there does the app fall back to the DB layer. When data is updated, besides sending the update SQL to the DB layer, the app also sends the updated data to the MC so that the MC can refresh the data in the MS.
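The update path, in the same sketch style as before; execute_db and the key scheme are hypothetical placeholders:

    import json
    from pymemcache.client.base import Client

    mc = Client(("127.0.0.1", 11211))

    def execute_db(sql, *args):
        # Hypothetical stand-in for sending the update SQL to the DB layer.
        raise NotImplementedError

    def update_user(userid, fields):
        execute_db("UPDATE users SET name = %s WHERE userid = %s",
                   fields["name"], userid)              # 1. update the database
        mc.set("user:%d" % userid, json.dumps(fields))  # 2. refresh the cache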
If one day our databases can talk to the MS directly, the update task could be handed to the DB layer: every time the database updates its data, it would automatically refresh the corresponding data in the MS, further reducing the logic at the app layer.
However, every read that misses the cache still bothers the database. To reduce the load on the database even further, we can deploy read/write splitting and let slave databases handle the read operations, while the master database handles three tasks: 1. updating data; 2. synchronizing the slaves; 3. updating the cache.
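A toy sketch of this division of labor; the connection placeholders stand for whatever DB driver you use:

    import random

    master_conn = "master:3306"                   # placeholder for the master connection
    slave_conns = ["slave1:3306", "slave2:3306"]  # placeholders for slave connections

    def route(sql: str) -> str:
        # Reads go to a slave; everything else goes to the master, which also
        # synchronizes the slaves and refreshes the cache.
        if sql.lstrip().upper().startswith("SELECT"):
            return random.choice(slave_conns)
        return master_conn

    print(route("SELECT * FROM users WHERE userid = 1"))          # a slave
    print(route("UPDATE users SET name = 'x' WHERE userid = 1"))  # the master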
Cache-based web architectures like these have proved effective in practice, greatly reducing database load and improving web performance. Naturally, they can also be adapted to the specific application environment to achieve the best performance under different hardware conditions.
Vision of the future
The emergence of memcached can fairly be called revolutionary: for the first time we realized that memory could serve as a storage medium for caching data on a large scale to improve application performance. Still, it is quite young, and plenty of improvements remain to be made, for example:
How to use memcached to implement a cached database, so that the database itself runs in memory. Tangent Software's memcached_engine [3] has done a lot of work in this direction, but the current version is still at the laboratory stage.
How to clear keys in batches easily and efficiently. Because keys are hashed across different servers, clearing a whole class of keys in bulk is very troublesome. Memcached is itself one big hash table with no key-enumeration capability, so it cannot tell how many keys of a given class exist, or on which servers they are stored; yet this kind of function is often needed in real applications.
Feedback
I am also new to memcached, so strictly speaking I am still a novice, and the discussion above is only a rough one. If anything is wrong, please do correct me. And of course, if you have any memcached-related questions or suggestions, feel free to contact me.
Email: rongwei.yang@dianping.com
Reference
[1] memcached website: http://danga.com/memcached/
[2] memcached API page: http://danga.com/memcached/apis.bml
[3] memcached_engine: http://tangent.org/506/memcache_engine.html
Source: http://it.dianping.com/use-memcached-to-build-high-performance-web-application.htm