Source: Linux Community Author: Ajun_studio
1. What is memcached?
Memcached is most often used to speed up applications, and here we will focus on best practices for deploying it in your applications and environments: deciding flexibly what should and should not be cached, how to handle the cached data, and how to keep memcached in sync with the underlying data. Every application, and Web applications in particular, needs to optimize how quickly it can respond to clients, yet in most cases the same information is returned over and over. Loading that data from the original source (a database or the file system) is inefficient, especially if you run the same query every time the information is accessed. If you can load it directly from memory instead, it is easy to imagine how much faster that will be.
Many Web servers can be configured to serve responses from a cache, but that does not fit the dynamic nature of most applications. This is where memcached comes in. It provides a common in-memory store that can hold almost anything, including native language objects, which lets you store a wide variety of information and access it from many applications and environments.
Memcached stores key/value pairs. The values must be serializable objects (speaking in Java terms here), or text such as JSON, XML, or HTML. One point worth stressing about a memcached cluster: the servers do not communicate with each other at all. Distribution is handled entirely by the client. You only specify the key on the client and call set, and a hashing algorithm decides which server that key will be stored on.
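As a quick sketch of what this looks like from Java (my example; the article does not prescribe a particular client), here is a minimal program using the spymemcached client. The host names, port 11211, and the Product class are assumptions made for the illustration:

import java.io.Serializable;
import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

public class MemcachedDemo {
    // A simple serializable value object (placeholder class for the example).
    static class Product implements Serializable {
        String name;
        Product(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        // The client receives the full server list; it hashes each key to pick one server.
        MemcachedClient client =
                new MemcachedClient(AddrUtil.getAddresses("cache1:11211 cache2:11211"));

        // Store the value for one hour (3600 seconds); the object is serialized automatically.
        client.set("Taobao_Product_1001", 3600, new Product("example"));

        // Read it back; get() returns Object, so a cast is needed.
        Product p = (Product) client.get("Taobao_Product_1001");
        System.out.println(p != null ? p.name : "miss");

        client.shutdown();
    }
}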
Finally, note that memcached is mainly used for information that does not have strict real-time (freshness) requirements.
2. A scenario for using memcached
Imagine this scenario: an e-commerce site where the left side of the page shows the product categories, the middle shows a list of product search results, and from there you can view product details along with the seller's basic information and reputation.
In this scenario, the categories of a mall do not change very often and do not need to be strictly up to date, so they should go in the cache.
The usual practice:
Run one or more SQL queries against the database for the site-wide categories ----> recursively build the category tree you need ----> process the data ----> render it on the page.
The procedure when using memcached:
First display: check whether the categories are in memcached --- not found ---> run one or more SQL queries against the database ----> put the result into memcached ----> process the data ----> render it on the page.
Second display: check whether the categories are in memcached ----> fetch the data from memcached ----> process the data ----> render it on the page.
The first time through, the data is loaded normally from the database or another data source and then stored in memcached. The next time that information is accessed, it is fetched from memcached rather than from the database, saving time and CPU cycles.
But what if the data in the database changes? How do we update the data in memcached?
The procedure is: update the category information in the database ----> find the key in memcached and delete it ----> re-insert the fresh value into memcached.
Storage operations inside memcached are atomic, so an update never lets a client see only part of the data: clients get either the old version or the new version in full.
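A minimal sketch of that update step, again using the spymemcached client; Cat and CatService are the same placeholder types that appear in the pseudo-code later in this article:

import net.spy.memcached.MemcachedClient;

public class CategoryCacheUpdater {
    // Called right after the category information has been updated in the database.
    public void refreshCategoryCache(MemcachedClient client, CatService catService) {
        String key = "Taobao_Cat_all";      // the same key the read path builds

        // Remove the stale entry...
        client.delete(key);

        // ...and re-insert the fresh value (a plain set() would also overwrite it).
        Cat freshCategories = catService.findAll();
        client.set(key, 3600, freshCategories);
    }
}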
3. Key conventions and naming specifications when using memcached
Here is a summary of three common approaches:
First: project name + a constant string + the ID of the returned PO (or any other unique identifier).
Second: use Spring AOP to intercept the service methods you want to cache, and build the key from the class name + method name + parameter values, which makes it unique.
Third: use your SQL statement + the ID (or the query criteria).
The first approach is the most flexible, and you can embed it directly in your service code. A pseudo-code sketch:
// Key = project name + constant + unique identifier
String key = "Taobao" + "_Cat_" + catAll;
Object o = getKey(key);              // look it up in memcached first
if (o == null) {
    // Cache miss: query the database
    Cat c = catService.findAll();
    setKey(key, c);                  // store the result in memcached
    return c;                        // return the result
} else {
    // Cache hit: no database access needed
    return (Cat) o;                  // return the cached result
}
But embedding this in your service layer pollutes your business logic and couples it tightly to the cache. Our team lead proposed a solution: add a layer between the service and the action that handles the caching, which seems to reduce the coupling.
The second approach suits module-by-module development, since the calls all go through methods of the same classes. The interceptor does cost some performance, but development is more efficient and your service business logic stays untouched.
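As a rough illustration of the second approach (my sketch; the pointcut expression, the one-hour expiry, and the injected cacheClient are all assumptions), a Spring AOP aspect might look like this:

import java.util.Arrays;
import net.spy.memcached.MemcachedClient;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class CacheAspect {

    private MemcachedClient cacheClient;   // assumed to be injected by Spring

    // Intercept the service methods you want to cache (example pointcut).
    @Around("execution(* com.example.service..*.find*(..))")
    public Object cache(ProceedingJoinPoint pjp) throws Throwable {
        // Key = class name + method name + parameter values; strip whitespace,
        // since memcached keys may not contain spaces.
        String key = (pjp.getSignature().getDeclaringTypeName()
                + "." + pjp.getSignature().getName()
                + Arrays.toString(pjp.getArgs())).replaceAll("\\s+", "");

        Object cached = cacheClient.get(key);
        if (cached != null) {
            return cached;                       // cache hit
        }
        Object result = pjp.proceed();           // cache miss: call the real method
        if (result != null) {
            cacheClient.set(key, 3600, result);  // store for one hour
        }
        return result;
    }
}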
Personally I do not think the third approach is very good, because if the SQL statement is long, the key itself eats up a chunk of memory.
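If you do want to key on the SQL text, one common mitigation (my addition, not something the approach above requires) is to hash the statement and its criteria into a short fixed-length key:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SqlKeyUtil {
    // Turn an arbitrarily long SQL string + criteria into a short fixed-length key.
    public static String toKey(String sql, String criteria) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest((sql + "|" + criteria).getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder("sql_");
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();   // always 32 hex characters (plus the prefix), however long the SQL is
    }
}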
Client libraries exist for Java, Perl, PHP and other languages, and they can serialize native language objects for storage in memcached; you can search for a client for your language and experiment with it yourself.
4. How to use memcached flexibly across multiple servers
First, a question: what do you do when a memcached server goes down?
The key point is that the cache must never be the only source of your information. You cannot treat memcached as your database; it is just a cache, and once a server goes down its contents are gone, which would be frightening if you had nowhere else to turn. You must be sure you can reload the data from somewhere else (such as your MySQL database). Some people will think: I can use multiple servers that replicate each other's data, so if one goes down the others carry on. I think that is a bad idea. Suppose you use three servers with 1 GB of memory each and copy the data onto all three: in reality you only have 1 GB of usable memory and have wasted two servers, which is a very high price.
You can solve this differently: still use three servers, but do not give them the same data; that is, nothing is replicated from one server to another. If one of them goes down, the next time you need that data you load it from the database again, and it gets stored on one of the two remaining servers. The advantage: with the same three servers you no longer have just 1 GB of usable memory as in the first scheme, you now have 3 GB. Why wouldn't you smile at that? You only go back to the database at the moment a server drops out; afterwards the data comes from the cache again.
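As a sketch of this setup (the host names and the choice of consistent hashing are assumptions on my part), the client is simply handed the full server list and spreads the keys across them without any replication:

import net.spy.memcached.AddrUtil;
import net.spy.memcached.KetamaConnectionFactory;
import net.spy.memcached.MemcachedClient;

public class MultiServerDemo {
    public static void main(String[] args) throws Exception {
        // Three 1 GB servers: each key lives on exactly one of them, so roughly 3 GB is usable.
        MemcachedClient client = new MemcachedClient(
                new KetamaConnectionFactory(),   // consistent hashing: losing one server
                                                 // only affects the keys stored on it
                AddrUtil.getAddresses("cache1:11211 cache2:11211 cache3:11211"));

        // The client hashes the key and talks only to the server responsible for it.
        client.set("Taobao_Cat_all", 3600, "category tree placeholder");
        Object value = client.get("Taobao_Cat_all");
        System.out.println(value);

        client.shutdown();
    }
}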
By this point, I think you have a reasonable understanding of memcached.
Remember that memcached is not a database; it is just memory.
It is never the only source of your information; it sits alongside the database to speed up queries.
How you design your keys during development is very important and makes later maintenance much easier.
And with multiple servers, use your RAM efficiently rather than replicating everything.
For installing memcached, see the memcached installation tutorial: http://www.linuxidc.com/Linux/2012-03/56500.htm