Overview of Cache Policy Design for Large Web Sites

In the previous article I gave a brief design of the crazy code cache configuration, and it may have been a bit vague; several readers raised questions and started a discussion. One point needs to be clarified first: unlike common cache policies, our cache policy focuses on update policies rather than read-only policies. Read-only caches and common cache policies are not difficult to implement; what we need to solve are non-common caches, concurrently updated caches, scalable caches, and distributed cache updates. The common cases can be handled easily without much extra machinery.

Consider the problem this way. In a multi-user concurrent system, if a cached copy is maintained for every user while guaranteeing timely updates and doing only the processing that is actually necessary, it is hard to imagine an effective process that could maintain all of those per-user copies. The per-user nature of the stored data also makes distributed cache processing and distributed update communication difficult, and it is hard to achieve a useful hit rate for pages with few visits and little common content, such as an individual user's blog.

The key points of the discussion on cache policies are:

A. Cache policy for massive non-common data
B. Cache policy for concurrent updates at the data cache level
C. Cache policy for concurrent data storage
D. Distributed cache policies
E. Search-based cache policy

We will not discuss static pages or similar techniques here. Static pages are well suited to the early stage of a system, when the user base is not very large; once clusters are involved, the costs of static generation, I/O, maintenance, expansion, and updates far exceed the cost of a cache policy. (We do also have a cache-based static processing solution, which will be discussed later.) Our goal is to establish a scalable and easy-to-maintain cache policy. The specific problems are analyzed below.

For question A: a common blog system is the best example. Each user's homepage is relatively personalized data with little commonality. A conventional solution would maintain a cached copy for each user's pages, which for blog sites with low traffic is an enormous waste. For caching large amounts of non-common data there are several techniques (a combined sketch of techniques 1 through 3 follows this list):

1) Quantify the cache targets and assign each a cache weight (weight classification). The purpose is simple: cache only data that is worth caching. First extract the active users and high-traffic users, and cache their data in groups and shards. (For a dating SNS system this is the so-called "beauty effect".)

2) Temporary persistence of non-persistent caches. Make every data query count: for data that misses the cache, or whose weight is too low to be cached, serialize the query result and store it statically. On the next miss, read the persisted copy first instead of repeating the data query. Think of it as temporary data storage, for example at a designated location on a sub-server.

3) Data-update-driven cache cleanup (one-time use). Once a temporarily persisted copy becomes invalid because the data it depends on has been modified, delete it directly. The cache entry is created and stored only on access; as soon as it is modified or expires, we discard it immediately.

4) Cache update proxy rules. Use separate threads for maintenance and validity checking, keeping them separated from the master program as much as possible, to clear invalid cache entries and temporarily persisted data.
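As an illustration of techniques 1 through 3 combined, here is a minimal sketch in Python. The article does not prescribe an implementation, so the weight threshold, the storage path, and query_database are all hypothetical placeholders.

```python
import os
import pickle
import time

CACHE_WEIGHT_THRESHOLD = 10        # hypothetical cutoff for "worth caching in memory"
PERSIST_DIR = "/tmp/temp_persist"  # hypothetical location of the temporary persistence store

memory_cache = {}    # key -> (value, stored_at): the weighted in-memory cache (technique 1)
access_counts = {}   # key -> access count, used as a crude cache weight

def query_database(key):
    """Placeholder for the real, expensive data query."""
    raise NotImplementedError

def get(key):
    access_counts[key] = access_counts.get(key, 0) + 1

    # Technique 1: high-weight (active, high-traffic) data lives in memory.
    if key in memory_cache:
        return memory_cache[key][0]

    # Technique 2: low-weight data falls back to its temporarily persisted
    # copy instead of repeating the database query.
    path = os.path.join(PERSIST_DIR, f"{key}.bin")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)

    value = query_database(key)
    if access_counts[key] >= CACHE_WEIGHT_THRESHOLD:
        memory_cache[key] = (value, time.time())
    else:
        os.makedirs(PERSIST_DIR, exist_ok=True)
        with open(path, "wb") as f:
            pickle.dump(value, f)  # serialized and stored statically
    return value

def invalidate(key):
    """Technique 3: on modification of the underlying data, discard both copies."""
    memory_cache.pop(key, None)
    path = os.path.join(PERSIST_DIR, f"{key}.bin")
    if os.path.exists(path):
        os.remove(path)
```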
For question B: in a simple cache policy, every request of the application can operate directly on a single shared, non-persistent cache. Under concurrent updates, even a small number of concurrent writers can in practice flush the entire cache pool, producing a low hit rate and a high re-initialization rate, so the cache policy contributes nothing. The treatment is the same as the one mentioned in A.4: an independent cache update process handles the work, and every request in the application that involves a cache update is executed through a dedicated update proxy. The scheme is relatively simple, so it will not be elaborated beyond the sketch below.
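A minimal sketch of such an update proxy, assuming a single-process application where request threads share one in-memory cache; the queue-and-worker shape is my own illustration, not the article's code.

```python
import queue
import threading

cache = {}                    # the shared cache pool
update_queue = queue.Queue()  # all cache writes are funneled through here

def update_worker():
    # The dedicated update proxy: the only code that ever mutates the cache,
    # so concurrent requests cannot trample or flush the pool directly.
    while True:
        item = update_queue.get()
        if item is None:      # sentinel used to shut the worker down
            break
        key, value = item
        cache[key] = value
        update_queue.task_done()

threading.Thread(target=update_worker, daemon=True).start()

def update_from_request(key, value):
    """Called from any request thread: enqueue the write instead of applying it."""
    update_queue.put((key, value))
```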
For question C: as mentioned in the previous article, concurrent data updates cause database I/O pressure, timeouts, deadlocks, and blocked threads. We handle this with a write cache. This is not a traditional read cache, so let us name it a write cache; it can be understood as something like a hard disk's write buffer. The handling here is somewhat more involved, and it also touches the read-only cache update problem, so different systems have to be analyzed from different angles. Take a common webgame as an example; there are two typical cases (a write-behind sketch follows this list):

1) For the webgame's end users (players), each online user's data is non-common (question A again), and in a combat scenario every group of data is changing constantly. If we stored every data change in the database in the form of logs, the database would obviously be under great pressure. What we actually need to record is only the result of the battle; there is no need to save the battle process. So we use the write cache to absorb the intermediate data operations. The process is very simple and can be handled with server-side variables.

2) For the webgame's server role, when there are many users in a combat scenario and the volume of data updates is very large, the handling in case 1 may not be enough. We can then abstract the cache further: maintain a single cache object for a fixed window (say, 3 minutes) during which all data operations are recorded and updated by the cache process, while another process stores the data asynchronously on a schedule.
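A minimal sketch of the write cache in case 2, under two assumptions the article implies but does not spell out: updates within a window can be merged in memory because only the latest state per key matters (the battle result, not the process), and a background thread persists the merged state on the schedule. persist_fn stands in for the real batched database write.

```python
import threading
import time

FLUSH_INTERVAL = 180  # seconds: the 3-minute window mentioned above

class WriteCache:
    """Absorbs high-frequency writes in memory and stores them asynchronously."""

    def __init__(self, persist_fn, interval=FLUSH_INTERVAL):
        self._dirty = {}              # key -> latest value within this window
        self._lock = threading.Lock()
        self._persist = persist_fn    # e.g. a batched database write
        self._interval = interval
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def write(self, key, value):
        # The hot path: no database I/O, just an in-memory merge.
        with self._lock:
            self._dirty[key] = value

    def _flush_loop(self):
        # The separate process that stores data asynchronously on a schedule.
        while True:
            time.sleep(self._interval)
            with self._lock:
                batch, self._dirty = self._dirty, {}
            if batch:
                self._persist(batch)
```

Usage would look like `WriteCache(lambda batch: db.bulk_update(batch))`, where `db.bulk_update` is a hypothetical batched store.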
For question D: this is the common distributed cache server group, and the problems to solve are inter-server communication and data consistency. We have four processing rules (sketches for rules 1 and 3 follow the list):

1) Group and index cached data effectively. The goal is to minimize data coupling between cache servers, or even eliminate it entirely, for example by partitioning the cache by user ID or by document category.

2) Update cached data effectively. If the data is grouped effectively, this reduces to the solution of C.2. Unlike C.2, however, a cache group may not sit on a single group of servers, so delays in communication between the cache and the database can be involved. To ensure that cached writes are passed to the database in a timely manner, an additional cache-inspection process is needed for this task: checking data integrity and backing up the data held in the two cache segments.
3) Data integrity between cache servers. For data that cannot be grouped, such as user authentication data or data scoped to a time period, we must keep the two groups of data synchronized. The best solution is to clear the corresponding cache segments on update and let them be re-initialized over the inter-cache-server connection on the next access.

4) Cache routing. How well rule 3 works depends on the physical links. If the cache servers are geographically far apart, we also need a queue process for synchronization and data correction, which we call cache routing.
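For rule 1, a minimal sketch of partitioning by user ID; the server list is hypothetical. A stable hash keeps the mapping consistent across processes, and because each user maps to exactly one server, the servers share no state.

```python
import hashlib

CACHE_SERVERS = ["cache-0:11211", "cache-1:11211", "cache-2:11211"]  # hypothetical

def server_for(user_id: str) -> str:
    # Deterministic: the same user ID always lands on the same cache server.
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return CACHE_SERVERS[int(digest, 16) % len(CACHE_SERVERS)]
```

A production system would more likely use consistent hashing so that adding a server does not remap most keys; plain modulo is shown for brevity. For rule 3, a sketch of the clear-on-update idea as a broadcast invalidation; the peer list and wire format are again my own assumptions.

```python
import json
import socket

PEERS = [("cache-0", 9000), ("cache-1", 9000)]  # hypothetical peer cache servers

def broadcast_invalidate(segment: str):
    # Tell every peer to clear the affected cache segment; the next access
    # re-initializes it, which is the synchronization scheme rule 3 describes.
    message = json.dumps({"op": "invalidate", "segment": segment}).encode()
    for host, port in PEERS:
        try:
            with socket.create_connection((host, port), timeout=1.0) as sock:
                sock.sendall(message)
        except OSError:
            pass  # an unreachable peer is where rule 4's queue ("cache routing") takes over
```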
For problem E: with a distributed cache, a multi-condition search often involves multiple cache servers. I have not yet worked out a complete solution; for now I use the perfunctory principle and the integration principle (a sketch of the former follows).

The perfunctory principle: search-type data is in many cases not time-critical. Search results can be delivered to users a little late, so the data behind a search can lag by 10 minutes or longer.

The integration principle: consolidate the searched fields and tables, and share the load with an independent read-only query server.

Source: http://www.daxi8.cn/index.php/archives/153/
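A minimal sketch of the perfunctory principle: cache search results with a long TTL so a multi-condition search does not fan out across cache servers on every request. run_search is a placeholder for the real query, which under the integration principle would go to the independent read-only query server.

```python
import time

SEARCH_TTL = 600      # seconds: "delayed for 10 minutes or longer"
_search_cache = {}    # query -> (results, cached_at)

def run_search(query: str):
    """Placeholder for the real search against the read-only query server."""
    raise NotImplementedError

def search(query: str):
    entry = _search_cache.get(query)
    if entry and time.time() - entry[1] < SEARCH_TTL:
        return entry[0]  # possibly up to 10 minutes stale, which is acceptable here
    results = run_search(query)
    _search_cache[query] = (results, time.time())
    return results
```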
