When learning about operating systems, you will encounter cache scheduling algorithms. The cache page scheduling algorithm works like this: a fixed amount of page space is allocated in advance; when a page is used, the cache is first queried to see whether that page is already present, and if so it is returned directly. If the page is not found and the page space is not full, the new page is simply cached. If the space is full, an old page is evicted to free space and the new page is cached in its place, making the page fast to retrieve the next time it is used (a generic sketch follows the flowchart below).
(Figure: cache scheduling flowchart)
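To make this flow concrete, here is a minimal Python sketch (the names `PageCache`, `pick_victim`, and `load_page` are my own, purely illustrative); the three policies discussed below differ only in how `pick_victim` chooses a page to evict:

```python
class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}  # page id -> page content

    def get(self, page_id, load_page):
        if page_id in self.pages:              # hit: return the cached page directly
            return self.pages[page_id]
        if len(self.pages) >= self.capacity:   # miss on a full cache: free a slot
            victim = self.pick_victim()        # policy-specific choice
            del self.pages[victim]
        self.pages[page_id] = load_page(page_id)  # miss: cache the new page
        return self.pages[page_id]

    def pick_victim(self):
        raise NotImplementedError  # FIFO / LRU / LFU each decide this differently
```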
The caching mechanism described above is the same in every case, but implementations differ in how they evict old pages, which gives rise to different cache scheduling methods. The most common cache expiration policies are FIFO, LRU, and LFU.
1. FIFO (first in, first out): the page that entered the cache earliest is evicted first, and the most recently added page is evicted last, exactly matching the behavior of a queue.
2. LRU (least recently used): evict the page that has gone unused for the longest time.
3. LFU (least frequently used): evict the page that is used least often.
The following explains in detail how the three algorithms are implemented. The explanation is adapted from http://blog.csdn.net/yangpl_tale/article/details/44998423
First, FIFO
According to the "FIFO (first in,first out)" principle of the elimination of data, just in line with the characteristics of the queue, data structures using queue queues to achieve.
Such as:
1. Newly cached data is inserted at the tail of the FIFO queue, and entries move through the queue in order;
2. When eviction is needed, the data at the head of the FIFO queue is removed.
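A minimal Python sketch of this queue-based scheme, using a deque plus a dict (class and method names are my own, not from the linked article):

```python
from collections import deque

class FIFOCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()   # insertion order: head = oldest entry
        self.data = {}         # key -> value

    def get(self, key):
        return self.data.get(key)  # a hit does NOT change the order in FIFO

    def put(self, key, value):
        if key not in self.data:
            if len(self.queue) >= self.capacity:
                oldest = self.queue.popleft()   # step 2: evict the queue head
                del self.data[oldest]
            self.queue.append(key)              # step 1: insert at the queue tail
        self.data[key] = value
```

Note that, unlike LRU below, a cache hit leaves the queue untouched; only insertion order determines eviction.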
Second, LRU
The LRU (least recently used) algorithm evicts data based on its historical access record; the core idea is that "if data has been accessed recently, the chances of it being accessed again in the future are higher."
The most common implementation uses a linked list to hold the cached data; the detailed algorithm works as follows (a sketch follows the list):
1. New data is inserted at the head of the list;
2. Whenever the cache hits (that is, cached data is accessed), that data is moved to the head of the list;
3. When the list is full, the data at the tail of the list is discarded.
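As a sketch, Python's `collections.OrderedDict` can stand in for the linked list: `move_to_end` plays the role of moving an entry to the list head, and `popitem(last=False)` discards the tail (again, the class name is illustrative):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # rightmost entry = most recently used

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # step 2: a hit moves the entry to the "head"
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)      # refresh recency on overwrite
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # step 3: discard the LRU entry
        self.data[key] = value
```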
Third, LFU
The LFU (least frequently used) algorithm evicts data based on its historical access frequency; its core idea is that "if data has been accessed many times in the past, it will be accessed more frequently in the future."
In LFU, each cache block carries a reference count; all blocks are sorted by reference count, and blocks with the same count are sorted by access time.
The specific implementation is as follows (see the sketch after the list):
1. New data is inserted at the tail of the queue (because its reference count starts at 1);
2. When data in the queue is accessed, its reference count increases and the queue is re-sorted;
3. When data must be evicted, the block at the tail of the sorted queue is deleted.
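A minimal Python sketch of this count-sorted scheme (illustrative names; instead of keeping the queue physically sorted, it scans for the minimum on eviction, which selects the same victim):

```python
import itertools

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}                  # key -> value
        self.meta = {}                  # key -> [reference count, last-access tick]
        self._tick = itertools.count()  # logical clock for breaking count ties

    def get(self, key):
        if key not in self.data:
            return None
        self.meta[key][0] += 1          # step 2: bump the reference count
        self.meta[key][1] = next(self._tick)
        return self.data[key]

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # step 3: evict the lowest count; ties go to the least recently touched
            victim = min(self.meta, key=lambda k: tuple(self.meta[k]))
            del self.data[victim]
            del self.meta[victim]
        self.data[key] = value
        if key not in self.meta:
            self.meta[key] = [1, next(self._tick)]  # step 1: count starts at 1
```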
The first two algorithms are not difficult to implement; LFU is a bit more troublesome. Below is my own idea for implementing the eviction ordering. When a page arrives, one of the following happens (a sketch follows the example below):
1. If the page is not already cached: the tail element is evicted (when the cache is full) and the new element is added at the tail.
2. If the page is already cached: find its position, increment its count, and shift it forward past the preceding elements whose count does not exceed its new count.
For example, the element (a,1) denotes page A with a call count of 1. Suppose the cache currently holds {(f,4), (e,2), (d,2), (c,2), (b,2), (a,1)} and page C is accessed. A binary search finds the position of element C; C is saved aside, the elements from E up to (but not including) C are each shifted back one position, and C (its count now incremented to 3) is placed where E used to be.
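A sketch of this shift-forward idea, keeping the cache as a plain list of [page, count] pairs in descending count order (for brevity I locate the page with a linear scan rather than the binary search mentioned above; all names are my own):

```python
class ShiftLFU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []   # [page, count] pairs, highest count first

    def access(self, page):
        for i, entry in enumerate(self.entries):
            if entry[0] == page:        # case 2: the page is already cached
                entry[1] += 1
                j = i
                # shift forward past every element whose count is now <= ours;
                # in the example, C jumps over D and E into E's old slot
                while j > 0 and self.entries[j - 1][1] <= entry[1]:
                    j -= 1
                self.entries.insert(j, self.entries.pop(i))
                return
        if len(self.entries) >= self.capacity:
            self.entries.pop()          # case 1: miss on a full cache evicts the tail
        self.entries.append([page, 1])  # new page enters at the tail with count 1
```

Running access('c') on the example cache {(f,4), (e,2), (d,2), (c,2), (b,2), (a,1)} yields {(f,4), (c,3), (e,2), (d,2), (b,2), (a,1)}, as described.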
Full implementations of all three algorithms are attached below:
Detailed explanation of the three cache expiration policies LFU, FIFO, and LRU (with implementation code included)