Part One: Preface
Reading excellent source code is one of the fastest ways to improve, both in how you approach a problem and in raw coding ability.
Reading source code with questions in mind makes the learning more efficient. Before starting, set yourself a few small questions, and as you read, keep asking why the author made each choice and whether there is a viable alternative:
1). What data structure backs the cache, and is it a good fit?
2). What is the cache eviction policy?
3). Is the size of the cache pool bounded? What is the eviction strategy once the limit is exceeded?
4). Is it thread-safe?
Part Two: Reading the Source
First, an overview of how the cache is organized:
YYCache is the interface exposed to callers. Behind it sit three caching layers: a memory cache, a file (disk) cache, and a database cache.
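Before going further, here is a minimal sketch of how the public interface is typically used. The cache name and key below are made up for illustration, and the import path may differ depending on how the library is integrated:

#import <YYCache/YYCache.h>

YYCache *cache = [YYCache cacheWithName:@"demo"];           // "demo" is a placeholder name
[cache setObject:@"hello" forKey:@"greeting"];              // stored in memory and on disk
NSString *greeting = (NSString *)[cache objectForKey:@"greeting"];
NSLog(@"cached value: %@", greeting);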
The memory cache is built on a doubly linked list. Each node records pointers to the previous and next nodes, its own key and value, its cost (size), and a timestamp. It is defined as follows:
@interface _YYLinkedMapNode : NSObject {
    @package
    __unsafe_unretained _YYLinkedMapNode *_prev; // retained by dic
    __unsafe_unretained _YYLinkedMapNode *_next; // retained by dic
    id _key;
    id _value;
    NSUInteger _cost;
    NSTimeInterval _time;
}
@end
Why a doubly linked list? Compared with a singly linked list, its main advantage is O(1) access to a node's predecessor, so the implementation presumably relies on operations that detach a node and splice it elsewhere without having to walk the list.
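For example, unlinking an arbitrary node, which the LRU logic needs whenever it touches an existing entry, is O(1) only because each node knows its predecessor. A simplified sketch of such a removal (not the library's exact code):

// Simplified sketch: unlink a node from the doubly linked list in O(1).
// Assumes the _YYLinkedMapNode fields shown above plus _head/_tail pointers.
- (void)removeNode:(_YYLinkedMapNode *)node {
    if (node->_next) node->_next->_prev = node->_prev;  // bypass node in the backward direction
    if (node->_prev) node->_prev->_next = node->_next;  // bypass node in the forward direction
    if (_head == node) _head = node->_next;             // fix up head if it was removed
    if (_tail == node) _tail = node->_prev;             // fix up tail if it was removed
}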
_key and _value are what you would expect. Judging by the names, _cost should be the memory footprint of the entry, and _time should record a timestamp, presumably the time of the last operation on the node. Their exact purpose is not visible yet, but from the usual caching patterns we can guess: _cost is used to bound the total size of the cache pool, and _time is used to expire entries by age, e.g. treating anything cached for more than 5 days as invalid.
_YYLinkedMap is defined as follows:
/**
 A linked map used by YYMemoryCache.
 It's not thread-safe and does not validate the parameters.
 Typically, you should not use this class directly.
 */
@interface _YYLinkedMap : NSObject {
    @package
    CFMutableDictionaryRef _dic; // do not set object directly
    NSUInteger _totalCost;
    NSUInteger _totalCount;
    _YYLinkedMapNode *_head; // MRU, do not change it directly
    _YYLinkedMapNode *_tail; // LRU, do not change it directly
    BOOL _releaseOnMainThread;
    BOOL _releaseAsynchronously;
}
@end
_YYLinkedMap implements the insert/remove/lookup operations on this doubly linked list; there is not much to say about it.
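The operation the LRU policy leans on most heavily is moving an existing node to the head of the list. A simplified sketch of that operation (same idea as the library's, details may differ):

// Simplified sketch: move an already-linked node to the head (MRU position).
// The real class also maintains _totalCost and _totalCount in its insert/remove methods.
- (void)bringNodeToHead:(_YYLinkedMapNode *)node {
    if (_head == node) return;            // already the most recently used entry

    if (_tail == node) {                  // node is the tail: pull the tail back
        _tail = node->_prev;
        _tail->_next = nil;
    } else {                              // node is in the middle: unlink it
        node->_next->_prev = node->_prev;
        node->_prev->_next = node->_next;
    }

    node->_next = _head;                  // splice the node in front of the old head
    node->_prev = nil;
    _head->_prev = node;
    _head = node;
}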
YYMemoryCache: the memory-cache class.
Its code controls cache eviction along three dimensions: entry count, age, and memory cost.
Right from initialization, it starts calling the trim function in a loop:
One thing to note here: dispatch_after enqueues the block for delayed execution and there is no way to cancel it, so the block must check whether self still exists.
The rest is routine: the author has the cache inspect itself every 5 seconds, trimming away, along the three dimensions of memory cost, entry count, and expiration age, any entries that no longer comply with the limits.
- (void)_trimRecursively {
    __weak typeof(self) _self = self;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(_autoTrimInterval * NSEC_PER_SEC)),
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        __strong typeof(_self) self = _self;
        if (!self) return;
        [self _trimInBackground];
        [self _trimRecursively];
    });
}

- (void)_trimInBackground {
    dispatch_async(_queue, ^{
        [self _trimToCost:self->_costLimit];
        [self _trimToCount:self->_countLimit];
        [self _trimToAge:self->_ageLimit];
    });
}
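To make the tail-trimming concrete, here is a simplified sketch of what the age-based pass can look like. The real _trimToAge: additionally deals with locking and releases the evicted nodes asynchronously; _lru is assumed to be the _YYLinkedMap instance held by the memory cache and removeTailNode its tail-removal helper:

// Simplified sketch: evict entries whose last-access time exceeds ageLimit.
// Because the list keeps the most recently used node at the head and the least
// recently used at the tail, it is enough to pop tails until the tail is young enough.
- (void)_trimToAge:(NSTimeInterval)ageLimit {
    NSTimeInterval now = CACurrentMediaTime();          // same clock used to stamp _time
    while (_lru->_tail && (now - _lru->_tail->_time) > ageLimit) {
        [_lru removeTailNode];                          // drop the least recently used entry
    }
}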
Next come the two core operations: reading from the cache and writing to it. The methods are declared as follows:
- (id)objectForKey:(id)key;
- (void)setObject:(id)object forKey:(id)key withCost:(NSUInteger)cost;
Reading through the code, the caching policy is:
Every read and write moves the corresponding node to the head of the list and refreshes its timestamp. Following the usual cache-update logic, eviction should remove the entries that are used least, while frequently used entries should be kept as long as possible; hence deletion happens at the tail and insertion at the head. This also answers the earlier question of why a doubly linked list is needed: it makes removing the tail and moving a node to the head cheap operations.
The code is thread-safe: a pthread_mutex ensures that only one operation runs at a time.
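Putting the eviction policy and the lock together, a simplified sketch of the read path (not the exact library code; _lock and _lru are assumed to be the memory cache's mutex and linked-map ivars):

// Simplified sketch of the read path: look the node up in the CFDictionary,
// refresh its timestamp, move it to the head of the LRU list, and guard it all
// with a pthread_mutex so only one operation runs at a time.
- (id)objectForKey:(id)key {
    if (!key) return nil;
    pthread_mutex_lock(&_lock);
    _YYLinkedMapNode *node =
        (__bridge _YYLinkedMapNode *)CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    if (node) {
        node->_time = CACurrentMediaTime();  // refresh last-access time
        [_lru bringNodeToHead:node];         // mark it as most recently used
    }
    pthread_mutex_unlock(&_lock);
    return node ? node->_value : nil;
}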
Part Three: Postscript
The analysis of the caching logic is essentially complete at this point; the rest is a matter of understanding what the individual functions do. One interesting detail is that the author uses pthread_mutex for thread safety in the memory cache, but a dispatch_semaphore_t in the disk cache; how the two compare in performance is worth looking into separately.
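For reference, a dispatch_semaphore_t created with a value of 1 can serve as a mutual-exclusion lock; a generic sketch (not the disk-cache code itself):

// A semaphore with an initial value of 1 acts like a mutex:
// wait() takes the lock (blocking while it is held), signal() releases it.
dispatch_semaphore_t lock = dispatch_semaphore_create(1);

dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER);   // acquire
// ... critical section: touch shared disk-cache state ...
dispatch_semaphore_signal(lock);                         // release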
The author's thinking is quite rigorous. If there is one point that could still be optimized, it is the disk cache: the file-write operations could be deferred to the moment the app goes into the background, minimizing IO operations.
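That suggestion is not something YYCache does; as a rough illustration of the idea, one could buffer writes in memory and flush them when the app enters the background. The diskCache variable and flushPendingWritesToDisk method below are hypothetical:

// Hypothetical sketch of the suggested optimization: defer file writes and flush
// them in one batch when the app moves to the background, reducing IO while active.
[[NSNotificationCenter defaultCenter] addObserverForName:UIApplicationDidEnterBackgroundNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    // flushPendingWritesToDisk is a hypothetical method that would write the
    // buffered cache entries to disk in a single pass.
    [diskCache flushPendingWritesToDisk];
}];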
A brief analysis of YYCache