Background:
1: .NET Core has no System.Web, hence no HttpRuntime.Cache, and therefore no Cache class under it either.
2: .NET Core provides a new MemoryCache, but after looking at it I found it does not support file cache dependencies.
So, under this premise, even when .NET Core 2.0 comes out next year it may still not support file cache dependencies, and it is worth preparing an implementation in advance.
Before writing this article, I browsed the blog garden (cnblogs) for articles on custom cache classes.
I found that many of them stop at simple add/update/delete operations on a dictionary.
So I decided to write up the complete idea as a supplement.
Below, I introduce the implementation process and the principles behind this cache class.
The core ideas behind implementing a cache class:
1: Use a static Dictionary<string, object> for storage.
A: To handle concurrency on .NET 4.0 or above, you can use System.Collections.Concurrent.ConcurrentDictionary<string, object>.
B: To support .NET 2.0, you need to implement a locked dictionary yourself (which is the case here).
2: Provide add/update/delete/get methods over the dictionary.
3: Provide an expiration policy for timed caching.
4: Provide a file monitoring policy.
5: Test concurrency, performance, and memory consumption.
In what follows I focus on the ideas and show the source code in pieces; the complete source is in the links.
1: Custom thread-safe MDictionary (supports .NET 2.0)
To support .NET 2.0 this is the only option. The idea is very simple: just guard every operation with a lock.
For the detailed source, see: https://github.com/cyq1162/cyqdata/blob/master/Tool/MDictionary.cs
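The locked-dictionary idea can be sketched like this. The class name and members below are illustrative and much simplified from the real MDictionary:

```csharp
using System.Collections.Generic;

// Minimal sketch of a lock-guarded dictionary for .NET 2.0 (illustrative;
// the real MDictionary in the cyqdata repo has far more features).
internal class LockedDictionary<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> dic;
    private readonly object lockObj = new object();

    public LockedDictionary() { dic = new Dictionary<TKey, TValue>(); }
    public LockedDictionary(int capacity, IEqualityComparer<TKey> comparer)
    {
        dic = new Dictionary<TKey, TValue>(capacity, comparer);
    }

    // Every operation takes the same lock, so readers and writers never
    // see the inner dictionary in an inconsistent state.
    public void Set(TKey key, TValue value)
    {
        lock (lockObj) { dic[key] = value; } // add or update atomically
    }

    public bool TryGetValue(TKey key, out TValue value)
    {
        lock (lockObj) { return dic.TryGetValue(key, out value); }
    }

    public bool Remove(TKey key)
    {
        lock (lockObj) { return dic.Remove(key); }
    }

    public int Count
    {
        get { lock (lockObj) { return dic.Count; } }
    }
}
```

On .NET 4.0 and above you could skip this and use ConcurrentDictionary<TKey, TValue> directly, as noted above.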
2: Time Expiration Policy:
private MDictionary<string, object> theCache = new MDictionary<string, object>(2048, StringComparer.OrdinalIgnoreCase);       // key -> cached object
private MDictionary<string, DateTime> theKeyTime = new MDictionary<string, DateTime>(2048, StringComparer.OrdinalIgnoreCase); // key -> expiration time
With theKeyTime, each time the cache is fetched the timestamp can be checked to decide whether the key has expired; if it has, the entry is discarded.
But there is a problem: if a cached item has expired but is never fetched again, won't it sit there forever?
To solve this, a timer is needed to periodically clean out expired entries.
Since the cache is designed as a singleton, a thread can be started in the constructor to run this cleanup as a scheduled task.
There are two strategies here, the previous one and the current one. First, the previous one:
Periodically traverse theKeyTime to find and delete the entries whose expiration time has passed.
Because a collection cannot be modified while it is being enumerated, the matching keys found during traversal are collected into a new list, which is then processed to clear them.
Pros: the logic is simple.
Cons: the cache cannot be modified during the traversal, so it needs a lock (the more cached objects, the longer the lock is held), and every key is traversed every time.
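A sketch of this previous strategy, with illustrative names (not the library's actual API):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the old cleanup strategy: a timer callback scans every key's
// expiration time under a lock, collects the expired keys into a temporary
// list (the dictionary cannot be modified while it is being enumerated),
// then removes them. The scan cost grows with the total number of keys.
internal class TimeScanCache
{
    private readonly Dictionary<string, object> theCache = new Dictionary<string, object>();
    private readonly Dictionary<string, DateTime> theKeyTime = new Dictionary<string, DateTime>();
    private readonly object lockObj = new object();

    public void Set(string key, object value, double cacheMinutes)
    {
        lock (lockObj)
        {
            theCache[key] = value;
            theKeyTime[key] = DateTime.Now.AddMinutes(cacheMinutes);
        }
    }

    public object Get(string key)
    {
        lock (lockObj)
        {
            DateTime expireAt;
            if (theKeyTime.TryGetValue(key, out expireAt) && expireAt > DateTime.Now)
            {
                return theCache[key];
            }
            return null; // missing, or expired but not yet swept
        }
    }

    // Called periodically by a timer thread.
    public void ClearExpired()
    {
        List<string> expiredKeys = new List<string>();
        lock (lockObj)
        {
            foreach (KeyValuePair<string, DateTime> pair in theKeyTime)
            {
                if (pair.Value <= DateTime.Now) { expiredKeys.Add(pair.Key); }
            }
            foreach (string key in expiredKeys)
            {
                theCache.Remove(key);
                theKeyTime.Remove(key);
            }
        }
    }
}
```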
The current one:
private SortedDictionary<int, MList<string>> theTime = new SortedDictionary<int, MList<string>>(); // workTime -> key list
A new time-slice dictionary is added, with a fixed interval (say 5 minutes) as one unit of time.
This way, all the cached expiration times are distributed in order across these time slices, and the timer processes only the slices whose turn has come.
Each time slice records all of its keys.
Cons: extra bookkeeping logic on add.
Pros: the expiration pass no longer needs a long lock and can locate and clear expired data quickly and directly.
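The time-slice idea can be sketched as follows; the class name, member names, and the 5-minute slice width are illustrative assumptions, not the library's actual code:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the time-slice index: expiration times are bucketed into fixed
// slices, so the cleanup timer only touches the buckets whose slice number
// has already passed, instead of scanning every key.
internal class TimeSliceIndex
{
    private const int SliceMinutes = 5; // one bucket covers 5 minutes
    private readonly SortedDictionary<int, List<string>> theTime = new SortedDictionary<int, List<string>>();

    // Map an absolute expiration time to its slice number.
    public static int GetSlice(DateTime expireAt)
    {
        return (int)(expireAt.Ticks / TimeSpan.FromMinutes(SliceMinutes).Ticks);
    }

    public void Add(string key, DateTime expireAt)
    {
        int slice = GetSlice(expireAt);
        List<string> keys;
        if (!theTime.TryGetValue(slice, out keys))
        {
            keys = new List<string>();
            theTime[slice] = keys;
        }
        keys.Add(key);
    }

    // Return (and drop) all keys whose slice is already fully in the past.
    public List<string> TakeExpired(DateTime now)
    {
        int currentSlice = GetSlice(now);
        List<string> expired = new List<string>();
        List<int> doneSlices = new List<int>();
        foreach (KeyValuePair<int, List<string>> pair in theTime)
        {
            // SortedDictionary enumerates keys in ascending order,
            // so everything from here on is current or future.
            if (pair.Key >= currentSlice) { break; }
            expired.AddRange(pair.Value);
            doneSlices.Add(pair.Key);
        }
        foreach (int slice in doneSlices) { theTime.Remove(slice); }
        return expired;
    }
}
```

A key expiring inside the current slice waits until its slice has fully passed, which matches the "timer processes by rhythm" description above.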
3: About the performance of List<T>
"At first my idea was to use a List<key> to hold all the keys, remove only the key on delete, and leave the rest to the timer to clean up.
Since I had only dealt with thread safety, the result was obvious when I ran a performance test."
List<T>.Contains performs a linear scan, so its performance degrades as the amount of data grows.
So the performance problem needed a simple fix, and I knocked together an MList for the time being:
internal class MList<T>
{
    List<T> list;
    Dictionary<T, int> dic;
    public MList()
    {
        list = new List<T>();
        dic = new Dictionary<T, int>();
    }
    public MList(int num)
    {
        list = new List<T>(num);
        dic = new Dictionary<T, int>(num);
    }
    public void Add(T key)
    {
        dic.Add(key, 0);
        list.Add(key);
    }
    public bool Contains(T key)
    {
        return dic.ContainsKey(key);
    }
    public void Remove(T key)
    {
        dic.Remove(key);
        list.Remove(key);
    }
    public void Clear()
    {
        dic.Clear();
        list.Clear();
    }
    public int Count
    {
        get { return list.Count; }
    }
    public List<T> GetList()
    {
        return list;
    }
}
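To see why the extra Dictionary inside MList matters, here is an illustrative micro-benchmark (my own, not from the original post) comparing the two Contains paths:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Illustrative micro-benchmark: List<T>.Contains is a linear scan (O(n)),
// while Dictionary<TKey,TValue>.ContainsKey is a hash lookup (O(1) average).
// This gap is what pairing a Dictionary with the List inside MList closes.
static class ContainsBenchmark
{
    // Returns { listMs, dicMs }: time to run n Contains checks on each structure.
    public static long[] Run(int n)
    {
        List<string> list = new List<string>(n);
        Dictionary<string, int> dic = new Dictionary<string, int>(n);
        for (int i = 0; i < n; i++)
        {
            string k = "key" + i;
            list.Add(k);
            dic.Add(k, 0);
        }

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) { list.Contains("key" + i); } // linear scan per call
        long listMs = sw.ElapsedMilliseconds;

        sw.Restart();
        for (int i = 0; i < n; i++) { dic.ContainsKey("key" + i); } // hash lookup per call
        long dicMs = sw.ElapsedMilliseconds;

        return new long[] { listMs, dicMs };
    }
}
```

With a few tens of thousands of keys, the dictionary side finishes in milliseconds while the list side takes orders of magnitude longer, and the gap widens as n grows.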
4: File cache dependency policy:
In short: how to make the cache expire automatically when a file is modified.
The reason I want to support this strategy is Taurus.MVC: the HTML loaded for a view is cached in memory, and when the HTML is modified, the change needs to be reflected in time by clearing the cache entry and reloading.
private MDictionary<string, string> theFileName = new MDictionary<string, string>();                            // key -> file name
private MDictionary<string, FileSystemWatcher> theFolderWatcher = new MDictionary<string, FileSystemWatcher>(); // folder path -> watcher
private MDictionary<string, MList<string>> theFolderKeys = new MDictionary<string, MList<string>>();            // folder path -> key list
Key points:
1: Use FileSystemWatcher for file monitoring (it turns out .NET Core supports this class).
2: The problem: at first this also looked simple, just open one watcher per file, but it was not:
A: With too many FileSystemWatcher objects, performance drops quickly. B: Different keys can point to the same path.
3: The solution: later I thought of monitoring at the folder level instead, and implemented it per folder:
A: Per folder, the number of watcher objects drops a lot, which solves the performance issue. B: Per folder, the corresponding keys can be grouped, so when a file changes the affected keys can be located quickly.
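The folder-level scheme can be sketched as follows; the class, member, and event names are illustrative assumptions, simplified from the real LocalCache:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Sketch of folder-level file dependency: one FileSystemWatcher per folder
// instead of per file, plus a folder -> cache-keys map so a change event
// can quickly find which entries to invalidate.
internal class FileDependency
{
    private readonly Dictionary<string, FileSystemWatcher> theFolderWatcher = new Dictionary<string, FileSystemWatcher>();
    private readonly Dictionary<string, List<string>> theFolderKeys = new Dictionary<string, List<string>>();
    private readonly Dictionary<string, string> theFileName = new Dictionary<string, string>(); // cache key -> file name

    public event Action<string> KeyInvalidated; // fired with the cache key to clear

    public void Watch(string cacheKey, string filePath)
    {
        string folder = Path.GetDirectoryName(filePath);
        theFileName[cacheKey] = Path.GetFileName(filePath);

        List<string> keys;
        if (!theFolderKeys.TryGetValue(folder, out keys))
        {
            keys = new List<string>();
            theFolderKeys[folder] = keys;

            // Only one watcher per folder, no matter how many files it holds.
            FileSystemWatcher watcher = new FileSystemWatcher(folder);
            watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName;
            watcher.Changed += OnFileChanged;
            watcher.EnableRaisingEvents = true;
            theFolderWatcher[folder] = watcher;
        }
        keys.Add(cacheKey);
    }

    private void OnFileChanged(object sender, FileSystemEventArgs e)
    {
        string folder = Path.GetDirectoryName(e.FullPath);
        List<string> keys;
        if (theFolderKeys.TryGetValue(folder, out keys))
        {
            foreach (string key in keys)
            {
                string fileName;
                // Only invalidate keys that point at the file that actually changed.
                if (theFileName.TryGetValue(key, out fileName) && fileName == e.Name)
                {
                    if (KeyInvalidated != null) { KeyInvalidated(key); }
                }
            }
        }
    }
}
```

A cache would subscribe to KeyInvalidated and remove the named entry, which is how the view HTML gets reloaded after an edit.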
5: Concurrency:
After a cache class is written, testing is unavoidable, especially concurrency testing; after all, a cache is operated on under high concurrency.
So where the cache needs a lock, and where it can do without one, all needs careful thought.
Some tests passed, and some did not at first.
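A concurrency smoke test along these lines (my own sketch, not the article's test code) hammers the same lock-guarded dictionary from many threads; if the locking were wrong, this tends to surface as exceptions from the dictionary internals or a corrupted count:

```csharp
using System.Collections.Generic;
using System.Threading;

// Illustrative concurrency smoke test: several threads add, read, and remove
// entries in one shared lock-guarded dictionary at the same time.
static class ConcurrencySmokeTest
{
    // Returns the number of entries left behind; with balanced add/remove it should be 0.
    public static int Run(int threadCount, int opsPerThread)
    {
        Dictionary<string, int> cache = new Dictionary<string, int>();
        object lockObj = new object();

        Thread[] threads = new Thread[threadCount];
        for (int t = 0; t < threadCount; t++)
        {
            int id = t; // capture a per-thread copy of the loop variable
            threads[t] = new Thread(delegate ()
            {
                for (int i = 0; i < opsPerThread; i++)
                {
                    string key = "t" + id + "_" + i;
                    lock (lockObj) { cache[key] = i; }                // write
                    int v;
                    lock (lockObj) { cache.TryGetValue(key, out v); } // read
                    lock (lockObj) { cache.Remove(key); }             // delete
                }
            });
            threads[t].Start();
        }
        foreach (Thread t in threads) { t.Join(); }
        return cache.Count;
    }
}
```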
6: Performance:
Performance testing was done by comparison with HttpRuntime.Cache.
1 million insertions:
1 million removals:
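The shape of such a benchmark can be sketched like this (an assumption of mine, not the article's harness; the HttpRuntime.Cache side needs System.Web and is omitted):

```csharp
using System.Collections.Generic;
using System.Diagnostics;

// Illustrative insert/remove benchmark: time n Set calls and then n Remove
// calls against a lock-guarded dictionary, the way the article compares
// 1 million insertions and removals against HttpRuntime.Cache.
static class CacheBenchmark
{
    // Returns { insertMs, removeMs } for n operations of each kind.
    public static long[] Run(int n)
    {
        Dictionary<string, object> cache = new Dictionary<string, object>(n);
        object lockObj = new object();

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
        {
            lock (lockObj) { cache["key" + i] = i; }
        }
        long insertMs = sw.ElapsedMilliseconds;

        sw.Restart();
        for (int i = 0; i < n; i++)
        {
            lock (lockObj) { cache.Remove("key" + i); }
        }
        long removeMs = sw.ElapsedMilliseconds;

        return new long[] { insertMs, removeMs };
    }
}
```

Calling Run(1000000) reproduces the 1-million-operation scale used in the article's comparison.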
7: Memory Consumption:
Not tested yet.
Detailed Source code:
https://github.com/cyq1162/cyqdata/blob/master/Cache/LocalCache.cs
Summary:
I originally planned to write this article yesterday, but a training class came up, so I could only write it late at night.
About training See: http://www.cnblogs.com/cyq1162/p/6097445.html
During the training, people asked how to improve their skills. My answer: build wheels.
Someone also asked how I view .NET Core. How do I view it? I've pulled up a bench and am just waiting for you, .NET Core 2.0.
The night is deep; time to sleep ~~~