.NET Cache Management Framework: CacheManager
Caching is a widely used concept in computing. On the hardware side, the CPU has L1 and L2 caches; in software development there are browser caches and distributed caches such as Memcached and Redis. Caches are everywhere because they can greatly improve the speed of both hardware and software. In project development, poor performance often shows up where I/O operations are frequent, and reading the database is a common source of that cost. If you cache frequently used data in a medium that can be read quickly, subsequent requests can fetch the data directly from the cache instead of querying the database, which greatly improves performance. This article discusses how to use the CacheManager framework to conveniently manage caches in .NET development.
1. Introduction and advantages of CacheManager
CacheManager is an open-source .NET cache management framework. It is not a specific cache implementation; rather, it sits as an abstraction layer on top of various cache providers, making it easy for developers to configure and manage different caches and giving upper-layer applications a unified cache interface.
The following are some advantages of CacheManager:
- It makes cache handling and resource allocation easier for developers, even in complex caching scenarios.
- CacheManager can manage multiple cache backends, including in-process memory (MemoryCache), AppFabric, Redis, Couchbase, and Windows Azure Cache.
- It provides additional features such as cache synchronization, concurrent updates, events, and performance counters.
2. Getting Started with CacheManager
CacheManager is easy to use. The following walks through a memory-cache example with CacheManager to quickly get familiar with how it is used.
Create a Console Application in Visual Studio.
Use NuGet to add the CacheManager package references to the project. CacheManager is split into many packages. Among them, CacheManager.Core is required, and the others target different cache platforms.
In this demo we use memory as the cache, so we only need CacheManager.Core and CacheManager.SystemRuntimeCaching.
Configure our cache in the Main function:
using System;
using CacheManager.Core;

namespace ConsoleApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            var cache = CacheFactory.Build("getStartedCache", settings =>
            {
                settings.WithSystemRuntimeCacheHandle("handleName");
            });
        }
    }
}
The code above uses CacheFactory to create a cache instance named getStartedCache, which uses the System.Runtime.Caching memory cache as its store. One cache instance can be configured with multiple handles: memory can serve as one storage medium and a distributed cache such as Redis as another, and both can be used in the same cache instance at the same time. The configuration and usage of such multi-level caching is introduced later.
Next, we will add some code to test the cache.
static void Main(string[] args)
{
    var cache = CacheFactory.Build("getStartedCache", settings =>
    {
        settings.WithSystemRuntimeCacheHandle("handleName");
    });

    cache.Add("keyA", "valueA");
    cache.Put("keyB", 23);
    cache.Update("keyB", v => 42);
    Console.WriteLine("KeyA is " + cache.Get("keyA")); // should be valueA
    Console.WriteLine("KeyB is " + cache.Get("keyB")); // should be 42
    cache.Remove("keyA");
    Console.WriteLine("KeyA removed? " + (cache.Get("keyA") == null).ToString());
    Console.WriteLine("We are done...");
    Console.ReadKey();
}
3. Multi-Level Cache Configuration in CacheManager
In actual development, we often need to use multi-level caching.
A common situation is that you have a distributed cache server such as Redis. An independent cache server lets multiple applications share the cached data, since creating those cache entries is expensive.
Compared with accessing the database, a distributed cache is faster, but it is not as fast as memory: the distributed cache also spends time on serialization and network transmission.
In this case, a layered cache is a good solution. The memory cache is combined with the distributed cache, and frequently used data is read directly from memory, which greatly improves the overall performance of the application.
Reading from an in-process memory cache can be 100 times faster than reading from a distributed cache, or even more.
Using CacheManager to configure multi-level caching is very easy.
var cache = CacheFactory.Build<int>("myCache", settings =>
{
    settings
        .WithSystemRuntimeCacheHandle("inProcessCache") // in-memory cache handle
        .And
        .WithRedisConfiguration("redis", config =>      // Redis cache configuration
        {
            config.WithAllowAdmin()
                .WithDatabase(0)
                .WithEndpoint("localhost", 6379);
        })
        .WithMaxRetries(1000)                           // number of retries
        .WithRetryTimeout(100)                          // retry timeout
        .WithRedisBackPlate("redis")                    // use Redis as the back plate
        .WithRedisCacheHandle("redis", true);           // Redis cache handle
});
In the code above, the memory cache and Redis cache configurations are easy to understand. But what is the role of the BackPlate? Next, let's look at CacheManager's BackPlate mechanism.
4. BackPlate solves the synchronization problem in distributed cache
Large software systems are often divided into many independent sub-projects. To save costs or to facilitate data sharing, the sub-projects often share the same distributed cache server. When multi-level caching is used, this can lead to data inconsistency.
Suppose a data item in the cache is updated by system A. CacheManager will then update that data in all of the cache handles configured for A, which includes the distributed cache. However, system B's memory cache still holds the old, un-updated data. When system B retrieves this record from its cache, the data in its memory cache is inconsistent with the data in the distributed cache.
To prevent this, CacheManager provides a feature called CacheBackplate that attempts to synchronize the caches across multiple systems.
In the multi-level cache configured above, we use Redis as the BackPlate source. That is to say, all data is kept in sync based on the cached data in Redis.
With Redis set as the BackPlate, whenever the inconsistency described above occurs, any modification to the data in Redis triggers CacheManager to update the data in every system's memory cache so that it is consistent with the data in Redis.
How is synchronization completed?
Each time a cache record is removed or updated, CacheManager sends a message carrying the change information through the BackPlate. All other systems receive these messages asynchronously and apply the corresponding updates or deletions to keep the data consistent.
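As an illustration only, the back-plate idea can be sketched as a tiny pub/sub channel. The Backplane and App types below are invented for this sketch and are not CacheManager's real classes; the real implementation also delivers messages asynchronously over Redis pub/sub, whereas this sketch is synchronous. Two apps each keep an in-process cache in front of one shared store, and a change published on the backplane evicts stale in-process copies:

```csharp
using System;
using System.Collections.Generic;

// Invented for this sketch: a synchronous stand-in for the back plate.
class Backplane
{
    private readonly List<Action<string>> subscribers = new List<Action<string>>();
    public void Subscribe(Action<string> onKeyChanged) => subscribers.Add(onKeyChanged);
    public void PublishChange(string key)
    {
        foreach (var notify in subscribers) notify(key);
    }
}

// Invented for this sketch: one application with a local cache
// in front of a shared store (standing in for Redis).
class App
{
    private readonly Dictionary<string, string> local = new Dictionary<string, string>();
    private readonly Dictionary<string, string> shared;
    private readonly Backplane backplane;

    public App(Dictionary<string, string> shared, Backplane backplane)
    {
        this.shared = shared;
        this.backplane = backplane;
        // Evict the stale in-process copy when any app reports a change.
        backplane.Subscribe(key => local.Remove(key));
    }

    public void Put(string key, string value)
    {
        shared[key] = value;
        backplane.PublishChange(key); // evicts old copies everywhere
        local[key] = value;           // keep our own fresh copy
    }

    public string Get(string key)
    {
        if (local.TryGetValue(key, out var v)) return v; // in-process hit
        if (shared.TryGetValue(key, out v))              // fall through to shared store
        {
            local[key] = v;
            return v;
        }
        return null;
    }
}

class BackplaneDemo
{
    static void Main()
    {
        var store = new Dictionary<string, string>();
        var backplane = new Backplane();
        var appA = new App(store, backplane);
        var appB = new App(store, backplane);

        appA.Put("user:1", "Alice");
        Console.WriteLine(appB.Get("user:1")); // Alice (now cached in B's local cache)

        appA.Put("user:1", "Bob");             // B's stale local copy is evicted
        Console.WriteLine(appB.Get("user:1")); // Bob, not the stale Alice
    }
}
```

Without the eviction message, the second read in appB would return the stale "Alice" from its local cache, which is exactly the inconsistency the BackPlate prevents.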
5. ExpirationMode and CacheUpdateMode
Wherever caching is involved, expiration is inevitable. CacheManager provides several simple cache expiration modes.
public enum ExpirationMode
{
    None = 0,
    Sliding = 1,
    Absolute = 2,
}
At the same time, CacheManager also sets different data update policies for multi-level caches.
public enum CacheUpdateMode
{
    None = 0,
    Full = 1,
    Up = 2,
}
With Sliding expiration and the Up update mode, we can set different expiration times for each cache level, so that the most frequently used data stays in memory, where access is fastest, while less frequently requested data stays in the distributed cache. When CacheManager cannot find an item in the memory cache, it tries the distributed cache; once the item is found there, it is written back into the memory cache according to the Up setting.
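The Up behavior can be sketched with a toy two-level lookup. The LayeredCache type below is invented for this illustration and is not part of CacheManager; real handles also deal with expiration, serialization, and concurrency, which are omitted here:

```csharp
using System;
using System.Collections.Generic;

// Toy illustration of CacheUpdateMode.Up (invented for this sketch).
// handles[0] is the fastest level (in-process memory); the last level
// is the slowest (e.g. Redis). A hit in a lower level is copied "up".
class LayeredCache
{
    private readonly List<Dictionary<string, int>> handles = new List<Dictionary<string, int>>();

    public LayeredCache(int levels)
    {
        for (int i = 0; i < levels; i++) handles.Add(new Dictionary<string, int>());
    }

    public void PutAt(int level, string key, int value) => handles[level][key] = value;

    public bool ExistsAt(int level, string key) => handles[level].ContainsKey(key);

    public bool TryGet(string key, out int value)
    {
        for (int i = 0; i < handles.Count; i++)
        {
            if (handles[i].TryGetValue(key, out value))
            {
                // "Up" mode: promote the hit into every faster level above it.
                for (int j = 0; j < i; j++) handles[j][key] = value;
                return true;
            }
        }
        value = 0;
        return false;
    }
}

class UpModeDemo
{
    static void Main()
    {
        var cache = new LayeredCache(2);
        // Simulate an item whose memory copy expired while its Redis copy lives on:
        cache.PutAt(1, "counter", 42);

        cache.TryGet("counter", out var v);              // miss level 0, hit level 1
        Console.WriteLine(v);                            // 42
        Console.WriteLine(cache.ExistsAt(0, "counter")); // True: promoted to memory
    }
}
```

After the lookup, the item exists in the fast level again, so the next read is an in-process hit. This is why a short sliding expiration in memory combined with a long expiration in the distributed cache works well: hot items keep getting promoted back up.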
The specific configuration method is as follows:
var cache = CacheFactory.Build<int>("myCache", settings =>
{
    settings.WithUpdateMode(CacheUpdateMode.Up)
        .WithSystemRuntimeCacheHandle("inProcessCache")  // in-memory cache handle
        .WithExpiration(ExpirationMode.Sliding, TimeSpan.FromSeconds(60))
        .And
        .WithRedisConfiguration("redis", config =>       // Redis cache configuration
        {
            config.WithAllowAdmin()
                .WithDatabase(0)
                .WithEndpoint("localhost", 6379);
        })
        .WithExpiration(ExpirationMode.Sliding, TimeSpan.FromHours(24))
        .WithMaxRetries(1000)                            // number of retries
        .WithRetryTimeout(100)                           // retry timeout
        .WithRedisBackPlate("redis")                     // use Redis as the back plate
        .WithRedisCacheHandle("redis", true);            // Redis cache handle
});
6. Cache Usage Analysis
When using a cache, statistics such as the ratio of hits to misses can help us analyze and adjust the cache settings so that the cache is used more effectively. CacheManager can collect this data for us.
var cache = CacheFactory.Build("cacheName", settings => settings
    .WithSystemRuntimeCacheHandle("handleName")
    .EnableStatistics()
    .EnablePerformanceCounters());
After the statistics feature is configured, we can track how the cache is used. The following code prints the statistics collected in each cache handle.
foreach (var handle in cache.CacheHandles)
{
    var stats = handle.Stats;
    Console.WriteLine(string.Format(
        "Items: {0}, Hits: {1}, Miss: {2}, Remove: {3}, ClearRegion: {4}, Clear: {5}, Adds: {6}, Puts: {7}, Gets: {8}",
        stats.GetStatistic(CacheStatsCounterType.Items),
        stats.GetStatistic(CacheStatsCounterType.Hits),
        stats.GetStatistic(CacheStatsCounterType.Misses),
        stats.GetStatistic(CacheStatsCounterType.RemoveCalls),
        stats.GetStatistic(CacheStatsCounterType.ClearRegionCalls),
        stats.GetStatistic(CacheStatsCounterType.ClearCalls),
        stats.GetStatistic(CacheStatsCounterType.AddCalls),
        stats.GetStatistic(CacheStatsCounterType.PutCalls),
        stats.GetStatistic(CacheStatsCounterType.GetCalls)
    ));
}
7. Conclusion
Caching is a good thing; used well, it can greatly improve performance. Caching itself is a big topic, and this article only introduces the use of CacheManager from the perspective of cache management.
Here are the materials and links related to CacheManager:
Official homepage
http://cachemanager.net/
Source code
https://github.com/MichaCo/CacheManager
Official MVC sample project
https://github.com/MichaCo/CacheManager/tree/master/samples/CacheManager.Samples.Mvc
Recently I have been thinking about how cache usage differs between scenarios. For Internet-facing projects, data consistency requirements are often not high, so cache management may focus on the cache hit rate. For internal application systems, the volume of requests is not large, but data consistency requirements are high, so the cache's data update policy may matter more.
How can we design a cache suitable for the application system? If you are interested, you are welcome to discuss.