Distributed Cache Description:
Distributed caching is all about the "distributed" part. We have already been exposed to many distributed concepts, such as distributed development, distributed deployment, distributed locks, distributed transactions, distributed systems, and so on, so we have a clear picture of what "distributed" means: multiple applications, possibly spread across different servers, working together to ultimately provide a service to the web tier.
Distributed caching has the following advantages:
(1) The cached data seen by all web servers is the same; different applications or different servers do not end up with inconsistent cached data.
(2) The cache is independent of the web servers, so restarting or removing a web server does not change the cached data.
In a traditional monolithic application architecture, user traffic is low; the cache mostly holds user information and a few pages, and most operations read from and write to the database directly. This is often called a simple architecture.
Traditional OA projects such as ERP, SCM, and CRM systems do not have a large user base, mostly for business reasons, so the monolithic architecture is still very common for them. In some systems, however, growing user volume and expanding business eventually expose a database bottleneck.
Here are the two ways of handling this situation that I have learned:
(1) When user traffic is small but the volume of data reads and writes is very large, we generally adopt read-write separation for the DB (one master, multiple slaves) and upgrade the hardware to relieve the database bottleneck.
The disadvantages of this approach are:
1. What do we do when the user volume does become large?
2. The performance gain is limited.
3. It is not cost-effective: improving performance a little costs a lot (for example, raising I/O throughput from 0.9 to 1.0 by upgrading the machine configuration comes at a steep price).
(2) When the number of user visits increases further, we need to introduce a cache. A diagram can describe the cache's approximate role.
The cache is mainly for data that is accessed heavily but changes infrequently; the database remains the persistent store, and frequently changing data is still read from and written to it. I deliberately left the set operation out of the diagram to make the point that the cache can act as a temporary database: we can synchronize the cache with the database through a timed task, and the benefit is that the database's read pressure is transferred to the cache. A sketch of such a timed sync follows.
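As a rough illustration of that idea, here is a minimal sketch of a background task that periodically copies hot data from the database into the cache. It assumes ASP.NET Core 2.1 or later (where BackgroundService is available); the IHotDataRepository abstraction and the "hot-items" key are assumptions, not part of the original article.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Hosting;

// Assumed abstraction over the database query for the hot data.
public interface IHotDataRepository
{
    Task<string> GetHotItemsJsonAsync();
}

// Periodically copies frequently read, rarely changed data from the DB into the cache,
// so most reads hit the cache instead of the database.
public class CacheSyncService : BackgroundService
{
    private readonly IDistributedCache _cache;
    private readonly IHotDataRepository _repository;

    public CacheSyncService(IDistributedCache cache, IHotDataRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Read the hot data from the database and overwrite the cached copy.
            string json = await _repository.GetHotItemsJsonAsync();
            await _cache.SetStringAsync("hot-items", json, stoppingToken);

            // Re-sync every five minutes; the interval is an arbitrary choice for this sketch.
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}

Registering it with services.AddHostedService&lt;CacheSyncService&gt;() would start the loop with the application; how often to re-sync depends on how stale the cached copy is allowed to get.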
The presence of a cache relieves database pressure, but in the following three situations the cache stops doing its job: cache penetration, cache breakdown, and cache avalanche.
Cache penetration: in our programs we usually query the cache first for the data we want. If the data does not exist in the cache (cache invalidation), we have to reach out to the DB to get it, and if this happens too often the database can be brought down, so we need to guard against it. For example, we fetch a user's information from the cache, but someone deliberately requests a user that does not exist; that request bypasses the cache and pushes the pressure back onto the database. To handle this, on the first access we query the cache and find nothing, then query the database and also find nothing; to avoid repeated lookups we cache this "not found" result as well, pushing the pressure back onto the cache. Some people will object: what if the requests use tens of thousands of non-repeating parameters that all dodge the cache? That is why we give these placeholder entries a short expiration time, so the cache cleans them up quickly. A sketch of this idea follows.
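A minimal sketch of that approach, assuming an injected IDistributedCache; the "user:" key prefix, the NotFoundMarker placeholder, and the LoadUserFromDbAsync helper are hypothetical names introduced only for illustration.

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class UserCacheReader
{
    private const string NotFoundMarker = "__NULL__"; // placeholder meaning "no such user"
    private readonly IDistributedCache _cache;

    public UserCacheReader(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetUserJsonAsync(string userId)
    {
        string cached = await _cache.GetStringAsync("user:" + userId);
        if (cached != null)
        {
            // Either real data or the cached "not found" marker; both avoid a DB hit.
            return cached == NotFoundMarker ? null : cached;
        }

        string fromDb = await LoadUserFromDbAsync(userId); // hypothetical DB lookup
        if (fromDb == null)
        {
            // Cache the miss with a short expiration, so fake keys cannot hammer the DB
            // while the placeholder entries are still cleaned up quickly.
            await _cache.SetStringAsync("user:" + userId, NotFoundMarker,
                new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(60)
                });
            return null;
        }

        await _cache.SetStringAsync("user:" + userId, fromDb);
        return fromDb;
    }

    private Task<string> LoadUserFromDbAsync(string userId)
    {
        // Stand-in for the real database query.
        return Task.FromResult<string>(null);
    }
}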
Cache breakdown: this happens when a cache key has an expiration time set and, right at the moment it expires, the program receives highly concurrent access for that key (cache invalidation). A mutex lock can be used to solve this.
Mutex lock principle: put simply, if 10,000 users arrive at the same time, only one of them acquires the right to hit the database and rebuild the cache entry; the remaining visitors, who cannot read from the cache yet, wait in place and then read the rebuilt value from the cache. A single-process sketch follows.
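A minimal sketch of that mutual-exclusion idea within a single process, using SemaphoreSlim so only one caller rebuilds the expired entry while the others wait and then re-read the cache. RebuildFromDbAsync is a hypothetical placeholder; across multiple servers a distributed lock (for example one based on Redis SETNX) would be needed instead.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class HotKeyReader
{
    // One lock per process; this only serializes rebuilds inside a single application instance.
    private static readonly SemaphoreSlim _rebuildLock = new SemaphoreSlim(1, 1);
    private readonly IDistributedCache _cache;

    public HotKeyReader(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetAsync(string key)
    {
        string value = await _cache.GetStringAsync(key);
        if (value != null) return value;

        await _rebuildLock.WaitAsync();
        try
        {
            // Double-check: another waiter may already have rebuilt the entry.
            value = await _cache.GetStringAsync(key);
            if (value == null)
            {
                value = await RebuildFromDbAsync(key); // hypothetical DB query
                await _cache.SetStringAsync(key, value, new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30)
                });
            }
            return value;
        }
        finally
        {
            _rebuildLock.Release();
        }
    }

    private Task<string> RebuildFromDbAsync(string key) => Task.FromResult("db value");
}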
Never expire: some people will think, can't I just not set an expiration time? Yes, but this has a drawback too: we then need to update the cache periodically ourselves, and during that window the cached data lags behind the database.
Cache avalanche: this refers to many cache entries being set to expire at the same time. When that moment arrives and a large amount of data is accessed (cache invalidation), the pressure lands on the DB all at once. Workaround: when setting the expiration time, add a random offset on top of the base expiration so that, as far as possible, large areas of the cache do not fail at the same moment. A sketch follows.
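A minimal sketch of that jitter idea: a small helper that builds DistributedCacheEntryOptions whose lifetime is the base lifetime plus a random 0-300 second offset (the helper name and the jitter range are arbitrary choices for this sketch).

using System;
using Microsoft.Extensions.Caching.Distributed;

public static class JitteredCacheOptions
{
    private static readonly Random _random = new Random();

    // Entries written in the same batch get slightly different lifetimes,
    // so they do not all expire at the same moment.
    public static DistributedCacheEntryOptions WithJitter(TimeSpan baseLifetime)
    {
        int jitterSeconds;
        lock (_random) // Random is not thread-safe
        {
            jitterSeconds = _random.Next(0, 300);
        }
        return new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = baseLifetime + TimeSpan.FromSeconds(jitterSeconds)
        };
    }
}

Usage would then look like _cache.SetString(key, value, JitteredCacheOptions.WithJitter(TimeSpan.FromMinutes(30))).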
Using Redis in ASP.NET Core to implement the cache:
Reference in the project: using Microsoft.Extensions.Caching.Distributed; and use IDistributedCache.
The IDistributedCache interface
The IDistributedCache interface contains synchronous and asynchronous methods. The interface lets you add, retrieve, and delete items in a distributed cache implementation. It contains the following methods (a minimal usage sketch follows the list):
Get, GetAsync
Takes a string key and retrieves the cached item as a byte[], if found in the cache.
Set, SetAsync
Adds an item (as a byte[]) to the cache using a string key.
Refresh, RefreshAsync
Refreshes an item in the cache by key, resetting its sliding expiration timeout (if any).
Remove, RemoveAsync
Removes a cache item by key.
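For orientation, here is a minimal sketch of calling IDistributedCache directly. The byte[] overloads are what the interface itself defines; the GetString/SetString helpers are extension methods from the same namespace. The class and key names here are illustrative only.

using System.Text;
using Microsoft.Extensions.Caching.Distributed;

public class BasicCacheUsage
{
    private readonly IDistributedCache _cache;

    public BasicCacheUsage(IDistributedCache cache) => _cache = cache;

    public void Demo()
    {
        // Set: values are stored as byte[]
        _cache.Set("greeting", Encoding.UTF8.GetBytes("hello"), new DistributedCacheEntryOptions());

        // Get: returns byte[], or null if the key is not cached
        byte[] raw = _cache.Get("greeting");
        string text = raw == null ? null : Encoding.UTF8.GetString(raw);

        // Refresh: resets the sliding expiration without reading the value
        _cache.Refresh("greeting");

        // Remove: deletes the entry
        _cache.Remove("greeting");

        // The string extension methods wrap the same byte[] calls.
        _cache.SetString("greeting", "hello again");
    }
}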
Below is my code wrapper, a class named DistributedCache. It mainly wraps the non-async IDistributedCache methods; only one simple async example is included. (The assumed class skeleton is sketched just below.)
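For context, the methods in the following sections are assumed to live in a wrapper class roughly like this minimal skeleton; the _cache field and the constructor are not shown in the original, so treat them as assumptions.

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class DistributedCache
{
    // The underlying distributed cache implementation (Redis in this article),
    // supplied by the caller, e.g. from dependency injection.
    private readonly IDistributedCache _cache;

    public DistributedCache(IDistributedCache cache)
    {
        _cache = cache;
    }

    // The Get(), GetAsync(), Add(), Remove(), Modify() and Exists() methods
    // from the sections below go here.
}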
1. Get() - get a cache entry
/// <summary>
/// Get a cache entry
/// </summary>
/// <param name="key"></param>
/// <returns></returns>
public object Get(string key)
{
    string returnStr = "";
    if (!string.IsNullOrEmpty(key))
    {
        if (Exists(key))
        {
            returnStr = Encoding.UTF8.GetString(_cache.Get(key));
        }
    }
    return returnStr;
}
2. GetAsync() - get a cache entry asynchronously
/// <summary>
/// Get a cache entry asynchronously
/// </summary>
/// <param name="key"></param>
/// <returns></returns>
public async Task<object> GetAsync(string key)
{
    string returnString = null;
    var value = await _cache.GetAsync(key);
    if (value != null)
    {
        returnString = Encoding.UTF8.GetString(value);
    }
    return returnString;
}
3. Add() - set or add a cache entry
/// <summary>
/// Add a cache entry
/// </summary>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns></returns>
public bool Add(string key, object value)
{
    byte[] val = null;
    if (value.ToString() != "")
    {
        val = Encoding.UTF8.GetBytes(value.ToString());
    }
    DistributedCacheEntryOptions options = new DistributedCacheEntryOptions();
    // Set the absolute expiration time (two ways)
    options.AbsoluteExpiration = DateTime.Now.AddMinutes(30);
    // options.SetAbsoluteExpiration(DateTime.Now.AddMinutes(30));
    // Set the sliding expiration time (two ways)
    options.SlidingExpiration = TimeSpan.FromSeconds(30);
    // options.SetSlidingExpiration(TimeSpan.FromSeconds(30));
    // Add to the cache
    _cache.Set(key, val, options);
    // Refresh the cache entry
    _cache.Refresh(key);
    return Exists(key);
}
4. Remove() - delete a cache entry
/// <summary>
/// Delete a cache entry
/// </summary>
/// <param name="key"></param>
/// <returns></returns>
public bool Remove(string key)
{
    bool returnBool = false;
    if (!string.IsNullOrEmpty(key))
    {
        _cache.Remove(key);
        if (Exists(key) == false)
        {
            returnBool = true;
        }
    }
    return returnBool;
}
5. Modify() - modify a cache entry
/// <summary>
/// Modify a cache entry
/// </summary>
/// <param name="key"></param>
/// <param name="value"></param>
/// <returns></returns>
public bool Modify(string key, object value)
{
    bool returnBool = false;
    if (!string.IsNullOrEmpty(key))
    {
        if (Remove(key))
        {
            returnBool = Add(key, value.ToString());
        }
    }
    return returnBool;
}
6. Exists() - check whether a cache entry exists
/// <summary>
/// Check whether a cache entry exists
/// </summary>
/// <param name="key"></param>
/// <returns></returns>
public bool Exists(string key)
{
    bool returnBool = true;
    byte[] val = _cache.Get(key);
    if (val == null || val.Length == 0)
    {
        returnBool = false;
    }
    return returnBool;
}
That completes the wrapper class. The following shows how to call it from ASP.NET Core.
1. First install the Redis server: any download-and-install guide found online will do.
2. Then download and install a client tool: RedisDesktopManager (convenient for managing the cache).
3. Reference the Microsoft.Extensions.Caching.Redis NuGet package in our project.
4. Register the Redis service in the project's Startup.cs; the code is as follows:
public void ConfigureServices(IServiceCollection services)
{
    // Add the Redis distributed cache service to the service collection
    services.AddDistributedRedisCache(options =>
    {
        // The string used to connect to Redis;
        // Configuration.GetConnectionString("RedisConnectionString") would read it from configuration
        options.Configuration = "localhost"; // Configuration.GetConnectionString("RedisConnectionString");
        // Redis instance name: RedisDistributedCache
        options.InstanceName = "RedisDistributedCache";
    });
    services.AddMvc();
}
5. I use an ASP.NET Core Web API controller; the code is as follows:
1) First, instantiate the wrapper object
private DistributedCache _cache;

/// <summary>
/// Constructor injection
/// </summary>
/// <param name="cache"></param>
public ValuesController(IDistributedCache cache)
{
    _cache = new DistributedCache(cache);
}
2) Then call the methods of the DistributedCache class as follows:
[HttpGet("{id}")]
public string Get(int id)
{
    // Add
    bool boolAdd = _cache.Add("id", "sssss");
    // Verify existence
    bool boolExists = _cache.Exists("id");
    // Get
    object obj = _cache.Get("id");
    // Delete
    bool boolRemove = _cache.Remove("id");
    // Modify
    bool boolModify = _cache.Modify("id", "ssssssss");
    return obj.ToString();
}
Original address: ASP.NET Core Redis for distributed caching