Introduction
For a modern CPU and network stack, the performance bottleneck of an application is usually not the amount of code executed (to achieve the same function) or the available bandwidth, but the cost of accessing resources. In other words, the main culprit that slows our programs down is I/O.
Reading data from a hard disk is an especially time-consuming operation, because most hard disks are mechanical devices, and comparing the speed of a moving mechanism with the speed of electronics is simply not a fair contest.
To work around this bottleneck, people proposed an idea: trade space for time. Since it takes a long time for a program to access the hard disk, put the data into memory and let the program read it from there, saving the wait. This does eliminate the time spent waiting for data, but it occupies memory.
We all know that memory is a scarce and limited resource for a computer. If cached data consumes most of it, the running programs inevitably suffer: think about what happens when a program then needs more memory. The operating system falls back on virtual memory, and what is virtual memory? Space on the hard disk. So we have gone a long way around the bottleneck only to land right back on the disk; the cost outweighs the gain.
Of course, the scenario above assumes the program and the cached data share one computer's memory. If the program has few users, keeping both on one machine has only positive effects and no negative ones. But if our application takes off and is adopted widely, so that our server receives 100,000 requests per minute, this arrangement will seriously hurt performance.
To solve this problem, distributed caching was introduced: data that would otherwise sit on disk is kept in the memory of other computers (not the one running the program), and the number of cache machines can scale from one up to n. That is the main topic of this blog: the distributed caches Memcache (covered in detail) and Redis (covered briefly).
Operation Flow
Download the Memcache server, start the service, download the driver for the .NET platform, add a reference to the driver in the application, and write the program.
Specific Operation (Memcache)
Download the memcached server (Windows version), then install and start it: open a command window, switch to the memcached directory, and run memcached.exe -d install followed by memcached.exe -d start. The service is now installed and running. Finally, reference the appropriate driver (DLL) in your project.
The code is as follows:
static void Main(string[] args)
{
    // Servers that make up the Memcache cluster
    //string[] servers = { "127.0.0.1:11211", "10.0.9.20:11211" }; // cluster
    string[] servers = { "127.0.0.1:11211" };

    // Initialize the socket connection pool
    SockIOPool pool = SockIOPool.GetInstance();
    //SockIOPool pool = SockIOPool.GetInstance("test1"); // named pool
    pool.SetServers(servers);
    pool.InitConnections = 3;          // connections created at startup
    pool.MinConnections = 3;           // minimum pool size
    pool.MaxConnections = 5;           // maximum pool size
    pool.SocketConnectTimeout = 1000;  // connect timeout (ms)
    pool.SocketTimeout = 3000;         // read timeout (ms)
    pool.MaintenanceSleep = 30;        // maintenance thread interval
    pool.Failover = true;
    pool.Nagle = false;                // disable Nagle's algorithm
    pool.Initialize();                 // build the pool

    // Client instance
    MemcachedClient mc = new MemcachedClient();
    mc.EnableCompression = false;      // do not compress values
    //mc.PoolName = "test1";           // use the named object pool

    mc.Set("Key1", "Value1", DateTime.Now.AddMinutes(10));
    object obj = mc.Get("Key1");
    Console.WriteLine(obj);
    Console.ReadKey();
}
Cluster Issues
How Memcache stores data
First, run the key through a hashing algorithm to get a hash value, divide that value by the number of Memcache servers, and take the remainder; the remainder determines which server stores the data.
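The remainder-based placement described above can be sketched as follows. This is a minimal illustration in Python, not the client library's actual code, and the server addresses are made up:

```python
import hashlib

def pick_server(key, servers):
    """Map a key to a server by hash-then-remainder (illustration only)."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)  # hash the key
    return servers[h % len(servers)]                    # remainder picks the server

servers = ["127.0.0.1:11211", "10.0.9.20:11211", "10.0.9.21:11211"]
print(pick_server("Key1", servers))
```

The same key always lands on the same server, which is exactly what makes adding or removing a server disruptive, as the next section explains.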
Consistent Hashing
If a Memcache server is added later, the key-to-server mapping computed before will change, and lookups for previously stored values will miss. At this point we can apply consistent hashing: each server is mapped to a set of points on a hash ring, so when a server is added only the keys nearest its points are remapped. This greatly reduces the number of misses, but there is no way to eliminate them entirely.
Memory Storage Management
Memory allocation
Memcache divides its memory into chunks of different fixed sizes; when data arrives, it finds the smallest chunk that fits and stores the data there.
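The best-fit selection can be sketched like this. It is a simplified illustration of the slab idea, not memcached's real allocator; the base size, growth factor, and limit below are assumptions:

```python
def chunk_sizes(base=88, factor=1.25, limit=1024 * 1024):
    """Generate the ladder of chunk sizes, each ~25% larger than the last."""
    sizes, s = [], base
    while s <= limit:
        sizes.append(int(s))
        s *= factor
    return sizes

def pick_chunk(item_size, sizes):
    """Return the smallest chunk size that can hold the item."""
    for s in sizes:
        if item_size <= s:
            return s
    return None  # larger than the biggest chunk: not cacheable
```

The gap between an item's size and its chunk's size is wasted, which is the price paid for avoiding general-purpose allocation and fragmentation.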
Memory Usage
Memcache uses the CAS (Check-And-Set) protocol instead of locking to handle concurrent access.
When a client reads a value it also obtains a cas-id. When it writes back to the cache, the server first checks whether the client's cas-id matches the one currently stored on the server side (the cas-id from the last write). If they match, the modification is allowed; otherwise it is rejected. This is essentially the same principle as the optimistic version control we are all familiar with.
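The check described above can be modeled with a toy in-memory store. This is a sketch of the server-side logic only, with invented names, not the memcached protocol implementation:

```python
class CasStore:
    """Toy check-and-set store: writes succeed only with a current cas_id."""
    def __init__(self):
        self.data = {}  # key -> (value, cas_id)

    def gets(self, key):
        # Return (value, cas_id) or None, like the protocol's "gets".
        return self.data.get(key)

    def cas(self, key, value, cas_id):
        cur = self.data.get(key)
        if cur is not None and cur[1] != cas_id:
            return False  # someone else wrote in between: reject
        new_id = cur[1] + 1 if cur else 1
        self.data[key] = (value, new_id)
        return True
```

A client that read an old cas-id gets its write rejected and must re-read, exactly as in optimistic concurrency control.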
Memory Recycling
Lazy Expiration (lazy detection)
Data is only checked for expiration when a client fetches it by key; the maximum expiration time is 30 days.
LRU (Least Recently Used)
Algorithm: evict the least recently used items. The order of preference when space is needed is: free memory, then expired memory, then the least recently used items.
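The final, least-recently-used step can be sketched with an ordered dictionary. This is a minimal LRU cache for illustration, not memcached's eviction code:

```python
from collections import OrderedDict

class LruCache:
    """Toy LRU cache: reads refresh an item, inserts past capacity evict."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)  # mark as recently used
        return self.items[key]

    def set(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used
```

Reading an item moves it to the "recent" end of the order, so the item at the other end is always the eviction candidate.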
The difference between Memcache and Redis
Both are distributed caches, and both officially ship only Linux versions (no official Windows build). Both can run as clusters, but Memcache's clustering is configured on the client side, while Redis's is configured on the server side. Memcache offers a single storage type, whereas Redis offers a much richer variety of types. Both provide drivers for many languages.
Summary
Both the Memcache and Redis servers embody the idea of distributed caching, which allows our web applications to serve more users efficiently.
Distributed Cache Memcache and Redis