Implementing Distributed Cache Synchronization under .NET


Not long ago I wrote an article about some problems of distributed caching under .NET, combining it with the implementation approach in DNT and offering some views of my own. Since then I have picked up a few new things, so I am writing this follow-up as the second part of the distributed cache series.

When scaling, the choice is essentially between Scale-Up and Scale-Out. Microsoft's view on this is that an application's cache should be read and written within the application's own physical boundary rather than placed elsewhere: otherwise object serialization, transmission, and deserialization, network connections, and cross-process overhead all come into play, and none of these can be ignored on a high-performance website. With these factors in mind, the recommendation is not to deploy one application across multiple servers, but to split the application into several smaller applications and deploy those on different servers (the original article includes a diagram illustrating this relationship). Each application can then manage its own cache data independently, although some data, such as user data, is still shared among them.
For the cache synchronization problem itself, two solutions are worth examining: one from MS and one from Peter. Let's look at Peter's first. Its main idea is to wrap each cache entry in a CacheControlItem package before broadcasting it. The code can be downloaded from the original source.

Each machine maintains a list of the member servers in the web farm. When a new server comes online and finds it is not in this list, it notifies the other servers to add it. With that in place, when server A needs to insert a new cache value, it inserts the item into its own cache, uses it to initialize a CacheControlItem, specifies the cache policy (priority, time to live), sets the action to Add, then serializes the object and sends it over the web to every member server. Each receiving server deserializes the object and, following its action command, adds it to its own cache, which completes the synchronization. Removing a cached object works the same way, except that the object itself need not be transmitted: the wrapper's action is simply set to Delete. To improve performance, a dedicated asynchronous listening thread can respond to cache notification requests. Overall, though, the efficiency of this method is relatively low: when the data volume is large, a great deal of cached data may flow across the network purely for synchronization.
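A minimal sketch of the wrapper described above might look like the following. All type and member names here are illustrative assumptions based on the description, not Peter's actual code; only the shape (key + action + payload + cache policy, all serializable) comes from the text.

```csharp
using System;

// Hypothetical sketch of the broadcast wrapper described above.
public enum CacheAction { Add, Delete }

[Serializable]
public class CacheControlItem
{
    public string Key;          // cache key being synchronized
    public CacheAction Action;  // Add: insert payload; Delete: just remove the key
    public object Value;        // the cached object; null when Action is Delete
    public int Priority;        // cache priority carried along with the item
    public TimeSpan TimeToLive; // how long the entry should survive

    // Convenience factory for the removal case: no payload travels over the wire.
    public static CacheControlItem ForDelete(string key)
    {
        return new CacheControlItem { Key = key, Action = CacheAction.Delete };
    }
}
```

Server A would serialize such an item and post it to each member server, which deserializes it and applies the action to its local cache.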

Now let's look at how MS does it. The idea is similar, but instead of synchronizing cache data between the farm's member servers, it only ensures that no machine reads stale cache data. When a cache entry becomes invalid (its dependent data is updated), a maintained list of servers to be notified is walked and a WebService on each server is called in turn; if that key is cached on the server, it is invalidated there.
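The notification pass this scheme performs can be sketched roughly as follows. The delegate stands in for the per-server WebService call, and all names are illustrative assumptions, not Microsoft's actual API.

```csharp
using System;
using System.Collections.Generic;

// Rough sketch of the invalidation walk described above.
public class FarmInvalidator
{
    // Addresses of the farm members to notify (placeholders).
    public readonly List<string> Servers = new List<string>();

    // When dependent data changes, tell every server to drop the key.
    // Each server removes the entry only if it actually caches that key;
    // no cache data is transmitted, only the key.
    public int Invalidate(string key, Action<string, string> notify)
    {
        int notified = 0;
        foreach (string server in Servers)
        {
            notify(server, key);
            notified++;
        }
        return notified;
    }
}
```

Because only keys travel over the wire, this avoids the heavy data flow of the broadcast approach, at the cost of a cache miss on every other server after invalidation.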

Both authors write these things up in a way that is friendly to beginners: you can follow the ins and outs of the technique step by step and understand it clearly.

How fast is Memcached? If the approaches above do not satisfy you, try Memcached, which also runs on the Win32 platform; we mentioned it in the previous article. But how does it perform on the .NET platform? Is it as good as on the PHP platform? Let's run a simple test comparing .NET's built-in Cache with the Memcached approach to see the gap. The procedure: generate 10,000 string objects, insert them into the cache under generated key values, read them back, and record the total time spent. Server: memcached-1.2.1-win32; client: memcacheddotnet_clientlib-1.1.5. The server is simple to use: unpack it, then at the command line run "memcached -d install" to install the service and "memcached -d start" to start it. Run "memcached -h" for detailed usage help. The test environment was as follows:

Memcached server: Win2003 SP1, .NET Framework 2.0, Pentium D 3.4 GHz, 768 MB RAM, gigabit NIC.
Memcached client: Win2003 SP1, .NET Framework 2.0, T2060, 1 GB RAM (a Shenzhou notebook), gigabit NIC.
The two machines were connected by a direct cable.

.NET Cache standalone test: Pentium D 3.4 GHz, 768 MB RAM.

Test result: time to process 10,000 entries (five runs each):

Memcached
Set (seconds): 1.48   1.37   1.48   1.37   1.46
Get (seconds): 2.42   2.42   2.42   2.43   2.42

HttpRuntime.Cache
Set (seconds): 0.015  0.015  0.015  0.015  0.015
Get (seconds): 0.015  0.015  0.015  0.015  0.015

.NET built-in Cache test code (HttpRuntime.Cache):

protected void Page_Load(object sender, EventArgs e)
{
    int start = 200;
    int runs = 10000;

    string keyBase = "testKey";
    string obj = "This is a test of an object blah es, serialization does not seem to slow things down so much. the gzip compression is horrible performance, so we only use it for very large objects. I have not done any heavy benchmarking recently";

    long begin = DateTime.Now.Ticks;
    for (int i = start; i < start + runs; i++)
    {
        HttpRuntime.Cache.Add(keyBase + i, obj, null, System.Web.Caching.Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(1), System.Web.Caching.CacheItemPriority.Normal, null);
    }
    long end = DateTime.Now.Ticks;
    long time = end - begin;

    Response.Write(runs + " sets: " + new TimeSpan(time).ToString() + "<br/>");

    begin = DateTime.Now.Ticks;
    int hits = 0;
    int misses = 0;
    for (int i = start; i < start + runs; i++)
    {
        string str = (string)HttpRuntime.Cache.Get(keyBase + i);
        if (str != null)
            ++hits;
        else
            ++misses;
    }
    end = DateTime.Now.Ticks;
    time = end - begin;

    Response.Write(runs + " gets: " + new TimeSpan(time).ToString());
}
Memcached test code:

namespace Memcached.MemcachedBench
{
    using System;
    using System.Collections;

    using Memcached.ClientLibrary;

    public class MemcachedBench
    {
        [STAThread]
        public static void Main(string[] args)
        {
            int runs = 100;
            int start = 200;
            if (args.Length > 1)
            {
                runs = int.Parse(args[0]);
                start = int.Parse(args[1]);
            }

            string[] serverlist = { "140.192.34.72:11211", "140.192.34.73:11211" };

            // Initialize the connection pool for the memcached servers
            SockIOPool pool = SockIOPool.GetInstance();
            pool.SetServers(serverlist);

            pool.InitConnections = 3;
            pool.MinConnections = 3;
            pool.MaxConnections = 5;
            pool.SocketConnectTimeout = 1000;
            pool.SocketTimeout = 3000;

            pool.MaintenanceSleep = 30;
            pool.Failover = true;

            pool.Nagle = false;
            pool.Initialize();

            MemcachedClient mc = new MemcachedClient();
            mc.EnableCompression = false;

            string keyBase = "testKey";
            string obj = "This is a test of an object blah es, serialization does not seem to slow things down so much. the gzip compression is horrible performance, so we only use it for very large objects. I have not done any heavy benchmarking recently";

            long begin = DateTime.Now.Ticks;
            for (int i = start; i < start + runs; i++)
            {
                mc.Set(keyBase + i, obj);
            }
            long end = DateTime.Now.Ticks;
            long time = end - begin;

            Console.WriteLine(runs + " sets: " + new TimeSpan(time).ToString());

            begin = DateTime.Now.Ticks;
            int hits = 0;
            int misses = 0;
            for (int i = start; i < start + runs; i++)
            {
                string str = (string)mc.Get(keyBase + i);
                if (str != null)
                    ++hits;
                else
                    ++misses;
            }
            end = DateTime.Now.Ticks;
            time = end - begin;
            Console.WriteLine(runs + " gets: " + new TimeSpan(time).ToString());
            Console.WriteLine("Cache hits: " + hits.ToString());
            Console.WriteLine("Cache misses: " + misses.ToString());

            IDictionary stats = mc.Stats();
            foreach (string key1 in stats.Keys)
            {
                Console.WriteLine(key1);
                Hashtable values = (Hashtable)stats[key1];
                foreach (string key2 in values.Keys)
                {
                    Console.WriteLine(key2 + ": " + values[key2]);
                }
                Console.WriteLine();
            }
            SockIOPool.GetInstance().Shutdown();
        }
    }
}

Conclusion: this comparative test shows the built-in Cache to be about 130 times faster than Memcached, yet Memcached's overall speed is far from slow, and the two methods can be combined selectively in a project to good effect. In addition, Memcached can use a large amount of memory, and it can run as a cluster to avoid a single point of failure.
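The "combine both selectively" idea can be sketched as a two-level lookup: check the fast in-process cache first and fall back to the shared cache on a miss. The delegates below stand in for Memcached Get/Set calls; this is an illustrative sketch under that assumption, not the memcacheddotnet API.

```csharp
using System;
using System.Collections.Generic;

// Illustrative two-level cache: an in-process dictionary in front of a
// shared (e.g. Memcached) store. sharedGet/sharedSet are stand-ins for
// the remote cache client; all names here are assumptions.
public class TwoLevelCache
{
    private readonly Dictionary<string, object> local = new Dictionary<string, object>();
    private readonly Func<string, object> sharedGet;
    private readonly Action<string, object> sharedSet;

    public TwoLevelCache(Func<string, object> sharedGet, Action<string, object> sharedSet)
    {
        this.sharedGet = sharedGet;
        this.sharedSet = sharedSet;
    }

    public object Get(string key)
    {
        object value;
        if (local.TryGetValue(key, out value))
            return value;         // fast in-process hit (the 0.015 s case)
        value = sharedGet(key);   // slower cross-process lookup (the Memcached case)
        if (value != null)
            local[key] = value;   // keep a local copy for subsequent reads
        return value;
    }

    public void Set(string key, object value)
    {
        local[key] = value;       // write through to both levels
        sharedSet(key, value);
    }
}
```

Note that the local copies reintroduce exactly the staleness problem discussed earlier, so in practice they should be given short lifetimes or be tied to one of the invalidation schemes above.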

This article comes from a CSDN blog; when reproducing it, please credit the source: http://blog.csdn.net/canduecho/archive/2008/01/26/2067507.aspx
