Contents:
- Introduction
- Process Cache
- Communication Modes
- Speed Comparison
- Summary
Introduction
Readers have previously asked about the cache being empty on first access; a brief note on warm-up strategies:
- In general, small sites with low concurrency and little cached data simply let user requests populate the cache lazily, page by page.
- Large sites are deployed on multiple machines behind a load balancer. A common release strategy is to remove each machine from the load balancer's node pool while it is being published; once publishing finishes, the cache is warmed by requesting the pages manually or automatically (including precompilation) before the node is added back.
- Finally, there is the case of high concurrency combined with a large amount of cached data; that is the subject of this article and is described in detail below.
When the amount of cached data is large, warm-up becomes troublesome. For example, at my company a single in-memory cache exceeds 1 GB, and each warm-up takes several minutes. If that cache lives inside the application process, operations work becomes very inconvenient, and an unexpected application pool recycle is catastrophic for users. It is therefore necessary to extract the application's cached data into separate storage and decouple it from the application.
Solutions for high concurrency, including cache update policies, can be found in my previous posts.
Process Cache
As a site's architecture evolves, a stage arrives where distributed caches such as Memcached or Redis are introduced. Their advantages need little elaboration; the disadvantage is speed. "Slow" here is relative to a native in-memory cache: cross-machine communication and direct memory reads are not on the same order of magnitude, so for some high-concurrency data operations a distributed cache is not suitable.
Therefore we move the cached data out of the application process into a standalone process that provides a cache layer for the application. The caching business logic and concurrency handling are done in that standalone process, which the application talks to via inter-process communication. This not only solves the warm-up problem for large data sets, but also decouples part of the business logic from the application.
The standalone process can also serve external consumers, for example exposed as a WCF service to other subsystems.
The disadvantage is that cross-process reads are slightly slower than in-process reads.
Communication Modes
Several common ways for the standalone cache process and the application process to communicate:
NamedPipe
Named pipes are a relatively efficient inter-process communication mechanism and also support communication within a LAN.
Server side:

```csharp
// ServerPipeConnection comes from the named-pipe library in reference [1].
// The buffer sizes and timeout below are illustrative; the original values
// were lost in transcription.
var txt = File.ReadAllText("B.txt");
var pipeConnection = new ServerPipeConnection("MyPipe", 512, 512, 5000, false);
Console.WriteLine("listening...");
while (true)
{
    try
    {
        pipeConnection.Disconnect();
        pipeConnection.Connect();
        string request = pipeConnection.Read();
        if (!string.IsNullOrEmpty(request))
        {
            pipeConnection.Write(txt);
            if (request.ToLower() == "break")
                break;
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        break;
    }
}
pipeConnection.Dispose();
Console.Write("Press any key to exit.");
Console.Read();
```
Client side:

```csharp
// ClientPipeConnection is from the same library; "." means the local machine.
var clientConnection = new ClientPipeConnection("MyPipe", ".");
clientConnection.Connect();
clientConnection.Write("request");        // ask the server for the cached text
var response = clientConnection.Read();
clientConnection.Close();
```
WCF NamedPipe
WCF wraps the native named pipe, making it simpler and more convenient to use.
Server side:

```csharp
var host = new ServiceHost(typeof(CacheService));
host.AddServiceEndpoint(typeof(ICacheService), new NetNamedPipeBinding(), "net.pipe://localhost/cacheservice");
host.Open();
Console.WriteLine("service available");
Console.ReadLine();
host.Close();
```
Client side:

```csharp
var pipeFactory = new ChannelFactory<ICacheService>(new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/cacheservice"));
ICacheService pipeProxy = pipeFactory.CreateChannel();
var obj = pipeProxy.GetVal();
```
SharedMemory
Shared memory is the fastest way to communicate between processes: data does not need to be copied between them. One process creates a common region of memory that other processes can read and write.
Server side:

```csharp
// Requires System.IO.MemoryMappedFiles; maps the file a.txt under the name "CacheA".
var mmf = MemoryMappedFile.CreateFromFile(@"a.txt", FileMode.Open, "CacheA");
Console.ReadLine();
mmf.Dispose();
```
Client side:

```csharp
var mmf = MemoryMappedFile.OpenExisting("CacheA");
var accessor = mmf.CreateViewAccessor(0, 2000000);
var c = accessor.ReadChar(0);
accessor.Dispose();
mmf.Dispose();
```
WCF TCP Mode
With WCF over TCP, the cache service can also be consumed from other machines on the network.
Server side:

```csharp
var host = new ServiceHost(typeof(CacheService));
host.AddServiceEndpoint(typeof(ICacheService), new NetTcpBinding(), "net.tcp://192.168.0.115:8057/cacheservice/");
host.Open();
Console.WriteLine("service available");
Console.ReadLine();
host.Close();
```
Client side:

```csharp
var netTcpFactory = new ChannelFactory<ICacheService>(new NetTcpBinding(), new EndpointAddress("net.tcp://192.168.0.115:8057/cacheservice/"));
ICacheService tcpProxy = netTcpFactory.CreateChannel();
var obj = tcpProxy.GetVal();
```
Speed Comparison
The figures are the mean of 100 transmission tests each with 13 MB and 1 MB of text data, run on Windows 7 with an i5-3230 CPU.
The native named pipe is already very fast and well within an acceptable range; shared memory is faster still.
The test results show WCF's named pipe to be slower than WCF TCP, which is a bit of a surprise.
WCF TCP was bound to the intranet address:

```csharp
new ChannelFactory<ICacheService>(new NetTcpBinding(), new EndpointAddress("net.tcp://192.168.0.115:8057/cacheservice/"));
```

WCF TCP localhost was bound to 127.0.0.1:

```csharp
new ChannelFactory<ICacheService>(new NetTcpBinding(), new EndpointAddress("net.tcp://localhost:8057/cacheservice/"));
```
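The measurement method described above (the mean over repeated transmission runs) can be sketched with a simple Stopwatch harness. This is an illustrative sketch, not the original test code; the `Bench` class name and the proxy call in the usage comment are assumptions:

```csharp
using System;
using System.Diagnostics;

static class Bench
{
    // Runs the given operation `runs` times and returns the mean elapsed
    // time in milliseconds, matching the mean-of-100-runs method above.
    public static double MeanMilliseconds(Action operation, int runs)
    {
        var sw = new Stopwatch();
        double totalMs = 0;
        for (int i = 0; i < runs; i++)
        {
            sw.Restart();
            operation();           // the transport call being measured
            sw.Stop();
            totalMs += sw.Elapsed.TotalMilliseconds;
        }
        return totalMs / runs;
    }
}
// Usage (hypothetical): Bench.MeanMilliseconds(() => pipeProxy.GetVal(), 100)
```

The same harness can wrap each of the four transports so the comparison uses identical timing logic.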
Summary
In the development of large websites, caching is a topic that can never be avoided, and no single solution solves every problem.
Problems frequently encountered in cache development include: expiration policy (lazy expiration), cache updating (standalone update), multilevel caching, distributed caching (sharding), high availability (single point of failure), high concurrency (avalanche), hit ratio (penetration), cache eviction (LRU), and so on.
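As a concrete example of the last item, cache eviction, here is a minimal LRU sketch (illustrative only; the class name and API are assumptions, not the cache process described above). It pairs a dictionary for O(1) lookup with a linked list ordered by recency:

```csharp
using System;
using System.Collections.Generic;

public class LruCache<TKey, TValue>
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
        new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _order =
        new LinkedList<KeyValuePair<TKey, TValue>>();

    public LruCache(int capacity) { _capacity = capacity; }

    public bool TryGet(TKey key, out TValue value)
    {
        if (_map.TryGetValue(key, out var node))
        {
            _order.Remove(node);   // touched: move to the most-recently-used end
            _order.AddFirst(node);
            value = node.Value.Value;
            return true;
        }
        value = default(TValue);
        return false;
    }

    public void Set(TKey key, TValue value)
    {
        if (_map.TryGetValue(key, out var existing))
        {
            _order.Remove(existing);
            _map.Remove(key);
        }
        else if (_map.Count >= _capacity)
        {
            var lru = _order.Last; // full: evict the least-recently-used entry
            _order.RemoveLast();
            _map.Remove(lru.Value.Key);
        }
        var node = new LinkedListNode<KeyValuePair<TKey, TValue>>(
            new KeyValuePair<TKey, TValue>(key, value));
        _order.AddFirst(node);
        _map[key] = node;
    }
}
```

A production cache process would additionally need thread safety and expiration; this sketch shows only the eviction policy itself.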
The hierarchy of a multilevel caching scheme mostly looks like: browser -> CDN -> reverse proxy cache -> thread-level -> memory-level -> process-level -> file (static resources) -> distributed (Redis) -> DB.
Most of this has been covered in my earlier posts; interested readers can take a look.
Reference Resources
[1] http://www.codeproject.com/Articles/7176/Inter-Process-Communication-in-NET-Using-Named-Pip
[2] All those years we've been chasing: cache patterns (IV)