English original: http://msdn.microsoft.com/zh-cn/library/cc511588(en-us).aspx
The Enterprise Library Caching Application Block lets developers incorporate a local cache into their applications. It supports an in-memory cache and, optionally, a backing store that can be either a database or isolated storage. The application block can be used without modification, and it provides all the functionality needed to retrieve, add, and remove cached data. Configurable expiration and scavenging policies are also part of the application block. When building an enterprise application, architects and developers face many challenges; caching can help them overcome several of these, including the following:
- Performance. Caching improves application performance by storing relevant data as close to the data consumer as possible, which avoids repeated data creation, processing, and transportation.
- Scalability. Storing information in a cache saves resources and increases scalability as application demands grow.
- Availability. Storing data in a cache can allow an application to survive system failures such as network latency, Web service problems, and hardware failures.
Common scenarios: the Caching Application Block is suitable when any of the following conditions apply:
- Static data or data that changes infrequently must be accessed repeatedly.
- The data is costly to create, access, or transport.
- The data must always be available, even when its source, such as a server, is unavailable.
You can use the Caching Application Block in the following application types:
- Windows Forms
- Console Application
- Windows Services
- COM+ server
- ASP.NET Web application or Web service, if you need features that are not included in the ASP.NET cache
The Caching Application Block is deployed within a single application domain. Each application domain can have one or more caches, either with or without a backing store. Caches cannot be shared across application domains. The Caching Application Block is optimized for performance and is both thread safe and exception safe. It can be extended to support custom expiration policies and backing stores.
Sample application code
The following code shows how to add an entry to the cache and remove an entry from the cache. It creates an object of type Product and then adds it to the cache, together with a scavenging priority of CacheItemPriority.Normal, a null refresh action (the entry is not refreshed when it expires), and a sliding expiration of 5 minutes from the time the entry was last accessed.
ICacheManager productsCache = CacheFactory.GetCacheManager();
string id = "ProductOneId"; string name = "ProductXYName"; int price = 50;
Product product = new Product(id, name, price);
productsCache.Add(product.ProductID, product, CacheItemPriority.Normal, null, new SlidingTime(TimeSpan.FromMinutes(5)));
// Retrieve the item.
product = (Product)productsCache.GetData(id);
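The description above also mentions removing an entry from the cache, which the sample does not show. A minimal addition, assuming the productsCache and id variables from the sample above:
// Remove the entry from the cache.
productsCache.Remove(id);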
Highlights of the Caching Application Block
The Enterprise Library Caching Application Block includes the following features:
- You can use the graphical Enterprise Library Configuration tool to manage configuration.
- You can set up a persistent backing store, using either isolated storage or the Enterprise Library Data Access Application Block, whose state is synchronized with the in-memory cache.
- You can extend the application block by creating custom expiration policies and backing stores.
- You are assured that the application block performs in a thread-safe manner.
Deciding when to use the Caching Application Block
The Caching Application Block is designed for the most common data caching scenario, in which the application and the cache exist on the same system. The cache is local, meaning it is available only to the application that uses it. Within these guidelines, the application block is an ideal solution for the following scenarios:
- Scenarios that require caching to be used in a consistent way across different application environments. For example, by using the Caching Application Block, developers can write similar code to cache data in application components hosted in Internet Information Services (IIS), Enterprise Services, and smart client environments. In addition, the same cache configuration options are available in all of these environments.
- Scenarios that require a configurable, persistent backing store. The Caching Application Block supports both isolated storage and database backing stores. Developers can create additional backing-store providers and add them to the Caching Application Block configuration settings. The application block can also encrypt cached data before saving it to the backing store.
- Scenarios that need the cache configuration to change without modifying the application source code. Developers first write code that uses one or more named caches; system operators and developers can then configure these named caches with the Enterprise Library Configuration tool.
- Cache entries that require any of the following expiration settings: absolute time, sliding time, extended time format (for example, every evening at midnight), file dependency, or never expired (a code sketch after this list illustrates these). For more information about expiration settings, see the expiration design of the Caching Application Block.
- Developers who want to modify the source code of the Caching Application Block. For more information about how to do this, see the guidelines for modifying the application blocks.
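As an illustration of the expiration settings listed above, the following sketch constructs one expiration object of each kind. It assumes the expiration classes from the Microsoft.Practices.EnterpriseLibrary.Caching.Expirations namespace; the file path, cache key, and value are illustrative only.
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;

// One expiration object per expiration kind mentioned above.
ICacheItemExpiration absolute = new AbsoluteTime(DateTime.Now.AddHours(2));    // absolute time: a fixed point in time
ICacheItemExpiration sliding = new SlidingTime(TimeSpan.FromMinutes(5));       // sliding time: 5 minutes after last access
ICacheItemExpiration extended = new ExtendedFormatTime("0 0 * * *");           // extended format: every night at midnight
ICacheItemExpiration fileBased = new FileDependency(@"C:\data\products.xml");  // file dependency (illustrative path)
ICacheItemExpiration never = new NeverExpired();                               // never expires

// Any combination of expirations can be passed when an entry is added (illustrative key and value).
ICacheManager cache = CacheFactory.GetCacheManager();
cache.Add("ReportKey", "cached report data", CacheItemPriority.Normal, null, sliding, extended);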
In addition, the Caching Application Block offers the same consistent development model as the other Enterprise Library application blocks. It integrates seamlessly with the Data Access Application Block to provide backing-store functionality. Likewise, other application blocks, such as the Security Application Block, include caching capabilities provided by the Caching Application Block. Developers and operators configure the application block by using the Enterprise Library Configuration tool.
Alternatives to using the Caching Application Block
The Caching Application Block may not be appropriate when the cache must be used by multiple applications, for example, or when the cache cannot be synchronized across a Web farm. However, if you need to fundamentally change the behavior of the application block, you can do so by creating a custom class to replace the CacheManager class.
Developing applications using the application block
Entering cache configuration information
The following procedures explain how to configure the Caching Application Block. The properties associated with each node are displayed in the right panel. If you want to use the Data Access Application Block as a backing store, you must configure that application block before you configure the Caching Application Block.
Adding the application block
- Open the configuration file. For more information, see Configuring the Application Block.
- Right-click Application Configuration, point to New, and then click Caching Application Block.
- The configuration console automatically adds a CacheManager node with default settings.
Configuring the cache manager
- Click the Caching Application Block node.
- Optionally, modify the name in the DefaultCacheManager property. The default cache manager is used when the code does not specify a particular cache manager (a code sketch after this list shows the difference). Enter a new name or select one from the drop-down list. The default name is CacheManager.
- Click the CacheManager node (if you renamed the cache manager, the node will have that name).
- Optionally, set the ExpirationPollFrequencyInSeconds property. This controls how often the background scheduler checks for expired entries. The unit is seconds, the minimum is 1 second, and the default is 60 seconds.
- Set the MaximumElementsInCacheBeforeScavenging property. This is the maximum number of elements the cache can hold before scavenging begins. The default is 1000 elements.
- Optionally, rename the CacheManager node. The default name is CacheManager.
- Set the NumberToRemoveWhenScavenging property. This is the number of elements removed when scavenging begins. The default is 10 elements.
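As referenced in the DefaultCacheManager step above, here is a minimal sketch of how the default and a specific named cache manager are obtained in code; the name "ProductsCacheManager" is illustrative only.
// Returns the cache manager named by the DefaultCacheManager property.
ICacheManager defaultCache = CacheFactory.GetCacheManager();
// Returns a specific named cache manager (illustrative name).
ICacheManager productsCache = CacheFactory.GetCacheManager("ProductsCacheManager");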
By default, cache entries are stored only in memory, and the backing store is given the value NullBackingStore. You can configure the Caching Application Block to use database cache storage, isolated storage, or custom storage. The database cache store uses the Data Access Application Block.
Configuring the Caching Application Block for database cache storage
- Right-click the CacheManager node (or the name you gave the cache manager), point to New, and then click Database Cache Storage.
- The configuration console automatically adds the Data Access Application Block. For information about configuring the Data Access Application Block, see its documentation.
- Click the Database Cache Storage node.
- Set the DatabaseInstance property. This is the name of the database connection string; it must correspond to the name of a connection string in the Data Access Application Block configuration. You can enter a name or select one from the drop-down list.
- Optionally, rename the Database Cache Storage node by setting the Name property.
- Set the PartitionName property. This identifies the portion of the database that the cache manager will use.
Configuring the Caching Application Block for isolated storage
- Right-click the CacheManager node (or the name you gave the cache manager), point to New, and then click Isolated Storage.
- If you want to encrypt the information saved in isolated storage, right-click Isolated Storage, point to New, and then click Symmetric Storage Encryption. The configuration console automatically adds the Cryptography Application Block. For more information about configuring the Cryptography Application Block, see its documentation.
- Optionally, rename the Isolated Storage node by setting the Name property.
- Set the PartitionName property. This identifies the isolated storage area that the cache manager will use.
Configuring the Caching Application Block for custom cache storage
- Right-click the CacheManager node (or the name you gave the cache manager), point to New, and then click Custom Cache Storage.
- In the Attributes property section of the right panel, click the ellipsis button (...).
- In the EditableKeyValue Collection Editor dialog box, click Add to add a new name/value pair.
- In the right panel of the EditableKeyValue Collection Editor dialog box, enter the key name and the property value.
- Add more name/value pairs as appropriate, and then click OK.
- Optionally, in the Name property section of the configuration console's right panel, change the name of the custom cache store. The default name is CacheStorage.
- In the Type property section of the right panel, click the ellipsis button. If the type you want is not contained in an assembly in the displayed folder, click Load Assembly in the Type Selector dialog box to find the assembly that contains the type.
To add another cache manager instance, right-click the CacheManagers node, point to New, click CacheManager, and then repeat the preceding steps. There can be only one default cache manager, and each cache manager must have a unique name. The appropriate configuration settings for the Caching Application Block depend on the application's cache usage pattern and its system environment, such as the amount of memory available. For example, if an application adds entries to the cache much faster than scavenging removes them (a configuration setting), the cache will continue to grow, and over time this leads to low-memory conditions. Use the application block's performance counters to help tune the configuration settings for each application.
Adding application code
The Caching Application Block is designed to support the most common scenarios for storing data in a cache. When adding application code, choose the key scenario that most closely matches your situation. Use the code that accompanies the scenario as-is, or modify it as needed.
Preparing the application
- Add a reference to the Caching Application Block. In Visual Studio, right-click the project in Solution Explorer, and then click Add Reference. Click the Browse tab, locate the Microsoft.Practices.EnterpriseLibrary.Caching.dll assembly, select it, and then click OK to add the reference.
- Use the same procedure to set a reference to the Enterprise Library common assembly, Microsoft.Practices.EnterpriseLibrary.Common.dll, and a reference to the ObjectBuilder assembly, Microsoft.Practices.EnterpriseLibrary.ObjectBuilder2.dll.
- If you are using a database backing store, also add references to Microsoft.Practices.EnterpriseLibrary.Caching.Database.dll and Microsoft.Practices.EnterpriseLibrary.Data.dll.
- If you use the Cryptography Application Block to encrypt the data in the cache, also add references to Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll and Microsoft.Practices.EnterpriseLibrary.Caching.Cryptography.dll.
- Optionally, add the following using statement (C#) to the top of your source code files so that you can refer to Caching Application Block elements without fully qualifying the references.
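The using statement referred to in the last step is not reproduced above; presumably it is the Caching namespace import (with the Expirations namespace added if you construct expiration objects directly):
// Root namespace of the Caching Application Block (inferred from the assembly name above).
using Microsoft.Practices.EnterpriseLibrary.Caching;
// Needed only if you construct expiration objects such as SlidingTime or AbsoluteTime directly.
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;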
Next, add the application code. Creating code that uses the Caching Application Block typically involves two steps:
- Create a CacheManager object.
- Call the appropriate methods.
Each key scenario demonstrates how to incorporate these steps into the application.
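A minimal sketch of the two steps, assuming a default cache manager has been configured; the key and value are illustrative only.
// Step 1: create (obtain) the CacheManager object.
ICacheManager cache = CacheFactory.GetCacheManager();

// Step 2: call the appropriate methods.
cache.Add("CustomerKey", "some cached value");    // add with default priority and no expiration
object value = cache.GetData("CustomerKey");      // returns null if the key is not present
bool present = cache.Contains("CustomerKey");     // check for a key without retrieving it
cache.Remove("CustomerKey");                      // remove a single entry
cache.Flush();                                    // remove all entries from this cache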
Selecting a backing store
Each cache manager can be configured to keep data only in memory, which means it uses a null backing store, or to keep data both in memory and in persistent storage. The type of persistent storage is specified when the backing store is configured. A backing store allows cached data to survive application restarts. Out of the box, the Caching Application Block supports two types of persistent backing stores, each suited to particular situations:
- Isolated Storage
- Database Cache Storage
Developers can extend the Caching Application Block to support other backing-store types. For more information on this topic, see the documentation on adding a new backing store.
Using the null backing store
The null backing store is the default choice when you configure the Caching Application Block. It does not persist cached entries, which means the cached data is stored only in memory and not in persistent storage. The null backing store is appropriate when you want cached entries to be reloaded from the original data source after the application restarts. It can be used with all supported application types; for a list of these types, see the introduction to the Caching Application Block.
Using the isolated storage backing store
Isolated storage is appropriate for the following scenarios:
- Persistent storage is required and the number of users is small.
- The cost of using a database is prohibitive.
- No database is available.
For more information about when to use isolated storage, see Scenarios for Isolated Storage on MSDN. When you configure isolated storage, the backing store is partitioned by cache instance name, user name, assembly, and application domain. Isolated storage is suitable for smart clients and for server applications in which each application domain has its own cache. Also note that because isolated storage is always segregated by user, server applications must impersonate the user making the request.
Using the Data Access Application Block backing store
Using the Data Access Application Block backing store allows cached data to be stored in a database. The Caching Application Block includes a script that creates the required database schema for SQL Server, and the application block has been tested against SQL Server. Developers can use other database types as backing stores, but they must modify the application block's source code. Each database type must have a database provider for the Data Access Application Block and a compatible schema. The Data Access Application Block backing-store option is suitable for smart clients, for server applications in which each application domain has its own cache, and for situations where the database is accessible. Each CacheManager running in a single application domain must use a different database partition; a partition is defined by the combination of the application name and the cache instance name. The database can run on the same server as the application that uses the cache or on a different server. The number of caching applications the database can support is limited only by the database's storage capacity.
Server scenarios
Keep in mind that a single cache manager cannot be shared across application domains. A server application deployed on multiple computers has a separate copy of the in-memory cache on each computer; likewise, multiple processes running on the same computer, including Enterprise Services components that run in their own process and use the Caching Application Block, each have their own copy of the in-memory cache. Different applications must not use the same Data Access Application Block backing-store instance and partition; configuring different applications that use the Caching Application Block to share the same database instance and partition produces unpredictable results and is not recommended. When the same application runs in multiple processes (for example, when the application is deployed on multiple computers in a Web farm), you can configure the Caching Application Block in one of the following three ways:
- All application instances use the same database instance, but each application instance uses a different database partition. For more information, see Scenario One, later in this section.
- All application instances use the same database instance and the same database partition, and all cache managers can read from and write to the cache. For more information, see Scenario Two, later in this section.
- All application instances use the same database instance and the same database partition, but only one cache manager can write to the cache. All cache managers can read from the cache. For more information, see Scenario Three, later in this section.
Scenario One: Partitioned caches
In Scenario One, all application instances use the same database instance, but each application instance uses a different database partition. In this scenario, each cache manager operates independently. Although they share the same backing-store database instance, each cache manager persists its cached data to a different partition, so each application instance effectively has its own cache. When an application restarts, each cache manager loads its data from its own partition in the backing store. If the application preloads the cache, each deployed application instance retrieves the data from the original data source, and the preloaded data consumes backing-store space for every deployed instance. This means that, with respect to caching, deploying the same application to multiple processes is no more efficient than deploying different applications. Even when the same application is deployed on multiple servers and the application block on each server is configured identically (for example, all use the same expiration policies), there is no guarantee that the data in each backing-store partition will be the same. The data in a backing-store partition replicates the in-memory cache of the cache manager configured to use that partition. Because the contents of the in-memory cache vary with how each application instance uses it (application requests may be routed to different servers), the in-memory caches on different servers may differ, and therefore the contents of the backing-store partitions may differ as well. This means that even if all application instances are shut down and restarted at the same time, there is no guarantee that, after each in-memory cache is initialized from the backing store, the caches will contain the same data.
Scenario Two: Shared partition
In Scenario Two, all application instances use the same database instance and the same database partition, and all cache managers read from and write to the cache. In this scenario, each application instance operates on its own in-memory cache. When an application creates a cache manager, the cache manager loads the data from the backing store into the in-memory cache. This means that if each application instance creates its cache manager at startup, and all instances start at the same time, each in-memory cache loads the same data. Because the applications share the same partition, additional application instances require no extra space in the backing store. The time when the cache manager is created is the only time data is loaded from the backing store into the in-memory cache; after that, the contents of the in-memory cache are determined by how the application instance uses the cache. Application instances can use the cache differently because requests may be routed to different servers, so different running instances of the application can have in-memory caches with different contents. As an application adds and removes entries, the contents of its in-memory cache change; the contents also change when the cache manager removes expired entries or scavenges entries. As the in-memory cache changes, the cache manager updates the backing store to reflect these changes, but the backing store does not notify the other cache managers when its contents change. Therefore, when one application instance changes the contents of the backing store, the other instances end up with in-memory caches that no longer match the backing-store data. This also means that after an application restarts, its in-memory cache can have contents different from those it had before the restart. When an entry expires, the application can be notified through an event provided by the cache manager; the application can use this event to refresh the cached data from the original data source (a sketch of such a refresh callback appears after this paragraph). When the application adds the refreshed entry back to the cache, the cache manager also updates the backing store with that data. If the application is deployed on more than one computer, each application instance receives the event and then issues a request to the original data source for the same entry. These multiple requests can have a significant negative impact on the performance of the application and of the original data source. Therefore, using expiration notifications to refresh expired cache entries is not recommended in this scenario.
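Scenario Two mentions the notification the cache manager raises when an entry expires; below is a minimal sketch of such a refresh callback, assuming the Caching Application Block's ICacheItemRefreshAction interface. The class name and the reload logic are illustrative only.
using System;
using Microsoft.Practices.EnterpriseLibrary.Caching;

// Refresh actions must be serializable because they are stored together with the cache item.
[Serializable]
public class ProductRefreshAction : ICacheItemRefreshAction
{
    // Called by the cache manager after the item has been removed from the cache.
    public void Refresh(string removedKey, object expiredValue, CacheItemRemovedReason removalReason)
    {
        if (removalReason == CacheItemRemovedReason.Expired)
        {
            // Illustrative only: reload the entry from the original data source and re-add it here.
            // As the scenario above notes, in a Web farm every application instance receives this
            // callback, so refreshing from the source on expiration is not recommended there.
        }
    }
}

// Usage (illustrative): pass the action when adding the entry.
// productsCache.Add(id, product, CacheItemPriority.Normal,
//     new ProductRefreshAction(), new SlidingTime(TimeSpan.FromMinutes(5)));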
Scenario Three: Single writer
In Scenario Three, all application instances use the same database instance and the same database partition, but only one cache manager can write to the cache; all cache managers can read from it. In this scenario, only one application instance writes to the cache, and all other application instances only read from it. The application instance that can write to the cache acts as the host, and the host's in-memory cache contains the same data as the backing store. The in-memory cache in each application instance is populated with data from the backing store when its cache manager is created. Application instances that only read from the cache effectively get a snapshot of the data, because they have no ability to refresh their caches, and their caches shrink as entries expire or are scavenged.
Reposted from: Enterprise Library 4.0 Caching Application Block