Using Session and Application Objects in ASP.NET (cont.)

Figure 3 Caching

The amount of data you can keep at each level differs, but the right dose is determined on a per-application basis.
The time needed to retrieve data also differs from layer to layer. Session is, in most cases, an in-process, in-memory object; nothing could be faster. Keeping session state lean is critical because it is duplicated for each connected user. For quick access to data that can be shared between users, nothing is better than Cache or Application. The Cache is faster and provides automatic decay and prioritization, so relatively large amounts of frequently used static data can be stored there effectively.
Disk files serve as an emergency copy of data. Use them when you don't need, or can't afford, to keep all of the data in memory, but going to the database is too costly. Finally, DBMS views are just virtual tables that represent the data of one or more tables in an alternative way. Views are normally used for read-only data, but, under certain conditions, they can also be updated.
Views can also be used as a security mechanism to restrict the data that a certain user can access. For example, some data can be made available to users for query and/or update purposes while the rest of the table remains invisible. Table views can also constitute an intermediate storage for preprocessed or post-processed data. Accessing a view has the same effect for the application as accessing a table, but it doesn't cause preprocessing to occur or place any locks on the physical table.

XML Server-Side Data Islands
Caching is particularly useful when you have a large amount of data to load. However, when the amount of data is really huge, any technique, whether on the client or the server, can hardly be optimal. When you have one million records to fetch, you're out of luck. In such situations, you can reduce the impact of the data bulk by using a layered architecture for caching, bringing the concept of client-side data islands to the server. An XML data island is a block of XML that is embedded in HTML and can be retrieved through the page's DOM. Data islands are good for storing read-only information on the client, saving round-trips to the server.
Used on the server, an XML data island becomes a persistent bulk of information that you can store in memory or, for scalability, on disk. But how do you read it back? Typically, in .NET you would use the DataSet's XML facilities to read and write it. For lots of data (say, one million records), caching it this way is not effective if you don't need all of the records in memory, and keeping the records in a single file makes things heavier for the system. What about splitting the records into different XML files, organized like those in Figure 4? This expands on the XML disk files shown in Figure 3.


Figure 4 Dividing Records for Performance

You can build up an extensible tree of XML files, each representing a page of database records. Each time you need a block of non-cached records, you fetch them from the database and add them to a new or existing XML data island. You would use a special naming convention to distinguish files on a per-session basis, for example, by appending a progressive index to the session ID. An index file can help you locate the right data island in which a given piece of data is cached. For really huge bulks of data, this minimizes the processing on all tiers. However, with one million records to manage there is no perfect tool or approach.
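To make the naming convention concrete, here is a minimal sketch of what such a lookup might look like, assuming a Web Forms code-behind with System.IO and System.Data imported; the LoadPageFromDatabase helper is hypothetical and stands for whatever query logic fetches one page of records.

private DataSet GetDataIsland(int pageIndex)
{
    // Per-session naming convention: <SessionID>_<pageIndex>.xml
    string file = Server.MapPath(Session.SessionID + "_" + pageIndex + ".xml");

    DataSet ds = new DataSet();
    if (File.Exists(file))
    {
        // The page is already cached as a server-side data island
        ds.ReadXml(file);
    }
    else
    {
        // Fetch the missing block of records and persist it for later requests
        ds = LoadPageFromDatabase(pageIndex);
        ds.WriteXml(file);
    }
    return ds;
}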

Automatic Cache Bubble-up
Once you have a layered caching system, how you move data from one tier to the next is up to you. However, ASP.NET provides a facility that can involve both a disk file and the Cache object. The Cache object works like an application-wide repository for data and objects, yet it looks quite different from the plain old Application object. For one thing, it is thread-safe and does not require locks on the repository prior to reading or writing.
Some of the items stored in the Cache can be bound to the timestamp of one or more files or directories, as well as to an array of other cached items. When any of these dependencies change, the cached item becomes obsolete and is removed from the cache. By using a proper try/catch block you can catch the empty item and refresh the cache:

// Rebuild the DataSet from its per-session disk file and cache it with a
// dependency on that file, so the entry is invalidated when the file changes
string strFile;
strFile = Server.MapPath(Session.SessionID + ".xml");
CacheDependency fd = new CacheDependency(strFile);
DataSet ds = DeserializedDataSource();
Cache.Insert("myDataSet", ds, fd);

To help the scavenging routines of the Cache object, you can assign some of your Cache items a priority, and even a decay factor that lowers the priority of keys that have limited use. When working with the Cache object, you should never assume that an item is there when you need it; always be ready to handle an exception due to null or invalid values. If your application needs to be notified of an item's removal, register for the cache's OnRemove event by creating an instance of the CacheItemRemovedCallback delegate and passing it to the Cache's Insert or Add method.

CacheItemRemovedCallback onRemove = new CacheItemRemovedCallback(DoSomething);

The signature of the event handler looks like this:

void DoSomething(string key, object value, CacheItemRemovedReason reason)
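Putting these pieces together, a minimal, illustrative sketch of caching a DataSet with a file dependency, a priority, and a removal callback might read as follows; DeserializedDataSource is the same hypothetical helper used above, and the key and file names are only examples.

void CacheDataSet()
{
    string strFile = Server.MapPath(Session.SessionID + ".xml");
    CacheDependency fd = new CacheDependency(strFile);
    CacheItemRemovedCallback onRemove = new CacheItemRemovedCallback(DoSomething);

    DataSet ds = DeserializedDataSource();
    // Full overload: dependency, expirations, priority, and removal callback
    Cache.Insert("myDataSet", ds, fd,
        Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
        CacheItemPriority.Normal, onRemove);
}

void DoSomething(string key, object value, CacheItemRemovedReason reason)
{
    // The item is gone (file changed, expiration, or memory pressure);
    // the next request can rebuild it from the disk file.
}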

From the DataSet to XML
When stored in memory, the DataSet is represented through a custom binary structure, like any .NET class. Each and every data row is bound to two arrays: one for the current value and one for the original value. The DataSet is not kept in memory as XML, but XML is used for output when the DataSet is remoted across app domains and networks or serialized to disk. The XML representation of a DataSet object is based on DiffGrams, a subset of the SQL Server 2000 UpdateGrams. It is an optimized XML schema that describes the changes the object has undergone since it was created or since the last time changes were committed.
If the DataSet, or any contained DataTable and DataRow object, has no changes pending, then the XML representation is a description of the child tables. If there are changes pending, then the remoted and serialized XML representation of the DataSet is the DiffGram. The structure of a DiffGram is shown in Figure 5. It is based on two nodes, <before> and <after>. A <before> node describes the original state of the record, while <after> exposes the contents of the modified record. An empty <before> node means the record has been added, and an empty <after> node means the record has been deleted.
The method that returns the current XML format is GetXml, which returns a string. WriteXml saves the content to a stream, while ReadXml rebuilds a living instance of the DataSet object. If you want to save a DataSet to XML, use WriteXml directly instead of getting the text through GetXml and then saving it with file classes. When using WriteXml and ReadXml, you can control how the data is written and read; you can choose between the DiffGram and the basic format and decide whether the schema information should be saved or not.
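For instance, a DataSet could be round-tripped to disk along these lines; the file names are illustrative and ds is assumed to be an existing, filled DataSet.

// Basic format, with schema information included
ds.WriteXml(Server.MapPath("orders.xml"), XmlWriteMode.WriteSchema);

// Rebuild a living DataSet instance from the file
DataSet ds2 = new DataSet();
ds2.ReadXml(Server.MapPath("orders.xml"), XmlReadMode.ReadSchema);

// Alternatively, persist pending changes using the DiffGram format
// (the reading DataSet must already contain the schema)
ds.WriteXml(Server.MapPath("orders.diffgram.xml"), XmlWriteMode.DiffGram);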

Working with Paged Data Sources
There is a subtler reason that makes caching vital in ASP.NET. ASP.NET relies heavily on postback events: when a page posts back to the server for an update, it must rebuild a consistent state. Each control saves a portion of its internal state to the page's view state bag, and this information travels back and forth as part of the HTML. ASP.NET can restore this information when the postback event is processed on the Web server. But what about the rest? Let's consider the DataGrid control.
The DataGrid gets its contents through the DataSource property. In most cases, this content is a DataTable. The grid control does not store this potentially large block of data to the page's view state bag, so you need to retrieve the DataTable each time a postback event fires and whenever a new grid page is requested for viewing. If you don't cache data, you're at risk: you repeatedly download the data, say hundreds of records, just to display a few. If data is cached, you significantly reduce this overhead. That said, custom paging is probably the optimal approach for improving the overall performance of pagination. I covered the DataGrid's custom paging in the April 2001 issue. Although that code is based on Beta 1, the key points still apply; I'll review some of them here.
To enable custom pagination, you must set both the AllowPaging and AllowCustomPaging properties to true. You can do that declaratively or programmatically. Next, you arrange your code for pagination as usual and define the proper event handler for PageIndexChanged. The difference between custom and default pagination for a DataGrid control is that when custom paging is enabled, the control assumes that all the elements currently stored in its Items collection, that is, the content of the object bound to the DataSource property, are part of the current page. It does not even attempt to extract a subset of records based on the page index and the page size. With custom paging, the programmer is responsible for providing the right content when a new page is requested. Once again, caching improves performance and scalability. The caching architecture is mostly application-specific, but I consider caching with custom pagination vital for a data-driven application.
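A minimal sketch of that setup, as a code-behind fragment, might look like the following; it assumes a DataGrid named grid declared in the page, a hypothetical BindCurrentPage helper that fetches and binds exactly one page of records, and that the PageIndexChanged handler is wired up declaratively or in InitializeComponent.

private void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        grid.AllowPaging = true;          // could also be set declaratively
        grid.AllowCustomPaging = true;
        grid.PageSize = 20;
        grid.VirtualItemCount = 1000;     // placeholder: total record count, so the pager renders correctly
        BindCurrentPage(0);
    }
}

private void Grid_PageIndexChanged(object source, DataGridPageChangedEventArgs e)
{
    grid.CurrentPageIndex = e.NewPageIndex;
    BindCurrentPage(e.NewPageIndex);      // hypothetical helper
}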

Data Readers
To gain scalability, I'd always consider caching first. However, there might be circumstances (such as highly volatile tables) in which project requirements lead you to consider alternative approaches. If you opt for getting data each time you need it, then you should use the DataReader classes instead. A DataReader class is filled and returned by command classes such as SqlCommand and OleDbCommand. DataReaders act like read-only, firehose cursors. They work connected and, to be lightweight, they never cache a single byte of data. DataReader classes are extremely lean and are ideal for reading small amounts of data. Starting with Beta 2, a DataReader object can be assigned to the DataSource property of a DataGrid, or of any data-bound control.
By combining DataReaders with the grid's custom pagination, and both with an appropriate query command that loads only the necessary portion of records for a given page, you can obtain a good mix that enhances scalability and performance. Figure 6 illustrates some C# ASP.NET code that uses custom pagination and data readers.
As mentioned earlier, a DataReader works connected, and while the reader is open, the attached connection is busy. Clearly, this is the price to pay for getting up-to-date rows while keeping the Web server's memory free. To avoid undermining the expected benefits, the connection must be released as soon as possible, and this can happen only if you code it explicitly. The procedure that performs data access ends as follows:

conn.Open();
dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
return dr;

You open the connection, execute the command, and return an open DataReader object. When the grid moves to a new page, the code looks like this:

grid.DataSource = CreateDataSource(grid.CurrentPageIndex);
grid.DataBind();
dr.Close();

Once the grid has been refreshed (DataBind does that), explicitly closing the reader is key, not only to preserve scalability but also to prevent the application's collapse. Under normal conditions, closing the DataReader does not guarantee that the connection will be closed, so do that explicitly through the connection's Close or Dispose method. You can synchronize the reader and the connection by assigning the reader a particular command behavior, like so:

dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);

In this way, the reader enables an internal flag that automatically closes the associated connection when the reader itself gets closed.
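Putting the fragments above together, a minimal sketch of such a data-access routine (not the actual Figure 6 code) might look like this; the connection string, query, and grid name are placeholders, and System.Data plus System.Data.SqlClient are assumed to be imported.

private SqlDataReader dr;   // kept as a field so it can be closed after binding

private SqlDataReader CreateDataSource(int pageIndex)
{
    SqlConnection conn = new SqlConnection("server=(local);database=Northwind;integrated security=SSPI");
    // In real code the query would load only the records for pageIndex
    SqlCommand cmd = new SqlCommand("SELECT TOP 20 * FROM Employees", conn);

    conn.Open();
    // CloseConnection ties the connection's lifetime to the reader's
    dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
    return dr;
}

private void BindCurrentPage()
{
    grid.DataSource = CreateDataSource(grid.CurrentPageIndex);
    grid.DataBind();
    dr.Close();   // also closes the connection, thanks to CloseConnection
}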

SQL Statements
The SQL language standard does not provide special support for pagination. Records can be retrieved only by condition, according to the values of their fields, not based on absolute or relative positions. Retrieving records by position (for example, the second group of records in a sorted table) can be simulated in various ways. For instance, you could use an existing or custom field that contains a regular series of values (such as 1-2-3-4) and guarantee that its content stays consistent across deletions and updates. Alternatively, you could use a stored procedure made of a sequence of SELECT statements that, through sorting and temporary tables, reduces the number of records returned to a particular subset. This is outlined in the pseudo-SQL below:

-- the first page*size records are, in reverse order, what you need
SELECT TOP page*size field_names INTO tmp
FROM table ORDER BY field_name DESC
-- only the top "size" records are, in reverse order,
-- copied into a temp table
SELECT TOP size field_names INTO tmp1 FROM tmp
-- the records are reversed and returned
SELECT field_names FROM tmp1 ORDER BY field_name

You could also consider T-SQL cursors for this, but normally server cursors are the option to choose only when you have no other option left. The previous SQL code could be optimized to do without temporary tables which, in a session-oriented scenario, could create serious management issues as you have to continuously create and destroy them while ensuring unique names.
More efficient SQL can be written if you drop the requirement of performing random access to a given page. If you allow only moving to the next or previous page, and assume that you know the key values at the boundaries of the current page, then the SQL code is simpler and faster.
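For example, moving to the next page could be reduced to a single keyed query built along these lines; the Employees table, employeeid key column, conn object, and lastKeyOfCurrentPage variable are hypothetical.

// Next page: the 20 records that follow the last key of the current page
string query =
    "SELECT TOP 20 * FROM Employees " +
    "WHERE employeeid > @lastKey ORDER BY employeeid";
SqlCommand cmd = new SqlCommand(query, conn);
cmd.Parameters.Add("@lastKey", SqlDbType.Int).Value = lastKeyOfCurrentPage;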

Conclusion
Caching was already a key technique in ASP, but it's even more important in ASP.NET, not just because ASP.NET provides better infrastructural support for it, but because of the architecture of the Web Forms model. A lot of natural postback events, along with a programming style that conveys a false sense of total statefulness, can lead to bad design choices like repeatedly reloading the whole DataSet just to show a refreshed page. To make design even trickier, many examples apply programming styles that are only safe in applications whose goal is not directly concerned with pagination or caching.
The take-home message is that you should always try to cache data on the server. The Session object has been significantly improved in ASP.NET and tuned to work in the most common scenarios. In addition, the Cache object provides you with a flexible, dynamic, and efficient caching mechanism. And remember, if you can't afford caching, custom paging is still a good way to improve your application's performance.

