Ten Ways to Improve the Performance of Your ASP.NET Application

  I. Return Multiple Result Sets

Examine your data-access code for places that make multiple round-trips to the database. Each round-trip reduces the number of requests per second your application can serve. By returning multiple result sets in a single database request, you cut the total time spent communicating with the database, make your system more scalable, and reduce the work the database server does to handle requests.

If you are using dynamic SQL statements to return multiple result sets, I recommend you use stored procedures instead. Whether business logic belongs in stored procedures is somewhat controversial, but I think a stored procedure that can limit the size of the returned result set is a good thing: it reduces the data traveling over the network and avoids filtering the data at the logic layer.

Return strongly typed business objects with the ExecuteReader method of the SqlCommand object, then call the NextResult method to advance the reader to the next result set. Example 1 shows returning multiple ArrayLists of strongly typed objects. Returning only the data you actually need from the database also greatly reduces the memory your server consumes.
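
To make the pattern concrete, here is a minimal C# sketch of reading two result sets from a single round-trip. The stored procedure name (usp_GetForumData), the Post class, and the column layout are hypothetical stand-ins, not the original article's Example 1:

    using System;
    using System.Collections;
    using System.Data;
    using System.Data.SqlClient;

    public class Post
    {
        public int Id;
        public string Title;
    }

    public class ForumData
    {
        // Returns the posts from the first result set and fills 'authors'
        // from the second, all in one database round-trip.
        public static ArrayList GetPostsAndAuthors(string connectionString,
                                                   ArrayList authors)
        {
            ArrayList posts = new ArrayList();

            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("usp_GetForumData", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                conn.Open();

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    // First result set: the posts.
                    while (reader.Read())
                    {
                        Post p = new Post();
                        p.Id = reader.GetInt32(0);
                        p.Title = reader.GetString(1);
                        posts.Add(p);
                    }

                    // NextResult moves the reader to the second result set.
                    if (reader.NextResult())
                    {
                        while (reader.Read())
                            authors.Add(reader.GetString(0));
                    }
                }
            }
            return posts;
        }
    }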

  II. Page the Data

ASP.NET's DataGrid has a very useful feature: paging. When paging is enabled, the DataGrid displays only one page of data at a time and renders a paging navigation bar that lets you move from page to page.

But it has a drawback: you must bind all of the data to the DataGrid. That is, your data layer must return all of the rows, and the DataGrid then filters out and displays only the records needed for the current page. If a result set of 10,000 records is paged through the DataGrid at 25 rows per page, 9,975 records are thrown away on every request. Returning this large result set on every request has a significant impact on application performance.

A good solution is to write a paging stored procedure; Example 2 is such a procedure for the Orders table in the Northwind database. You pass in just two parameters, the current page number and the number of records per page, and the stored procedure returns the corresponding results.

On the server side, I wrote a custom paging control to handle the paging of the data. Here I used the first technique above, returning two result sets from one stored procedure: the total number of records and the requested page of data.

The total count returned depends on the query being executed; a WHERE clause, for example, can limit the size of the result set. Because the paging interface must compute the total number of pages from the record count, you must return the count of the filtered result set. For example, if there are 1,000,000 records in total but a WHERE clause filters them down to 1,000, the paging logic needs that count of 1,000 to know how many pages of data it can offer.
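
The calling pattern, under the same two-result-set convention, might look like the following minimal sketch; the procedure name usp_GetOrdersPaged and its parameter names are assumptions for illustration, not the article's actual Example 2:

    using System.Data;
    using System.Data.SqlClient;

    public class OrderPager
    {
        // Returns one page of orders; 'totalRecords' receives the filtered
        // count from the first result set so the caller can compute pages.
        public static DataTable GetPage(string connectionString,
                                        int pageIndex, int pageSize,
                                        out int totalRecords)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand("usp_GetOrdersPaged", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@PageIndex", pageIndex);
                cmd.Parameters.AddWithValue("@PageSize", pageSize);
                conn.Open();

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    // First result set: a single row with the record count.
                    reader.Read();
                    totalRecords = reader.GetInt32(0);

                    // Second result set: the rows for the requested page.
                    reader.NextResult();
                    DataTable page = new DataTable();
                    page.Load(reader);
                    return page;
                }
            }
        }
    }

The total number of pages then falls out as (totalRecords + pageSize - 1) / pageSize.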

  III. Connection Pooling

Setting up a TCP connection between your application and the database is expensive (and time-consuming), so Microsoft's developers made it possible to reuse connections through connection pooling. Rather than opening a new TCP connection for every request, a new connection is created only when no valid connection is available in the pool. When you close a connection, it is returned to the pool, where it stays connected to the database; this greatly reduces the number of TCP connections that have to be created.

Of course, you should watch out for connections you forget to close; close every connection as soon as you are done with it. And let me emphasize: no matter what anyone says about the GC (garbage collector) in the .NET Framework, you must always explicitly close your connection by calling Close or Dispose on the connection object when you have finished with it. Do not expect the CLR to close the connection at the time you imagine; the CLR will eventually destroy the object and close the connection, but you cannot be sure exactly when it will do so.

To get the most out of connection pooling, there are two rules. First, open the connection, process the data, and then close the connection. It is better to open and close the connection several times during a request, if you must, than to keep one connection open and pass it around between methods. Second, use the same connection string (and the same user identity, if you use integrated authentication). If you do not use the same connection string, for example a connection string customized for each logged-on user, you will not get the reuse that connection pooling provides; likewise, if you use integrated authentication with many different users, pooling becomes much less effective. The .NET CLR provides data performance counters that are very useful for tracking a program's performance characteristics, including connection pool behavior.
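
Both rules come together in the familiar "open late, close early" pattern, sketched minimally below; the connection string and query are placeholders:

    using System.Data.SqlClient;

    public class CustomerData
    {
        // One constant connection string means every call draws from the
        // same connection pool (the string here is a placeholder).
        private const string ConnString =
            "Server=(local);Database=Northwind;Integrated Security=SSPI;";

        public static int CountCustomers()
        {
            using (SqlConnection conn = new SqlConnection(ConnString))
            using (SqlCommand cmd = new SqlCommand(
                       "SELECT COUNT(*) FROM Customers", conn))
            {
                conn.Open();                    // open as late as possible
                return (int)cmd.ExecuteScalar();
            }                                   // Dispose returns the
                                                // connection to the pool
        }
    }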

Whenever your application connects to a resource on another machine, such as a database, you should pay attention to the time spent connecting to the resource, the time spent sending and receiving data, and the number of round-trips. Optimizing every process hop in your application is the starting point for improving its performance.

The application layer contains the logic that connects to the data layer, turns the transferred data into instances of the appropriate classes, and performs business processing. In Community Server, for example, this is where Forums and Threads collections are assembled and business logic such as permissions is applied; more importantly, the caching logic is done here.

  IV. The ASP.NET Cache API

The first thing to do before writing a line of application code is to design the application layer to make maximum use of the ASP.NET cache.

If your component runs inside an ASP.NET application, you simply need to reference System.Web.dll in your project. The cache can then be accessed through the HttpRuntime.Cache property (the same object is also reachable via Page.Cache and HttpContext.Cache).

There are several rules for caching data. First, data that is used frequently can be cached. Second, data that is accessed very often, or data that is accessed less often but has a long life cycle, is best cached. Third, and frequently overlooked: sometimes we cache too much data. Typically, on an x86 machine, if you try to cache more than about 800 MB of data you will hit an out-of-memory error, so the cache is limited. In other words, you should estimate the size of the cached data set and keep it bounded, or it will cause problems. In ASP.NET, an out-of-memory error is reported if the cache grows too large, especially when large DataSet objects are cached.

Here are a few important caching mechanisms you must understand. First, the cache implements a least-recently-used (LRU) algorithm, so when memory runs low it automatically evicts the least useful entries. Second, the cache supports forced eviction through expiration dependencies, which can be based on time, keys, or files; time is the most common. ASP.NET 2.0 adds a stronger condition, the database dependency: when the data in the database changes, the cached item is forcibly removed. For a deeper look at database cache dependencies, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN Magazine.
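
As a minimal sketch of the Cache API in use (the cache key and the LoadSettingsFromDatabase helper are hypothetical), a typical check-then-insert with an absolute expiration looks like this:

    using System;
    using System.Web;
    using System.Web.Caching;

    public class SiteSettings
    {
        public static object GetSettings()
        {
            // Try the cache first; fall back to the expensive load.
            object settings = HttpRuntime.Cache["SiteSettings"];
            if (settings == null)
            {
                settings = LoadSettingsFromDatabase();
                HttpRuntime.Cache.Insert(
                    "SiteSettings",
                    settings,
                    null,                          // no dependency
                    DateTime.Now.AddMinutes(10),   // absolute expiration
                    Cache.NoSlidingExpiration);
            }
            return settings;
        }

        private static object LoadSettingsFromDatabase()
        {
            // Placeholder for the real data-access call.
            return new object();
        }
    }
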
  V. Per-Request Caching

Earlier in the article I mentioned that small performance improvements in frequently executed code paths can add up to a large overall gain. Per-request caching is one of my favorite techniques for this.

While the Cache API is designed to hold data for a longer period of time, per-request caching simply holds data for the lifetime of a single request. A code path that is hit frequently during a request, but whose data needs to be fetched, applied, modified, or updated only once per request, is a good candidate. Let me give an example.

In the CS (Community Server) forum application, every server control on a page needs the personalization data that determines its skin, which style sheet to use, and other personal settings. Some of that data can be cached for a long time, but some cannot; the control's skin data, for instance, needs to be fetched only once per request and can then be reused for the rest of the request.

To implement per-request caching, use the ASP.NET HttpContext class. An HttpContext instance is created with every request and is accessible anywhere during the request through the HttpContext.Current property. The HttpContext class has an Items collection property; objects and data added to this collection are cached only for the duration of the request. Just as you use the Cache for frequently accessed, long-lived data, you can use HttpContext.Items for the underlying data every piece of code in the request needs. The logic behind it is simple: add a piece of data to HttpContext.Items, then read it back from there.
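
A minimal sketch of the pattern, with a hypothetical key and LoadSkin helper standing in for the real personalization lookup:

    using System.Web;

    public class SkinHelper
    {
        public static object GetSkin()
        {
            HttpContext ctx = HttpContext.Current;

            // Fetch at most once per request; every later call in the same
            // request gets the copy stashed in the Items collection.
            object skin = ctx.Items["RequestSkin"];
            if (skin == null)
            {
                skin = LoadSkin();
                ctx.Items["RequestSkin"] = skin;
            }
            return skin;   // discarded automatically when the request ends
        }

        private static object LoadSkin()
        {
            // Placeholder for the real personalization lookup.
            return new object();
        }
    }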

  VI. Background Processing

With the methods above, your application should be running fast, right? At some point, however, a request may have to perform a very time-consuming task, such as sending an email or checking the correctness of submitted data.

When we ported ASP.NET Forums 1.0 into CS, we found that submitting a new post was very slow. Each time a post was added, the application first checked that the post was not a duplicate, then ran it through the "BadWord" filter, checked its attached images, indexed the post, added it to the appropriate queues, validated its attachments, and finally sent email to the subscribers' mailboxes. Clearly, that is a lot of work.

It turned out that most of the time went into indexing and sending email. Indexing a post is a time-consuming operation, and sending subscription email requires connecting to an SMTP service and then sending one message to each subscriber, so the email step takes longer and longer as the number of subscribers grows.

Indexing and sending email do not need to be triggered by every request. Ideally we wanted to process these operations in batches, sending only 25 messages at a time or sending all new messages every 5 minutes. We decided to use the same code as the database-cache prototype, but that failed, so we had to go back to what Visual Studio .NET 2005 offered.

That led us to the Timer class in the System.Threading namespace. It is very useful, yet few people know it, and even fewer web developers. Once you have built an instance of the class, the Timer invokes the specified callback on a thread from the thread pool at each specified interval. This means your ASP.NET application can run code even when no request is coming in, which is exactly the solution background processing needs. You can have the indexing and email work run in the background instead of on every request.
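
Here is a minimal sketch of the idea; the five-minute interval and the DoBackgroundWork body are assumptions for illustration. Note that you must hold a reference to the Timer (for example, create it at application startup) so it is not garbage collected:

    using System;
    using System.Threading;

    public class BackgroundWork
    {
        // Keep a reference so the timer is not garbage collected.
        private static Timer _timer;

        public static void Start()
        {
            // Run the callback on a thread-pool thread every 5 minutes,
            // starting 5 minutes from now.
            _timer = new Timer(new TimerCallback(DoBackgroundWork),
                               null,
                               TimeSpan.FromMinutes(5),
                               TimeSpan.FromMinutes(5));
        }

        private static void DoBackgroundWork(object state)
        {
            // Placeholder: index new posts and send queued e-mail
            // in batches of 25, rather than once per request.
        }
    }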

There are two problems with the background-processing technique. First, when your application domain is unloaded, the Timer instance stops running; that is, its callback method is no longer invoked. Also, because many threads run inside every CLR process, the Timer may have difficulty getting a thread to execute its callback, or may execute it only after a delay. The ASP.NET layer should use this technique sparingly, so that only a small portion of the threads is taken away from request processing. Of course, if you have a lot of asynchronous work, it may be your only option.

You can download a sample program demonstrating this from http://www.rob-howard.net/; look for the Blackbelt TechEd 2004 sample.

  VII. Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content they generate. If you have an ASP.NET page that outputs HTML, XML, images, or other data, and the code generates the same output for every request, you should consider page output caching.

You enable it simply by adding one line to the top of your page:

<%@ OutputCache Duration="60" VaryByParam="None" %>

This caches the content generated by the first request and regenerates it after 60 seconds. The technique is actually implemented on top of the low-level Cache API. Several attributes can be configured on the page output cache, such as the VaryByParam attribute just described, which controls when separate cached versions are kept, keyed by HTTP GET or HTTP POST parameters. For example, with VaryByParam="Report", the output of default.aspx?Report=1 and default.aspx?Report=2 is cached separately; multiple parameters can be listed, separated by semicolons.
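
The same policy can also be expressed programmatically through the HttpCachePolicy API from a page's code-behind. This minimal sketch (the page class is hypothetical) is roughly equivalent to the directive above, caching for 60 seconds and varying by the Report parameter:

    using System;
    using System.Web;

    public class ReportPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Cache the output publicly for 60 seconds...
            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
            Response.Cache.SetValidUntilExpires(true);

            // ...and keep a separate cached copy per Report value.
            Response.Cache.VaryByParams["Report"] = true;
        }
    }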

Many people do not realize that when you use page output caching, ASP.NET also generates a set of HTTP cache headers that downstream caching servers, such as those used by Microsoft Internet Security and Acceleration Server, can honor to speed up responses. When the HTTP cache headers are set, the content can be cached on those network resources, and when a client requests the content again it no longer has to come from the origin server; it is served directly from the cache.

Page output caching does not make the first request to your page any faster, but it reduces the number of times the server has to regenerate the cached page content. Of course, this is limited to pages that anonymous users may access: once a page is cached, authorization can no longer be performed on it.

  VIII. Use IIS 6.0 Kernel Caching

If your application is not running on IIS 6.0 (Windows Server 2003), you are missing out on some great ways to improve performance. In the seventh technique I discussed improving performance with page output caching. In IIS 5.0, when a request arrives, IIS forwards it to ASP.NET; when page output caching is in effect, the HttpHandler in ASP.NET receives the request, pulls the content out of the cache, and returns it.

If you are using IIS 6.0, it has a wonderful feature called kernel caching, and you do not have to change a line of ASP.NET code to benefit. When ASP.NET receives a cacheable request, IIS's kernel cache gets a copy of the response. From then on, when a request comes in off the network, the kernel layer handles it: if the response is in the kernel cache, the cached data is returned directly, and the request is finished without ever leaving the kernel. This means that when you cache page output with IIS's kernel cache, you see unbelievable performance gains. During the development of ASP.NET in Visual Studio 2005, I was the program manager responsible for ASP.NET performance; my developers used this method, and going through all the daily benchmark tables I found that kernel-mode caching always produced the fastest results. A common characteristic was a huge volume of requests and responses while IIS consumed only about 5% of the CPU. This is amazing. There are many reasons to use IIS 6.0, but kernel caching is the best one.

  IX. Compress Data with gzip

Unless your CPU usage is already too high, it is worth using this technique to improve server performance. Compressing data with gzip reduces the amount of data you send over the wire, speeds up page loads, and reduces network traffic. How well the data compresses depends on what you are sending, and on whether the client's browser supports it (IIS sends gzip-compressed data only to clients that can decompress it, such as IE 6.0 and Firefox). Because less data is sent in each response, your server can handle more requests per second.

The good news is that gzip compression is built into IIS 6.0, and it is better than the gzip compression in IIS 5.0. Unfortunately, you cannot enable gzip compression from the IIS 6.0 properties dialog. The IIS team built the compression feature but forgot to give administrators an easy way to switch it on in the administration UI. To enable gzip compression, you have to go into IIS 6.0's XML configuration file (the metabase) and change the settings there.

In addition to reading this section, have a look at Brad Wilson's article on IIS6 compression: http://www.dotnetdevs.com/articles/IIS6compression.aspx. There is also an article on the basics of ASPX compression, Enable ASPX Compression in IIS. But note that dynamic compression and kernel caching are mutually exclusive in IIS 6.0.

  X. Server Control ViewState

ViewState is an ASP.NET feature that stores state values for a generated page in a hidden form field. When the page is posted back to the server, the server parses, validates, and applies the ViewState data to restore the page's control tree. ViewState is very useful: it lets you persist state on the client without cookies or server memory. Most server controls use ViewState to persist the state of the elements the user interacts with on the page, for example to save the current page number when paging.

Using ViewState also has some drawbacks. First, it enlarges the page, increasing both response and request times. Second, time is spent serializing and deserializing the data on every postback. Finally, it increases memory allocations on the server.

Many server controls make heavy use of ViewState, the well-known DataGrid among them, sometimes even when it is not needed. ViewState is enabled by default; if you do not want to use it, you can turn it off at the control or page level. In a control, simply set the EnableViewState property to false; you can also set it at the page level to extend the scope to the whole page:

<%@ Page EnableViewState="false" %>

If the page requires no postback, or if the page's controls are re-bound and re-rendered on every request, you should turn ViewState off at the page level.
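
A minimal sketch of the control-level switch, for a hypothetical read-only grid that is re-bound on every request:

    using System;
    using System.Web.UI.WebControls;

    public class OrdersPage : System.Web.UI.Page
    {
        protected DataGrid ordersGrid;   // declared in the .aspx markup

        protected void Page_Load(object sender, EventArgs e)
        {
            // The grid is re-bound on every request, so its state does not
            // need to round-trip in the hidden __VIEWSTATE field.
            ordersGrid.EnableViewState = false;
            ordersGrid.DataSource = GetOrders();   // hypothetical helper
            ordersGrid.DataBind();
        }

        private object GetOrders()
        {
            // Placeholder for the real data-access call.
            return new string[] { "order 1", "order 2" };
        }
    }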
