Ten Methods to Improve ASP.NET Performance


Today I happened to read an article about improving ASP.NET performance. I personally found it quite good, so I have sorted it out to share with you, and I hope the experts will offer their own insights on improving performance as well. If anything is wrong, please forgive me and give me your advice.

 

1. Return Multiple Result Sets

Check your database access code for requests that make several round trips to the database. Each round trip reduces the number of requests per second your application can serve. By returning multiple result sets in a single database request, you reduce the communication time with the database, make your system more scalable, and reduce the work the database server does to respond to requests.

If you are using dynamic SQL statements to return multiple result sets, it is better to replace them with stored procedures. Whether business logic belongs in stored procedures is somewhat controversial, but in my opinion, putting it there can limit the size of the returned result set, reduce network traffic, and remove the need to filter data at the logic layer. That is a good thing.

Use the ExecuteReader method of the SqlCommand object to return a strongly typed business object, and then call the NextResult method to move the pointer from one result set to the next; the example returns several ArrayList objects. Returning only the data you actually need from the database can greatly reduce the memory consumed by your server.
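Here is a minimal sketch of that pattern (the connection string, the GetCustomersAndOrders stored procedure, and the column positions are hypothetical stand-ins, not from the original article):

using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;

class MultipleResultSets
{
    static void Main()
    {
        // Hypothetical connection string and stored procedure name.
        string connString = "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI";
        ArrayList customers = new ArrayList();
        ArrayList orders = new ArrayList();

        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand("GetCustomersAndOrders", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // First result set: customer names.
                while (reader.Read())
                    customers.Add(reader.GetString(0));

                // NextResult moves the pointer to the second result set: order IDs.
                if (reader.NextResult())
                    while (reader.Read())
                        orders.Add(reader.GetInt32(0));
            }
        }

        Console.WriteLine("{0} customers and {1} orders in one round trip.",
            customers.Count, orders.Count);
    }
}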

2. Paging Data

The ASP.NET DataGrid has a very useful feature: paging. When paging is enabled, the DataGrid downloads only one page of data at a time, and it provides a paging navigation bar that lets you jump to any page, again downloading only that page's data.

But it has a small drawback: you must bind all of the data to the DataGrid. In other words, your data layer must return all of the data, and the DataGrid then filters out the records needed for the current page. If a result set of 10,000 records is paged through the DataGrid and only 25 records are shown per page, 9,975 records are discarded on every request. Returning such a large result set on every request has a severe impact on application performance.

A good solution is to write a paging stored procedure, such as one for the Orders table of the Northwind database. You only need to pass in two parameters, the current page number and the number of records per page, and the stored procedure returns the corresponding results.

On the server side, I wrote a paging control specifically to handle data paging. Here I used the first technique, returning two result sets from one stored procedure: the total number of records and the requested result set.

The total number of records returned varies with the query being executed; for example, a WHERE clause can limit the size of the result set. The total record count must be returned because the total number of pages is calculated from it. For example, if there are 1,000,000 records in total and a WHERE clause filters them down to 1,000 records, the paging logic of the stored procedure must know that count so it can page over the data actually being displayed.
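As a sketch of the consuming side (GetOrdersPaged and its parameter names are hypothetical stand-ins for the Northwind paging procedure described above), the first result set carries the total record count and the second carries the requested page:

using System.Data;
using System.Data.SqlClient;

class PagedData
{
    // Returns one page of orders; the total record count comes back
    // through the out parameter so the caller can compute the page count.
    static DataTable GetOrdersPage(string connString, int pageIndex,
                                   int pageSize, out int totalRecords)
    {
        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand("GetOrdersPaged", conn)) // hypothetical procedure
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@PageIndex", pageIndex);
            cmd.Parameters.AddWithValue("@PageSize", pageSize);
            conn.Open();

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // First result set: a single row with the total record count.
                reader.Read();
                totalRecords = reader.GetInt32(0);

                // Second result set: the rows for the requested page.
                reader.NextResult();
                DataTable page = new DataTable();
                page.Load(reader);
                return page;
            }
        }
    }
}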

3. Connection Pooling

Using TCP to connect your application to the database is expensive (and time-consuming). Developers on Microsoft's platform can use connection pooling to reuse database connections: a new TCP connection is created only when there is no available connection in the pool, and when a connection is closed it is returned to the pool, where it keeps its link to the database instead of being torn down. This reduces the number of TCP connections made to the database.

Of course, you must watch out for connections you forget to close; close every connection as soon as you are done with it. And let me emphasize: no matter what anyone says, do not count on the GC (garbage collector) in the .NET Framework to call the connection object's Close or Dispose method for you; always close your connections explicitly. Do not expect the CLR to close a connection at a predictable time; although the CLR will eventually destroy the object and force the connection closed, we cannot be sure when it will actually do so.

To optimize use of the connection pool, there are two rules. First, open the connection, process the data, and then close the connection. It is better to open and close the connection several times within a request than to keep one connection open and pass it from method to method. Second, use the same connection string (and the same user identity when you use integrated authentication). If you do not use the same connection string, for example a connection string based on the logged-in user, you will not get the benefit of connection pooling; likewise, if each user connects with their own identity under integrated authentication, the pool cannot be used to full advantage because there are too many distinct users. The .NET CLR provides data performance counters that are very useful when you need to track a program's performance characteristics, including connection pool usage.
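A minimal sketch of both rules (the connection string is a placeholder): build the connection from one shared literal so every call draws on the same pool, open it late, and let using return it to the pool immediately:

using System.Data.SqlClient;

class ConnectionPoolUsage
{
    // One shared connection string: identical strings share one pool.
    const string ConnString =
        "Data Source=.;Initial Catalog=Northwind;Integrated Security=SSPI";

    static int CountOrders()
    {
        // Open late, close early: Dispose (via using) returns the
        // connection to the pool as soon as the work is done.
        using (SqlConnection conn = new SqlConnection(ConnString))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}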

Whenever your application connects to a resource on another machine, such as a database, you should pay close attention to the time spent connecting to that resource, the time spent sending and receiving data, and the number of round trips. Optimizing every process hop in your application is the starting point for improving its performance.

The application layer holds the logic that connects to the data layer and turns the data into meaningful class instances and business processes. For example, in Community Server this is where you assemble a Forums or Threads collection and then apply business logic such as authorization; more importantly, it is also where the caching logic is performed.

4. The ASP.NET Cache API

Before writing a line of application code, the first thing you should do is design the application to make maximum use of the ASP.NET caching features.

If your component runs inside an ASP.NET application, you just need to reference System.Web.dll in your project. Then use the HttpRuntime.Cache property to access the cache (it can also be reached through Page.Cache or HttpContext.Cache).

There are several rules for caching data. First, data that may be used repeatedly is a candidate for caching. Second, data that is accessed very frequently, or accessed infrequently but long-lived, is best cached. The third rule is an often-overlooked problem: sometimes we cache too much data. Typically, on an x86 machine, caching too large a quantity of data in the worker process will raise an out-of-memory error. The cache is therefore limited: you should estimate the size of the cached data set and impose an upper bound on it, or problems will follow. In ASP.NET, an oversized cache reports an out-of-memory error, especially when large DataSet objects are cached.

There are several important caching mechanisms you must understand. First, the cache implements a least-recently-used algorithm: when memory runs low, it automatically evicts the entries that are used least. Second, the cache enforces expiration dependencies; a dependency can be a time, a key, or a file, with time being the most common. ASP.NET 2.0 adds a stronger dependency, the database dependency: when the data in the database changes, the cached item is forcibly evicted.
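A minimal sketch of the read-through pattern with a time-based expiration (the key name and the loader method are hypothetical):

using System;
using System.Web;
using System.Web.Caching;

public static class SiteData
{
    // Fetches a value through the ASP.NET cache, reloading it from the
    // database at most once every five minutes.
    public static string GetSettings()
    {
        string cached = (string)HttpRuntime.Cache["SiteSettings"];
        if (cached == null)
        {
            cached = LoadSettingsFromDatabase();   // hypothetical expensive call
            HttpRuntime.Cache.Insert(
                "SiteSettings",
                cached,
                null,                              // no file or key dependency
                DateTime.Now.AddMinutes(5),        // absolute time expiration
                Cache.NoSlidingExpiration);
        }
        return cached;
    }

    static string LoadSettingsFromDatabase()
    {
        return "settings";                         // stand-in for a real query
    }
}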

5. Per-Request Caching

So far we have made only small performance improvements in individual places, yet together they can yield large gains. Using per-request caching to improve program performance is an excellent example.

Whereas the Cache API is designed to hold data for some period of time, per-request caching holds content only for the duration of a single request. If a piece of data is accessed frequently within a request but only needs to be fetched, applied, modified, or updated once, it is a candidate for per-request caching. An example follows.

In a browser/server forum application, the server controls on every page need the custom data that determines their skin, in order to decide which style sheet and other personalized settings to use. Some of this data may be worth saving for a long time, but some of it, such as the controls' skin data, needs to be fetched only once per request and can then be reused for the rest of that request.

To implement per-request caching, use the ASP.NET HttpContext class. An HttpContext instance is created for every request and can be reached anywhere during the request through the HttpContext.Current property. The HttpContext class has an Items collection property; objects and data added to this collection are cached only for the duration of the request. Just as you use the Cache for frequently accessed long-lived data, you can use HttpContext.Items to cache the basic data every request uses. The logic behind it is simple: we add a piece of data to HttpContext.Items and then read it back from there.
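A minimal sketch of the pattern (the key and the loader are hypothetical): the first caller in a request pays for the lookup, and every later caller in the same request reads it back from Items for free:

using System.Web;

public static class RequestCache
{
    // Returns the current skin settings, fetched at most once per request.
    public static string GetSkin()
    {
        HttpContext context = HttpContext.Current;
        string skin = (string)context.Items["SkinSettings"];
        if (skin == null)
        {
            skin = LoadSkinFromDatabase();     // hypothetical expensive lookup
            context.Items["SkinSettings"] = skin;
        }
        return skin;                           // discarded when the request ends
    }

    static string LoadSkinFromDatabase()
    {
        return "default-skin";                 // stand-in for a real query
    }
}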

6. Background Processing

With the methods above, your application should now run fast, right? In some cases, however, one request in a program may perform a very time-consuming task, such as sending an e-mail or checking the correctness of submitted data.

When we integrated ASP.NET Forums 1.0 into Community Server, we found that submitting a new post was very slow. Each time a post was added, the application first had to check whether the post was a duplicate, then run it through the "badword" filter, check the image attachment code, index the post, add it to the appropriate queue, validate its attachments, and finally send an e-mail to each subscriber's mailbox. Clearly, that is a huge amount of work.

It turned out that most of the time was spent on indexing and sending e-mail. Indexing a post is a very time-consuming operation, and sending e-mail to subscribers requires connecting to the SMTP service and sending a message to every subscriber; as the number of subscribers grows, sending the mail takes longer and longer.

Indexing and sending e-mail do not need to be triggered on every request. Ideally, we wanted to process these operations in batches, sending only 25 e-mails at a time or sending all new e-mails every 5 minutes. We decided to use the same code I had used to prototype database cache invalidation, the code that eventually made its way into Visual Studio 2005.

We found the Timer class in the System.Threading namespace. This class is very useful, yet few people know it, and fewer web developers still. Once an instance is created, the Timer calls the specified callback on a thread from the thread pool at the specified interval. This means your ASP.NET application can run code even when no request is being processed, which is exactly the background-processing solution we needed: indexing and e-mail can run in the background instead of on each request.
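Here is a minimal sketch in Global.asax (the five-minute interval and the batch methods are illustrative assumptions): the timer is created once at application start, held in a static field so it is not garbage collected, and fires its callback on a thread-pool thread with no request involved.

using System;
using System.Threading;
using System.Web;

public class Global : HttpApplication
{
    // Held in a static field so the timer is not garbage collected.
    static Timer _backgroundTimer;

    protected void Application_Start(object sender, EventArgs e)
    {
        _backgroundTimer = new Timer(
            DoBackgroundWork,
            null,
            TimeSpan.FromMinutes(5),    // delay before the first run
            TimeSpan.FromMinutes(5));   // interval between runs
    }

    static void DoBackgroundWork(object state)
    {
        // Hypothetical batch jobs that no longer run per request:
        // IndexNewPosts();
        // SendQueuedEmails();
    }
}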

There are two problems with running work in the background this way. First, when your application domain is unloaded, the Timer instance stops running; that is, the callback method will no longer be invoked. Also, because the CLR keeps many threads running in every process, the Timer may struggle to obtain a thread to execute on, or may execute only after a delay. The ASP.NET layer should use this technique sparingly, keeping the number of such threads low so that requests retain most of the threads for themselves. Of course, if you have a lot of asynchronous work, it may be your only option.

7. Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content they generate. If you have an ASP.NET page that outputs HTML, XML, images, or other data, and the code generates the same output for every request, you should consider page output caching.

Simply add the following line to the top of your page:

<%@ OutputCache Duration="60" VaryByParam="none" %>

This caches the page content generated by the first request, and for 60 seconds that output is served before a new copy of the page is generated. The technique is actually implemented with the low-level Cache API. Several parameters can be configured on the output-cache directive, such as the VaryByParam attribute just shown, which states which request parameters trigger separate cached copies; output can be varied on HTTP GET or HTTP POST parameters. For example, with VaryByParam="Report", the output for requests such as default.aspx?Report=1 and default.aspx?Report=2 is cached separately. Additional parameter names can be separated by semicolons.
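For instance, to cache one copy of the page per Report value, the directive from above becomes:

<%@ OutputCache Duration="60" VaryByParam="Report" %>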

Many people do not realize that when page output caching is used, ASP.NET also generates an HTTP header set that is honored by downstream cache servers, such as those in Microsoft Internet Security and Acceleration Server, to speed up responses. When HTTP cache headers are set, the requested content is cached on those network resources, and when a client requests the content again it is served from the cache instead of the origin server.

Using page output caching this way does not improve your application's raw processing performance, but it does reduce the number of times cached page content has to be loaded from your server. Of course, this is limited to pages that anonymous users can access: once a page is cached downstream, authorization can no longer be performed on it.

8. IIS 6.0 Kernel Caching

If your application does not run on IIS 6.0 (Windows Server 2003), you are missing some good ways to improve application performance. In the seventh method I discussed using page output caching to improve performance. In IIS 5.0, when a request reaches IIS, IIS hands it to ASP.NET; when page output caching is in effect, the HttpHandler in ASP.NET receives the request, pulls the content from the cache, and returns it.

If you use IIS 6.0, it has a very good feature, kernel caching, and you do not have to change any code in your ASP.NET program. When ASP.NET receives a request for cached output, the IIS kernel cache gets a copy of it. From then on, when a request comes in from the network, the kernel layer picks it up first; if the response is cached there, the cached data is returned directly and the request is complete. This means that caching page output with IIS kernel caching gives you an incredible performance boost. At one point during the development of ASP.NET in Visual Studio 2005, I was the program manager responsible for ASP.NET performance; my developers did the measuring while I read all the daily report data, and the kernel-mode caching results were always the fastest. A common characteristic was that the volume of requests and responses on the network was huge while IIS occupied only about 5% of the CPU. This is amazing. There are many reasons to use IIS 6.0, but kernel caching is the best one.

9. Use Gzip Compression

Unless your CPU usage is already too high, it is worth using this technique to improve server performance. Compressing data with gzip reduces the amount of data you send, speeds up page delivery, and cuts network traffic. How well the data compresses depends on what you are sending, and also on whether the client browser supports it (IIS only sends gzip-compressed data to clients that can decompress it; both IE 6.0 and Firefox can). In this way your server can respond to more requests per second: you reduce the amount of data in each response, so you can serve more of them.

The good news is that gzip compression is built into IIS 6.0, and it is better than the gzip in IIS 5.0. Unfortunately, you cannot enable gzip compression from the IIS 6.0 properties dialog. The IIS development team built excellent gzip compression, but forgot to make it easy to turn on from the administrative UI; to enable gzip compression, you have to dig into IIS 6.0's XML configuration file.
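As a rough sketch only (the metabase property names and script location are assumptions to verify against the IIS 6.0 metabase documentation; adsutil.vbs is normally found in C:\Inetpub\AdminScripts), dynamic compression can be switched on from the command line along these lines:

cd C:\Inetpub\AdminScripts
cscript adsutil.vbs set w3svc/filters/compression/parameters/HcDoDynamicCompression true
cscript adsutil.vbs set w3svc/filters/compression/gzip/HcDynamicCompressionLevel 9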

In addition to this article, take a look at the article Brad Wilson wrote about IIS 6 compression. Note, however, that in IIS 6 dynamic compression and kernel caching are mutually exclusive.

10. Server Control ViewState

ViewState is an ASP.NET feature that stores, in a hidden field, the state values used to generate the page. When the page is posted back to the server, the server parses, validates, and applies the ViewState data to restore the page's control tree. ViewState is very useful: it can persist client state without using cookies or server memory. Most server controls use ViewState to persist the state of the elements the user interacts with on the page, for example to save the current page number of a paged control.

ViewState also has some negative effects. First, it enlarges both the server response and the subsequent request, increasing transfer time. Second, every postback spends extra time serializing and deserializing the data. Finally, it consumes more memory on the server.

Many server controls lean heavily on ViewState, the DataGrid among them, even when it is not needed. ViewState is enabled by default; if you do not need it, you can turn it off at the control or page level. On a control, just set its EnableViewState property to false; at the page level, this directive extends the setting to the whole page: <%@ Page EnableViewState="false" %>. If the page never posts back, or if on every request the page merely renders its controls, you should turn off ViewState at the page level.
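A minimal sketch of the control-level switch (the page and grid names are hypothetical):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class ReportPage : Page
{
    protected DataGrid ordersGrid;   // declared in the .aspx markup

    protected void Page_Init(object sender, EventArgs e)
    {
        // The grid is re-bound on every request, so its ViewState holds
        // nothing useful; turn it off to shrink the page payload.
        ordersGrid.EnableViewState = false;
    }
}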

 
