ASP.NET Performance Tips


Writing a Web application with ASP.NET is unbelievably easy. So easy, in fact, that many developers don't take the time to structure their applications for great performance. In this article, I'm going to present 10 tips for writing high-performance Web applications. My comments are not limited to ASP.NET applications, because they are just one subset of Web applications. This article is not the definitive guide to performance-tuning Web applications; an entire book could be written on that subject. Instead, think of it as a good place to start.
I used to do a lot of rock climbing. Before any big climb, I'd always review the routes in the guidebook and read the recommendations and advice from people who had been there before. But no matter how good the guidebook, you need real climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance Web applications when you're faced with fixing performance problems or running a high-throughput site.
My personal experience comes from working on the ASP.NET team at Microsoft as an infrastructure program manager, from running and managing www.asp.net, and from helping architect Community Server, which is the next version of several well-known ASP.NET applications (ASP.NET Forums, .Text, and nGallery combined into one platform). I'm sure that the tips that have helped me will benefit you as well.
You should think about separating your application into logical tiers. You may have heard of the term 3-tier (or n-tier) physical architecture. These are usually prescribed architecture patterns that physically divide functionality across processes and/or hardware. As the system needs to scale, more hardware can easily be added. However, you should avoid the performance penalty associated with process and machine hops. So, whenever possible, run the ASP.NET pages and their associated components together in the same application.
Because of the boundary separation between code and tiers, using Web services or remoting will decrease performance by 20 percent or more.
The data tier is a bit different, because it is usually better to have dedicated hardware for the database. However, the cost of the process hop to the database is still high, so the data tier should be the first place you look when optimizing your code.
Before diving in to fix performance problems in your application, profile the application to find out exactly where the problems lie. Key performance counter values (such as the counter that indicates the percentage of time spent in garbage collection) are also very useful for finding out where an application is spending most of its time. Yet the places where time is spent are often quite unintuitive.
There are two types of performance improvements described in this article: large optimizations, such as using the ASP.NET Cache, and tiny micro-optimizations that repeat themselves. These micro-optimizations are sometimes the most fun: one small change to code that gets called thousands and thousands of times can have a big effect. With the large optimizations, you may see a huge jump in overall performance. With the micro-optimizations, a given request may only be shaved by milliseconds, but summed across the total number of requests per day, the result can be a significant improvement.

Data Layer Performance

When it comes to performance-tuning an application, there is a simple litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test can also be applied to code that uses Web services or remoting, but that is not covered in this article.
If you have a database request in a particular code path and you see other areas, such as string manipulation, that you want to optimize first, stop and perform the litmus test. Unless you have an egregious performance problem, your time is better spent trying to optimize the time spent connected to the database, the amount of data returned, and the number of round-trips you make to the database.
With that overview in place, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.

Tip 1: Return Multiple Result Sets

Review your database code to see if you have request paths that go to the database more than once. Each of those round-trips decreases the number of requests per second your application can serve. By returning multiple result sets in a single database request, you can cut the total time spent communicating with the database. You'll also make your system more scalable, because you reduce the work the database server does managing requests.
While you can return multiple result sets using dynamic SQL, I prefer to use stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reducing the size of the dataset, the time spent on the network, and the sifting that the logic tier has to do), it's a good thing.
Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the result set pointer forward by calling NextResult. Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database also decreases memory allocations on your server.
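
Figure 1 itself is not reproduced here, so what follows is a minimal sketch of the same pattern under assumed names: the GetForumsAndThreads procedure, its columns, and the Forum class are hypothetical. One stored procedure call returns two result sets, and NextResult moves between them, so both lists are filled in a single round-trip.

using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;

public class Forum
{
    public int ForumId;
    public string Name;

    public Forum(int forumId, string name)
    {
        ForumId = forumId;
        Name = name;
    }
}

public static class MultipleResultSets
{
    public static void LoadForumsAndThreadTitles(string connectionString)
    {
        ArrayList forums = new ArrayList();
        ArrayList threadTitles = new ArrayList();

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("dbo.GetForumsAndThreads", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                // First result set: forums.
                while (reader.Read())
                    forums.Add(new Forum(reader.GetInt32(0), reader.GetString(1)));

                // Move the pointer forward to the second result set: thread titles.
                if (reader.NextResult())
                {
                    while (reader.Read())
                        threadTitles.Add(reader.GetString(0));
                }
            }
        }
        // Both lists are now populated from a single database round-trip.
    }
}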

Tip 2: Paged Data Access

The ASP.NET DataGrid exposes a wonderful capability: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time. Additionally, a paging user interface is displayed at the bottom of the DataGrid for navigating through the records. The paging UI allows you to navigate backwards and forwards, displaying a fixed number of records at a time.
There is one wrinkle: paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer must return all of the data, and then the DataGrid filters out the displayed records based on the current page. If 100,000 records are returned as you page through the DataGrid, 99,975 records are discarded on each request (assuming a page size of 25). As the number of records grows, the performance of the application will suffer, because more and more data must be sent on each request.
One good approach to writing better paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you do is pass in the page index and the page size. The appropriate result set is calculated and then returned.
In Community Server, we wrote a paging server control to do all the data paging. You'll see that I used the ideas discussed in Tip 1, returning two result sets from one stored procedure: the total number of records and the requested data.
The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. The total number of records must be known in order to calculate the total number of pages to display in the paging UI. For example, if there are 1,000,000 records in total and a WHERE clause filters that down to 1,000 records, the paging logic needs to know the total record count to render the paging UI correctly.
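
Figure 2 is likewise not reproduced, so here is a minimal sketch of the calling pattern that combines Tips 1 and 2: one round-trip returns the total count and the page's rows. Two assumptions differ from the article: the SQL is inlined for brevity rather than placed in a stored procedure, and it uses ROW_NUMBER(), which requires SQL Server 2005 or later rather than the Northwind-era technique Figure 2 likely used.

using System;
using System.Data;
using System.Data.SqlClient;

public static class PagedDataAccess
{
    public static int LoadPage(string connectionString, int pageIndex, int pageSize)
    {
        const string sql = @"
SELECT COUNT(*) FROM dbo.Orders;

SELECT OrderID, OrderDate
FROM (SELECT OrderID, OrderDate,
             ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNum
      FROM dbo.Orders) AS Paged
WHERE RowNum BETWEEN @PageIndex * @PageSize + 1
                 AND (@PageIndex + 1) * @PageSize;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@PageIndex", pageIndex); // zero-based
            command.Parameters.AddWithValue("@PageSize", pageSize);
            connection.Open();

            using (SqlDataReader reader = command.ExecuteReader())
            {
                // First result set: total record count, needed by the paging UI.
                reader.Read();
                int totalRecords = reader.GetInt32(0);

                // Second result set: only the rows for this page.
                reader.NextResult();
                while (reader.Read())
                {
                    int orderId = reader.GetInt32(0);
                    // OrderDate is nullable in Northwind, so guard the read.
                    DateTime? orderDate = reader.IsDBNull(1)
                        ? (DateTime?)null
                        : reader.GetDateTime(1);
                    // ... populate a business object or bind to the grid ...
                }
                return totalRecords;
            }
        }
    }
}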

Tip 3: Connection Pooling

Setting up a TCP connection between your Web application and SQL Server is an expensive operation. Developers at Microsoft have been taking advantage of connection pooling for a long time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool, where it remains connected to the database, as opposed to completely tearing down the TCP connection.
Of course, you need to watch out for leaking connections: always close your connections when you're finished with them. I repeat: no matter what anyone says about garbage collection in the Microsoft .NET Framework, always explicitly call Close or Dispose on your connection object when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when garbage collection on the object will actually happen.
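
In C#, the simplest way to honor this rule is a using block, which calls Dispose (and therefore Close) even when an exception is thrown, returning the connection to the pool immediately rather than whenever the GC gets around to it. A minimal sketch, assuming a Northwind-style Orders table:

using System;
using System.Data.SqlClient;

public static class PooledConnectionExample
{
    public static int CountOrders(string connectionString)
    {
        // Dispose runs automatically when the using block exits,
        // handing the underlying connection back to the pool.
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command =
            new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}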
To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's fine to open and close the connection multiple times per request if you have to; it's better than keeping the connection open and passing it around between different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example because you customize it based on the logged-in user, you won't get the same optimization value that connection pooling provides. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective. The .NET CLR Data performance counters can be very useful when you attempt to track down any performance issues related to connection pooling.
Whenever your application connects to a resource running in another process, such as a database, you should optimize the time spent connecting, the time spent sending and receiving data, and the number of round-trips. Optimizing any kind of process hop in your application is the first step toward better performance.
The application tier contains the logic that connects to the data tier and transforms data into meaningful class instances and business processes. In Community Server, for example, this is where you populate the Forums and Threads collections and apply business rules such as permissions; most importantly, it is where the caching logic is performed.

Tip 4: The ASP.NET Cache API

One of the very first things you should do before writing a line of application code is architect the application tier to take maximum advantage of the ASP.NET Cache feature.
If your components run within an ASP.NET application, you simply need to reference System.Web.dll in your application project. When you need access to the cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).
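
A minimal sketch of the usual check-then-populate pattern against that property; the key and the LoadFromDatabase helper are hypothetical, not part of the Cache API itself.

using System;
using System.Web;
using System.Web.Caching;

public static class CachedLookup
{
    // Check the cache first; on a miss, load the data and cache it
    // with a five-minute sliding expiration.
    public static object GetData(string key)
    {
        object data = HttpRuntime.Cache[key];
        if (data == null)
        {
            data = LoadFromDatabase(key); // hypothetical data-tier call
            HttpRuntime.Cache.Insert(
                key,
                data,
                null,                        // no cache dependency
                Cache.NoAbsoluteExpiration,  // no fixed expiration time
                TimeSpan.FromMinutes(5));    // sliding expiration window
        }
        return data;
    }

    private static object LoadFromDatabase(string key)
    {
        return "data for " + key; // placeholder for the real query
    }
}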
There are several rules for caching data. First, if data can be used more than once, it's a good candidate for caching. Second, if data is general rather than specific to a given request or user, it's a great candidate for the cache. If the data is user- or request-specific but long-lived, it can still be cached, though it may be used less frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally, on an x86 machine, you want to keep a process below roughly 800 MB of private bytes to reduce the chance of out-of-memory errors, so caching should be bounded. In other words, you may be able to reuse the result of a computation, but if that computation takes 10 parameters, you might attempt to cache 10 permutations, and that is likely to get you into trouble. The most common support request for ASP.NET is the out-of-memory error caused by overcaching, especially of large datasets.
There are a few important features of the Cache that you must understand. First, the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a cache purge if available memory drops to a low level, automatically removing unused items from the cache. Second, the Cache supports expiration dependencies that can force invalidation; these include time, key, and file. Time is often used, but ASP.NET 2.0 introduces a more powerful invalidation type: database cache invalidation. This refers to the automatic removal of cache entries when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN Magazine. For the architecture of the cache, see Figure 3.


Figure 3 ASP.NET Cache

Tip 5: Per-Request Caching

Earlier in this article, I mentioned that small improvements to frequently traversed code paths can lead to big gains in overall performance. One of my favorites of these I call per-request caching.
Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of a single request. A particular code path is accessed frequently on each request, but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.
In the Forums application of Community Server, each server control used on a page requires personalization data to determine which skin to use, which style sheet to use, as well as other personalization data. Some of this data can be cached for a long time, but some of it, such as the skin used for the controls, is fetched only once per request and reused multiple times during the execution of that request.
To accomplish per-request caching, use the ASP.NET HttpContext. An instance of HttpContext is created with every request and is accessible anywhere during that request through the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind it is simple: data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
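
A minimal sketch of the pattern, assuming a hypothetical LoadSkinFromDatabase helper as the expensive lookup:

using System.Web;

public static class PerRequestCache
{
    public static object GetSkin(string key)
    {
        HttpContext context = HttpContext.Current;
        object skin = context.Items[key];
        if (skin == null)
        {
            skin = LoadSkinFromDatabase(key); // hypothetical expensive fetch
            context.Items[key] = skin;        // lives only until this request ends
        }
        return skin;
    }

    private static object LoadSkinFromDatabase(string key)
    {
        return "skin for " + key; // placeholder for the real lookup
    }
}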

Tip 6: Background Processing

The path through your code should be as fast as possible, right? There may be times, though, when you find yourself performing expensive tasks on each request or once every n requests. Sending e-mail or parsing and validating incoming data are some examples.
When tearing apart ASP.NET Forums 1.0 and rebuilding it as part of Community Server, we found that the code path for adding a new post was painfully slow. Each time a post was added, the application first had to ensure there were no duplicate posts, then parse the post with a "badword" filter, parse it for emoticons, tokenize and index the post, add the post to the moderation queue when required, and validate attachments. Then, once posted, send e-mail notifications out to any subscribers. Clearly, that's a lot of work.
We found that most of the time was spent in the indexing logic and in sending e-mail. Indexing a post was a time-consuming operation, and the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails serially. As the number of subscribers to a particular post or topic area increased, the AddPost function would take longer and longer to run.
E-mail and indexing weren't needed on every request. Ideally, we wanted to batch this work together, indexing 25 posts at a time or sending all the e-mails once every five minutes. The code we decided to use was the same code I had used to prototype database cache invalidation, which eventually made it into Visual Studio 2005.
The Timer class in the System.Threading namespace is wonderfully useful, but little known in the .NET Framework, at least among Web developers. Once created, the Timer invokes the specified callback on a thread from the thread pool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mail in this background process, too.
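
A minimal sketch of the technique (not the downloadable sample mentioned below); the DoBackgroundWork batch routine is hypothetical:

using System;
using System.Threading;

public static class BackgroundWork
{
    // Keep a reference so the timer isn't garbage collected.
    private static Timer timer;

    public static void Start()
    {
        // Invoke the callback on a thread-pool thread every five minutes,
        // independent of any incoming request.
        timer = new Timer(new TimerCallback(DoBackgroundWork),
                          null,
                          TimeSpan.FromMinutes(5),
                          TimeSpan.FromMinutes(5));
    }

    private static void DoBackgroundWork(object state)
    {
        // Placeholder: batch-index queued posts, send queued e-mails, etc.
    }
}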
There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, you can get into a situation on a heavily loaded server where there are no threads left to service the timer, causing delays. To minimize the chances of this happening, ASP.NET reserves a certain number of free threads in the process and uses only a portion of the total threads for request processing. However, if you have a lot of asynchronous work, this can be an issue.
There is not enough room to list the full code here, but you can download a digestible sample from http://www.rob-howard.net, along with the slides and demos from the Blackbelt TechEd 2004 presentation.

Tip 7: Page Output Caching and Proxy Servers

ASP.NET is your presentation layer (or it should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and you run this code on each request and it generates the same output, you have a great candidate for page output caching.
You just need to add this line at the top of the page:

<%@ OutputCache Duration="60" VaryByParam="None" %>
This effectively generates the output for this page once and reuses it multiple times for up to 60 seconds, at which point the page re-executes and the output is once again added to the ASP.NET Cache. The same behavior can also be accomplished with some lower-level programmatic APIs. The output cache has several configurable settings, such as the VaryByParam attribute just shown. VaryByParam is required, and it lets you specify HTTP GET or HTTP POST parameters that vary the cache entries. For example, default.aspx?Report=1 and default.aspx?Report=2 could each be output-cached simply by varying on the Report parameter, as shown below. Additional parameters can be named in a semicolon-separated list.
When using the output cache mechanism, many people don't realize that the ASP.NET page also generates a set of HTTP headers for downstream caching servers, such as those used by Microsoft Internet Security and Acceleration Server or by Akamai. When these HTTP cache headers are set, the documents can be cached on those network resources, and client requests can be satisfied without going back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can potentially reduce the load on your server as downstream caching technology caches documents. Of course, this can only be anonymous content; once it goes downstream, you won't see the requests anymore and cannot perform authentication to prevent access to it.
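For that Report example, the directive would look like this:

<%@ OutputCache Duration="60" VaryByParam="Report" %>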

Tip 8: Run IIS 6.0 (If Only for Kernel Caching)

If you are not running IIS 6.0 (Windows Server 2003), you are missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then to ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request and returns the contents from the cache.
If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (no context switch to user mode) receives the request, and if it is cached, flushes the cached data to the response and completes execution. This means that when you use kernel-mode caching together with the ASP.NET output cache, you'll see unbelievable performance results. During the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the real work, but I got to see all the reports on a daily basis. The kernel-mode caching results were always the most interesting. The typical characteristic was requests/responses saturating the network, with IIS running at only about 5% CPU utilization. Amazing! There are certainly other reasons to use IIS 6.0, but kernel-mode caching is an obvious one.

Tip 9: Use Gzip Compression

While not necessarily a server performance technique (since you may see CPU utilization increase), using gzip compression can decrease the number of bytes sent by your server. This makes pages feel faster and also cuts down on bandwidth usage. The effect of compression depends on the data sent and on whether the client browser supports it (IIS only sends gzip-compressed content to browsers that support it, such as Internet Explorer 6.0 and Firefox), but it allows your server to serve more requests per second. In fact, just about any time you reduce the amount of data returned, you increase requests per second.
The good news is that gzip compression is built into IIS 6.0, and it is better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting in the IIS properties dialog. The IIS team built excellent gzip capabilities into the server, but neglected to include an administrative UI for enabling them. To enable gzip compression, you have to dig into the XML configuration settings of IIS 6.0 (which requires you to be familiar with them). By the way, credit goes to Scott Forsyth of OrcsWeb, who helped me get this issue sorted out for the http://www.asp.net servers hosted at OrcsWeb.
Rather than reproduce the whole procedure in this article, go read the article by Brad Wilson on IIS6 compression. There is also a Microsoft Knowledge Base article on enabling compression for ASPX pages, "Enable ASPX Compression in IIS." You should note, however, that dynamic compression and kernel caching are mutually exclusive in IIS 6.0 due to implementation details.

Tip 10: Server Control View State

View state is a fancy name for ASP.NET storing some state data in a hidden input field in the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability, since it allows state to be persisted with the client and requires no cookies or server memory to store it. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page being displayed when paging through data.
There are a number of drawbacks to the use of view state, however. First, it increases the total payload of the page, both when served and when requested. There is also additional overhead incurred when serializing or deserializing view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed. View state is enabled by default, but if you don't need it, you can turn it off at the control or page level. Within a control, you simply set the EnableViewState property to false, or you can set it globally within the page using this setting:

<%@ Page EnableViewState="false" %>
If you do not post back from a page, or if you always regenerate the controls on a page on each request, you should disable view state at the page level.
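At the control level, a minimal sketch (the grid's ID is hypothetical):

<asp:DataGrid id="OrdersGrid" runat="server" EnableViewState="false" />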

Conclusion

I have offered you some tips that I've found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is a preliminary guide rather than the last word on ASP.NET performance. (For more information on improving the performance of ASP.NET applications, see "Improving ASP.NET Performance.") Only through your own experience can you find the best way to solve your specific performance problems. In any case, these tips should serve you well along the way. In software development, there are few absolutes; every application is unique.

-- Common Performance Myths

One of the most common myths is that C# code is faster than Visual Basic code. There is a grain of truth in this, since it is possible to perform several performance-hindering actions in Visual Basic that are not possible in C#, such as not explicitly declaring types. But if good programming practices are followed, there is no reason why Visual Basic and C# code cannot execute with nearly identical performance. Put more succinctly, the same code produces the same results.
Another myth is that code-behind is faster than inline code, which is absolutely false. It doesn't matter where the code for your ASP.NET application lives, whether in a code-behind file or inline on the ASP.NET page. Sometimes I prefer inline code, because changes don't incur the same update costs as code-behind. For example, with code-behind, the entire code-behind DLL must be updated, which can be a scary proposition.
The third myth is that components are faster than pages. This was true in Classic ASP, where compiled COM servers were much faster than VBScript, but it does not apply to ASP.NET, where both pages and components are classes. Whether your code is inline in a page, in a code-behind file, or in a separate component makes little performance difference. The separation does let you group functionality logically, but it makes no difference with regard to performance.
The last myth I want to dispel is that Web services should be used to implement functionality between two applications. Web services should be used to connect disparate systems or to provide remote access to system functionality and behaviors. They should not be used internally to connect two similar systems. While they are easy to use, there are much better alternatives. The worst thing you can do is use Web services for communication between ASP and ASP.NET applications running on the same server.
