10 Tips for Writing High-Performance Web Applications
Content on this page
Data Layer Performance
Tip 1: Return Multiple Result Sets
Tip 2: Paging Data Access
Tip 3: Connection Pooling
Tip 4: ASP.NET Cache API
Tip 5: Per-Request Caching
Tip 6: Background Processing
Tip 7: Page Output Caching and Proxy Servers
Tip 8: Run IIS 6.0 (If Only for Kernel Caching)
Tip 9: Use Gzip Compression
Tip 10: Server Control View State
Summary
Writing Web applications with ASP.NET is almost unbelievably simple. Because it is so simple, many developers don't take the time to structure their applications for better performance. In this article, I will present 10 tips for writing high-performance Web applications. I won't limit my suggestions to ASP.NET applications, because they are just one subset of Web applications. This article is not intended to be an authoritative guide to performance-tuning Web applications; an entire book could easily be devoted to that. Instead, treat it as a good starting point.
Before I became a workaholic, I used to do a lot of rock climbing. Prior to any big climb, I would first carefully review the routes in a guidebook and read the recommendations made by previous visitors. But no matter how good the guidebook is, you need real rock-climbing experience before attempting a particularly challenging climb. Similarly, you can only learn how to write high-performance Web applications when you're faced with fixing performance problems or running a high-throughput site.
My personal experience comes from having been an infrastructure program manager on the ASP.NET team, during which I ran and managed www.asp.net and helped architect Community Server. Community Server combines several well-known ASP.NET applications (ASP.NET Forums, .Text, and nGallery). I'm sure some of the tips that have helped me in the past will help you as well.
You should consider separating your application into several logical tiers. You may have heard the term 3-tier (or n-tier) physical architecture. These are usually well-defined architecture patterns that physically divide functionality across processes and/or hardware. When the system needs to scale, more hardware can easily be added. However, there is a performance hit associated with process and machine hops, so they should be avoided. So, whenever possible, run your ASP.NET pages and their associated components in the same application.
Because of the separation of code and the boundaries between tiers, using Web services or remoting within a single application will decrease performance by 20 percent or more.
The data tier is a bit different, since it is usually better to have dedicated hardware for the database. However, the cost of a process hop to the database is still high, so performance in the data tier is the first place to look when optimizing your code.
Before diving in to fix an application's performance problems, make sure you profile the application to identify the specific problems. Key performance counters (such as the one that indicates the percentage of time spent performing garbage collection) are also very useful for finding out where an application is spending the majority of its time. Yet the places where time is spent are often quite unintuitive.
This article describes two types of performance improvements: large optimizations, such as using the ASP.NET Cache, and small optimizations. These small optimizations are sometimes the most interesting: a small change to code that is executed thousands and thousands of times can add up to a big gain. With the large optimizations, you might see a big leap in overall performance. With the small ones, a given request may only be shaved by a few milliseconds, but summed across all requests, every day, the improvement can be enormous.
Data Layer Performance
When it comes to performance-tuning an application, there is a simple litmus test you can use to prioritize work: does the code access the database? If so, how often? Note that the same test can be applied to code that uses Web services or remoting, but this article does not cover those.
If a database request is required in a particular code path and you think other areas (such as string manipulation) should be optimized first, stop and perform this litmus test. Unless you have an egregious performance problem, your time would be better spent optimizing the time spent in the database, the amount of data returned, and the frequency of round trips to the database.
With that general information established, let's look at ten tips that can help your application perform better. I'll begin with the changes that can make the biggest difference.
Tip 1: Return Multiple Result Sets
Examine your database code to see if you have request paths that go to the database more than once. Each such round trip decreases the number of requests per second your application can serve. By returning multiple result sets in a single database request, you can cut the total time spent communicating with the database. You'll also be making your system more scalable, since the database server has less work to do managing requests.
While you can return multiple result sets using dynamic SQL statements, I prefer stored procedures. It's arguable whether business logic should reside in a stored procedure, but I think that if logic in a stored procedure can constrain the data returned (reducing the size of the dataset, shortening the time spent on the network, and eliminating the need to filter the data in the logic tier), it's a good thing.
Using a SqlCommand instance and its ExecuteReader method to populate strongly typed business classes, you can move the result set pointer forward by calling NextResult. Figure 1 shows a sample conversation populating several ArrayLists with typed classes. Returning only the data you need from the database will additionally decrease memory allocations on your server.
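As a rough sketch of that pattern (not the actual Figure 1 code), the following assumes a hypothetical stored procedure named GetForumsAndThreads that returns two result sets; real code would construct strongly typed business classes rather than store raw column values:

using System.Collections;
using System.Data;
using System.Data.SqlClient;

static void LoadForumsAndThreads(string connectionString,
                                 ArrayList forums, ArrayList threads)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand("GetForumsAndThreads", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        connection.Open();

        using (SqlDataReader reader = command.ExecuteReader())
        {
            // First result set: forums.
            while (reader.Read())
                forums.Add(reader["Name"]);

            // Move the result set pointer forward to the second
            // result set, returned by the same round trip: threads.
            reader.NextResult();
            while (reader.Read())
                threads.Add(reader["Subject"]);
        }
    }
}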
Tip 2: Paging Data Access
The ASP.NET DataGrid has a nice feature: data paging support. When paging is enabled in the DataGrid, a fixed number of records is shown at a time. Additionally, a paging UI is displayed at the bottom of the DataGrid for navigating through the records. The paging UI allows you to navigate backwards and forwards through the displayed data, showing a fixed number of records at a time.
There's just one little wrinkle: paging with the DataGrid requires all of the data to be bound to the grid. For example, your data layer would need to return all of the data, and the DataGrid would then filter out the records not displayed on the current page. If 100,000 records are returned when paging through the DataGrid, 99,975 records are discarded on each request (assuming a page size of 25 records). As the number of records grows, the performance of the application will suffer, because more and more data must be sent on each request.
One great approach to writing better-performing paging code is to use stored procedures. Figure 2 shows a sample stored procedure that pages through the Orders table in the Northwind database. In a nutshell, all you're doing here is passing in a page index and a page size; the appropriate result set is then calculated and returned.
In Community Server, we wrote a paging server control to do all the data paging. As you'll see, I'm using the idea discussed in Tip 1: returning two result sets from one stored procedure, the total number of records and the requested data.
The total number of records returned can vary depending on the query being executed. For example, a WHERE clause can be used to constrain the data returned. To calculate the total number of pages to display in the paging UI, we must know the total number of records to be returned. For example, if there are 1,000,000 total records and a WHERE clause filters that down to 1,000 records, the paging logic needs to be aware of that total to render the paging UI correctly.
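To make this concrete, here is a minimal sketch of how such a control might call the stored procedure. The procedure name GetOrdersPaged and its parameters are assumptions modeled on Figure 2; the total record count is assumed to be returned as the first result set and the page of data as the second:

using System.Data;
using System.Data.SqlClient;

static DataTable GetOrdersPage(string connectionString, int pageIndex,
                               int pageSize, out int totalRecords)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand("GetOrdersPaged", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;
        command.Parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;

        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            // First result set: a single row with the total record
            // count, needed to render the paging UI.
            reader.Read();
            totalRecords = reader.GetInt32(0);

            // Second result set: just the requested page of records.
            reader.NextResult();
            DataTable page = new DataTable();
            page.Load(reader);
            return page;
        }
    }
}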
Tip 3: Connection Pooling
Setting up the TCP connection between your Web application and SQL Server™ can be an expensive operation. Developers at Microsoft have been able to take advantage of connection pooling for some time now, allowing them to reuse connections to the database. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool, where it remains connected to the database, as opposed to completely tearing down the TCP connection.
Of course, you need to watch out for leaking connections. Always close your connections when you're finished with them. I'll repeat myself: no matter what anyone says about garbage collection within the Microsoft .NET Framework, do not trust the common language runtime (CLR) to clean up and close your connections for you at a predetermined time. The CLR will eventually destroy the class and force the connection closed, but there is no guarantee when the garbage collection on the object will actually happen.
To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's OK to open and close the connection multiple times on each request if you have to, rather than keeping the connection open and passing it in and out of different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example a connection string customized based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, pooling will be much less effective. The .NET CLR Data performance counters can be very useful when attempting to track down any performance issues related to connection pooling.
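A minimal sketch of the open-late/close-early pattern follows; the C# using block guarantees Dispose (and thus Close) is called even when an exception is thrown, returning the connection to the pool immediately rather than waiting for garbage collection (the table name here is illustrative):

using System.Data.SqlClient;

static int GetForumCount(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(
               "SELECT COUNT(*) FROM Forums", connection))
    {
        connection.Open();
        return (int)command.ExecuteScalar();
        // The connection is closed and returned to the pool here,
        // even if ExecuteScalar throws.
    }
}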
Whenever your application connects to a resource running in another process, such as a database, you should optimize by focusing on the time spent connecting to the resource, the time spent sending or retrieving data, and the number of round trips. Optimizing any kind of process hop in your application is the first place to start to achieve better performance.
The application tier contains the logic that connects to the data tier and transforms data into meaningful class instances and business processes. For example, in Community Server, this is where you populate a Forums or Threads collection and apply business rules such as permissions; most importantly, it is where the caching logic is performed.
Tip 4: ASP.NET Cache API
One of the very first things you should do before writing a line of application code is to architect the application tier to maximize and exploit the ASP.NET Cache feature.
If your components are running within an ASP.NET application, you simply need to include a reference to System.Web.dll in your application project. When you need access to the Cache, use the HttpRuntime.Cache property (the same object is also accessible through Page.Cache and HttpContext.Cache).
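The canonical usage is a check-then-populate pattern. Here is a minimal sketch; the cache key and the LoadSettingsFromDatabase helper are illustrative assumptions:

using System.Web;

static object GetSiteSettings()
{
    // Look in the Cache first; fall back to the expensive load and
    // insert the result so subsequent requests find it.
    object settings = HttpRuntime.Cache["SiteSettings"];
    if (settings == null)
    {
        settings = LoadSettingsFromDatabase();
        HttpRuntime.Cache.Insert("SiteSettings", settings);
    }
    return settings;
}

static object LoadSettingsFromDatabase()
{
    // Hypothetical expensive call; imagine a database round trip here.
    return new object();
}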
There are a few rules for caching data. First, if the data may be used more than once, it's a good candidate for caching. Second, if the data is general rather than specific to a given request or user, it's also a good candidate. If the data is user- or request-specific but is long-lived, it can still be cached, but may not be used as frequently. Third, an often overlooked rule is that sometimes you can cache too much. Generally, on an x86 machine, you want to run a process with no more than 800MB of private bytes in order to reduce the chance of an out-of-memory error, so the cache should have a limit. In other words, you may be able to reuse the result of a computation, but if that computation takes 10 parameters, you might attempt to cache on 10 permutations, and this will likely get you into trouble. One of the most common support calls for ASP.NET is out-of-memory errors caused by overcaching, especially of large datasets.
Figure 3 ASP.NET Cache
There are a few great features of the Cache that you need to know. The first is that the Cache implements a least-recently-used algorithm, allowing ASP.NET to force a cache purge, automatically removing unused items from the Cache, if memory is running low. Second, the Cache supports expiration dependencies that can force invalidation. These include time, key, and file. Time is often used, but with ASP.NET 2.0 a new and more powerful invalidation type is being introduced: database cache invalidation. This means items in the cache are automatically removed when data in the database changes. For more information on database cache invalidation, see Dino Esposito's Cutting Edge column in the July 2004 issue of MSDN Magazine. For a look at the architecture of the Cache, see Figure 3.
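For example, here is a sketch of inserting an item with both a file dependency and an absolute time expiration; the entry is evicted when the file changes or after five minutes, whichever comes first (the file path and key are illustrative):

using System;
using System.IO;
using System.Web;
using System.Web.Caching;

static void CacheMenu()
{
    string path = HttpContext.Current.Server.MapPath("~/menu.xml");

    HttpRuntime.Cache.Insert(
        "SiteMenu",
        File.ReadAllText(path),
        new CacheDependency(path),          // evicted when the file changes
        DateTime.UtcNow.AddMinutes(5),      // or after five minutes at most
        Cache.NoSlidingExpiration);
}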
Tip 5: Per-Request Caching
Earlier in this article, I mentioned that small improvements to frequently traversed code paths can lead to big overall performance gains. One of these small improvements is definitely my favorite, and I call it per-request caching.
Whereas the Cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of the request. A particular code path is accessed frequently on each request, but the data only needs to be fetched, applied, modified, or updated once. This sounds fairly theoretical, so let's consider a concrete example.
In Forum applications of Community servers, each server control used on the page requires personalized data to determine the appearance, style sheet, and other personalized data. Some of the data can be cached for a long time, but some of the data is extracted only once for each request, and then reused multiple times during the execution of the request, such as for the control's appearance.
To accomplish per-request caching, use the ASP.NET HttpContext. An HttpContext instance is created with every request and is accessible anywhere during that request from the HttpContext.Current property. The HttpContext class has a special Items collection property; objects and data added to this Items collection are cached only for the duration of the request. Just as you can use the Cache to store frequently accessed data, you can use HttpContext.Items to store data that you'll use only on a per-request basis. The logic behind it is simple: the data is added to the HttpContext.Items collection when it doesn't exist, and on subsequent lookups the data found in HttpContext.Items is simply returned.
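In code, the pattern looks like this minimal sketch; the key name and the LoadPersonalizationData helper are illustrative:

using System.Web;

static object GetPersonalizationData()
{
    const string key = "PersonalizationData";
    HttpContext context = HttpContext.Current;

    // Fetch once per request, reuse for the rest of the request;
    // the Items collection is discarded automatically when the
    // request ends.
    object data = context.Items[key];
    if (data == null)
    {
        data = LoadPersonalizationData();
        context.Items[key] = data;
    }
    return data;
}

static object LoadPersonalizationData()
{
    // Hypothetical helper: imagine one database round trip here.
    return new object();
}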
Tip 6: Background Processing
The path through your code should be as fast as possible, right? There may be times, though, when you find that a task performed on each request, or once every n requests, is very expensive. Sending out e-mails or parsing and validating incoming data are some examples.
When tearing apart ASP.NET Forums 1.0 and rebuilding what became Community Server, we found that the code path for adding a new post was painfully slow. Each time a new post was added, the application first needed to ensure that there were no duplicate posts, then it had to parse the post using a "bad word" filter, parse the post for emoticons, tokenize and index the post, add the post to the appropriate queue when required, and validate any attachments; then, once posted, immediately send e-mail notifications to all subscribers. Clearly, that's a lot of work.
It turned out that most of the time was spent in the indexing logic and in sending e-mails. Indexing a post was a very time-consuming operation, and it was discovered that the built-in System.Web.Mail functionality would connect to an SMTP server and send the e-mails one after another. As the number of subscribers to a particular post or topic area grew, the AddPost function took longer and longer to execute.
Indexing and sending e-mails didn't need to happen on each request. Ideally, we wanted to batch this work, indexing 25 posts at a time or sending all the e-mails every five minutes. We decided to reuse code I had previously written to prototype database cache invalidation, the same work that eventually made it into Visual Studio® 2005.
The Timer class, found in the System.Threading namespace, is wonderfully useful, but probably one of the lesser known classes in the .NET Framework, at least among Web developers. Once created, the Timer invokes the specified callback on a thread from the ThreadPool at a configurable interval. This means you can set up code to execute without an incoming request to your ASP.NET application, an ideal situation for background processing. You can do work such as indexing or sending e-mails in this background process, too.
There are a couple of problems with this technique, though. If your application domain unloads, the timer instance will stop firing its events. In addition, since the CLR has a hard gate on the number of threads per process, it is possible that on a heavily loaded server no threads are available for the timer callback to run on, which can cause delays. ASP.NET tries to minimize the chances of this happening by reserving a certain number of free threads in the process and using only a portion of the total threads for request processing. However, if you have lots of asynchronous work, this can be an issue.
There is not enough room here to include the full code, but you can download a digestible sample at www.rob-howard.net. Take a look at the slides and demos from my Blackbelt TechEd 2004 presentation.
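As a bare-bones illustration of the idea only (not that downloadable sample), a timer that batches work every five minutes might be started from application startup like this; SendQueuedEmails is a hypothetical batch job:

using System;
using System.Threading;

// Keep the Timer rooted in a static field so it isn't garbage collected.
static Timer emailTimer;

static void StartBackgroundWork()
{
    emailTimer = new Timer(
        delegate(object state) { SendQueuedEmails(); },
        null,
        TimeSpan.FromMinutes(5),    // first callback after five minutes
        TimeSpan.FromMinutes(5));   // then every five minutes thereafter
}

static void SendQueuedEmails()
{
    // Hypothetical batch job: drain the outbound e-mail queue here.
}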
Tip 7: Page Output Caching and Proxy Servers
ASP.NET is your presentation layer (or should be). It consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content that they generate. If you have an ASP.NET page that generates output, whether HTML, XML, images, or any other data, and this code generates the same output on each request, you have a great candidate for page output caching.
By simply adding this line to the top of your page:
<%@ OutputCache Duration="60" VaryByParam="none" %>
you can effectively generate the output for the page once and reuse it multiple times for up to 60 seconds, at which point the page will re-execute and the output will once again be added to the ASP.NET Cache. This behavior can also be accomplished using some lower-level programmatic APIs. There are several configurable settings for output caching, such as the VaryByParam attribute just shown. VaryByParam is required, and specifying HTTP GET or HTTP POST parameters lets you vary the cache entries. For example, default.aspx?Report=1 and default.aspx?Report=2 can be cached as separate entries simply by setting VaryByParam="Report". Additional parameters can be named in a semicolon-separated list.
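If you prefer the lower-level programmatic route, a rough sketch of the equivalent from within a page follows; these HttpCachePolicy calls set the cache policy and headers, though the behavior is close to, rather than byte-for-byte identical with, the directive:

// Inside a Page: mark the response as publicly cacheable for 60
// seconds and keep it valid in caches until it expires.
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
Response.Cache.SetValidUntilExpires(true);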
Many people don't realize that when output caching is used, the ASP.NET page also generates a set of HTTP headers for downstream caching servers, such as those used by Microsoft Internet Security and Acceleration Server or by Akamai. When these HTTP cache headers are set, documents can be cached on those network resources, and client requests can be satisfied without having to go back to the origin server.
Using page output caching, then, does not make your application more efficient, but it can reduce the load on your server, since downstream caching technology caches the documents. Of course, this can apply only to anonymous content; once content goes downstream, you won't see those requests anymore and can no longer perform authentication to prevent access to it.
Tip 8: Run IIS 6.0 (If Only for Kernel Caching)
If you're not running IIS 6.0 (Windows Server™ 2003), you're missing out on some great performance enhancements in the Microsoft Web server. In Tip 7, I talked about output caching. In IIS 5.0, a request comes through IIS and then into ASP.NET. When caching is involved, an HttpModule in ASP.NET receives the request and returns the contents from the Cache.
If you're using IIS 6.0, there is a nice little feature called kernel caching that doesn't require any code changes to ASP.NET. When a request is output-cached by ASP.NET, the IIS kernel cache receives a copy of the cached data. When a request comes from the network driver, a kernel-level driver (with no context switch to user mode) receives the request, and if the response is cached, flushes the cached data to the response and completes execution. This means that when you use kernel-mode caching with IIS and the ASP.NET output cache, you'll see unbelievable performance results. At one point during the Visual Studio 2005 development of ASP.NET, I was the program manager responsible for ASP.NET performance. The developers did the actual work, but I got to see all the reports on a daily basis. The kernel-mode caching results were always the most interesting: the most common pattern was a network saturated with requests/responses while IIS ran at only about five percent CPU utilization. It's amazing! There are certainly other reasons to use IIS 6.0, but kernel-mode caching is the most obvious one.
Tip 9: Use Gzip Compression
While not strictly a server-performance technique (since you might see an increase in CPU utilization), using gzip compression can decrease the number of bytes sent by your server. This gives the perception of faster pages and also cuts down on bandwidth usage. Depending on the data being sent, how well it can be compressed, and whether the client browsers support it (IIS will only send gzip-compressed content to clients that support gzip compression, such as Internet Explorer 6.0 and Firefox), your server can serve more requests per second. In fact, just about any time you can decrease the amount of data returned, you will increase requests per second.
The good news is that gzip compression is built into IIS 6.0, and it is much better than the gzip compression used in IIS 5.0. Unfortunately, when attempting to turn on gzip compression in IIS 6.0, you may not be able to locate the setting in the properties dialog of IIS. The IIS team built awesome gzip capabilities into the server, but neglected to include an administrative UI for enabling it. To enable gzip compression, you have to dig into the innards of the XML configuration settings of IIS 6.0 (this isn't for the faint of heart). By the way, credit goes to Scott Forsyth of OrcsWeb, who helped me work through this issue on the www.asp.net servers hosted at OrcsWeb.
Rather than detailing the steps in this article, read the article by Brad Wilson at IIS6 Compression. There's also an article on enabling compression for ASPX at Enable ASPX Compression in IIS. Note, however, that due to implementation details, dynamic compression and kernel caching are mutually exclusive on IIS 6.0.
Tip 10: Server Control View State
View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls. View state is a very powerful capability, since it allows state to be persisted with the client without using cookies or server memory. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example saving the current page being displayed when paging through data.
The use of view state has some drawbacks, however. First, it increases the total payload of the page, both when served and when requested. There is also additional overhead incurred when serializing and deserializing the view state data that is posted back to the server. Lastly, view state increases the memory allocations on the server.
Several server controls tend to make excessive use of view state even when it is not needed, the most notorious being the DataGrid. View state is enabled by default, but if you don't need it, you can turn it off at the control or page level. Within a control, simply set the EnableViewState property to false, or set it globally within the page using this setting:
<%@ Page EnableViewState="false" %>
If you are not doing postbacks on a page, or are always regenerating the controls on the page on each request, you should disable view state at the page level.
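You can also make the same change from code-behind for an individual control; for example, for a hypothetical DataGrid named ordersGrid:

// Disable view state for one heavyweight control instead of the whole page.
ordersGrid.EnableViewState = false;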
Summary
I've offered you some tips that I've found useful for writing high-performance ASP.NET applications. As I mentioned at the beginning of this article, this is a preliminary guide, not the last word on ASP.NET performance. (For more information on improving the performance of ASP.NET applications, see Improving ASP.NET Performance.) The best way to solve a particular performance problem can only be found through your own hands-on experience. However, these tips should give you some good guidance on your journey. In software development, there are very few absolutes; every application is unique.
See "Common Performance Myths" in the abstract ".
Rob Howard is the founder of Telligent Systems, specializing in high-performance Web applications and knowledge management and collaboration systems. Previously, Rob was employed by Microsoft, where he helped design the infrastructure of ASP.NET 1.0, 1.1, and 2.0. You can contact Rob at rhoward@telligentsystems.com.
http://www.microsoft.com/china/msdn/library/webservices/asp.net/us0501ASPNETPerformance.mspx?mfr=true