1 Close the database connection immediately after use
Accessing a database requires opening a connection and closing it again, and these operations in turn require several round trips to the database for authentication and server-side negotiation.
Connection pooling in ADO.NET reduces the performance impact of repeatedly opening and closing connections: the system places the user's database connection in a connection pool, hands it out on request, reclaims it when the connection is closed, and holds it ready for the next connection request.
The size of the connection pool is limited. If the pool is at its maximum and a connection must still be created, the performance impact is significant. Therefore, open a connection only when you really need the database and close it immediately after use, so as to minimize the time connections are held open and avoid exceeding the connection limit.
Use using (recommended):
using (SqlConnection conn = new SqlConnection(connStr))
{
    conn.Open();
    // ... work with the connection; Dispose closes it, no explicit Close() needed
}
Or:
try { conn.Open(); /* ... */ }
catch { /* handle the error */ }
finally { conn.Close(); }
2 Use stored procedures as much as possible and optimize query statements
A stored procedure is a set of precompiled SQL statements stored on the server, similar to a batch file under DOS. It has direct access to the database and processes data extremely quickly. Using stored procedures avoids compiling the same commands repeatedly: once executed, the compiled plan resides in the server's cache, and later calls simply reuse the cached binary. Of all the data access methods provided by the .NET Framework, SQL Server-based data access is the recommended choice for building high-performance, scalable web applications, and you can gain additional performance by using the managed SQL Server provider with compiled stored procedures instead of ad hoc queries.
In addition, a stored procedure runs on the server side, independent of the ASP.NET program, and is easy to modify; most importantly, it reduces the amount of database operation text transmitted over the network.
Optimize query statements
In ASP.NET, ADO.NET connections consume a large amount of resources, and the longer a SQL statement runs, the longer those resources are held. Therefore use optimized SQL statements to reduce execution time: for example, avoid subqueries in query statements where possible, return only the fields you actually need, and take advantage of indexes.
3 Use SqlDataReader for read-only data access; do not use a DataSet
The SqlDataReader class provides a forward-only, read-only stream of data retrieved from a SQL Server database. Where your ASP.NET application allows it, the SqlDataReader class offers higher performance than the DataSet class, because SqlDataReader reads data directly from the database connection using SQL Server's native network data transfer format. In addition, the SqlDataReader class implements the IEnumerable interface, which also lets you bind it to server controls. The DataSet, by contrast, is a powerful, offline-capable in-memory database, and that power has a relatively high performance cost.
SqlDataReader advantages: it reads data very quickly, and if you do not need to do much processing on the returned data, its performance is far better than the DataSet's. Disadvantage: the database connection cannot be closed until all the data has been read.
A DataSet reads everything into memory, which is slower. Disadvantage: high memory footprint. Advantage: if you need to do a lot of processing on the returned data, it reduces connection overhead by connecting to the database only once.
In general, use SqlDataReader when you read a large amount of data and do little processing on it; a DataSet is more appropriate when the returned data needs heavy processing. The choice between SqlDataReader and DataSet depends on what the program has to do.
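A minimal sketch of the SqlDataReader pattern described above; the connection string and the Users table are hypothetical placeholders for your own environment:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        // Hypothetical connection string and table; adjust for your environment.
        string connStr = "Server=.;Database=Test;Integrated Security=true";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("SELECT Id, Name FROM Users", conn))
        {
            conn.Open();
            // CommandBehavior.CloseConnection closes the connection as soon as
            // the reader is closed, holding it open as briefly as possible.
            using (SqlDataReader rd = cmd.ExecuteReader(CommandBehavior.CloseConnection))
            {
                while (rd.Read())
                    Console.WriteLine("{0}: {1}", rd.GetInt32(0), rd.GetString(1));
            }
        }
    }
}
```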
4 Data binding with DataBinder
The general binding method is <%# DataBinder.Eval(Container.DataItem, "field name") %>.
DataBinder.Eval is convenient: you do not need to care about the data source (DataReader or DataSet) or about the data type, because Eval converts the data object to a string for you. But Eval does a lot of work under the hood using reflection, and that convenience costs performance.
Look at what <%# DataBinder.Eval(Container.DataItem, "field name") %> actually binds: when a DataSet is bound, Container.DataItem is in fact a DataRowView (if the binding source is a data reader, it is an IDataRecord), so casting directly to DataRowView gives a great boost to performance.
It is recommended to bind data with <%# CType(Container.DataItem, DataRowView).Row("field name") %>. Pay attention to two points when using it:
1. Add <%@ Import Namespace="System.Data" %> to the page;
2. Note the case of the field name (particularly important): if it does not match the query, in some cases this can be even slower than <%# DataBinder.Eval(Container.DataItem, "field name") %>. To increase the speed further you can use <%# CType(Container.DataItem, DataRowView).Row(0) %>, though its readability is poor.
The above is the VB syntax; in C# it is <%# ((DataRowView)Container.DataItem)["field name"] %>.
5 Returning multiple result sets
Whether you use SqlDataReader or DataSet, return multiple result sets from a single command and then process each in turn with rd.NextResult() or ds.Tables[i]. This reduces the number of duplicate connections to the database; and where possible, prefer more efficient SQL over complex second-pass processing of a DataSet.
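A sketch of processing multiple result sets from one command with NextResult(); the connection string, tables, and batched query are hypothetical:

```csharp
using System;
using System.Data.SqlClient;

class MultiResultDemo
{
    static void Main()
    {
        // Hypothetical connection string; one command returns two result sets.
        string connStr = "Server=.;Database=Test;Integrated Security=true";
        string sql = "SELECT Id FROM Users; SELECT Id FROM Orders;";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (SqlDataReader rd = cmd.ExecuteReader())
            {
                do
                {
                    while (rd.Read())
                        Console.WriteLine(rd.GetInt32(0));
                } while (rd.NextResult());   // advance to the next result set
            }
        }
    }
}
```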
Two: Page optimization
1 Do not use unnecessary server controls
In ASP.NET, server-side controls greatly ease development, but they can also cost performance, because each use of a server control can cause a round trip to the server. So when they are not necessary, use server controls sparingly. In many cases, simple rendering or data binding is more efficient than a server control, even when a control template would work. However, if you need to programmatically manipulate a control's properties, handle server control events, or save data in view state, a server control is appropriate.
So wherever possible, prefer functionality that HTML controls can implement on the client side (with JavaScript) to reduce the load on the server.
2 Do not use unnecessary ViewState
By default, ASP.NET enables ViewState (view state) for all server controls, but ViewState has to carry state between client and server, which costs performance. When you must use a server control, consider disabling its ViewState.
Save server control view state only when necessary. Automatic view state management is a feature of server controls: it lets them repopulate their property values across a round trip without your writing any code. But because the control's view state travels to and from the server in a hidden form field, this feature does have a performance cost. You should know when view state helps and when it hurts page performance. For example, if you bind a server control to data on every round trip, the saved view state is simply replaced by the new values from the data-binding operation, so disabling view state saves processing time.
By default, view state is enabled for all server controls. To disable it, set the control's EnableViewState property to False, as in the following DataGrid server control example:
<asp:DataGrid EnableViewState="false" runat="server" />
You can also use the @ Page directive to disable view state for an entire page, which is useful when you do not post back from the page to the server: <%@ Page EnableViewState="false" %>
Note: the EnableViewState attribute is also supported in the @ Control directive, which lets you control whether view state is enabled for a user control.
To analyze how much view state the server controls on a page use, enable tracing for the page (by including the Trace="true" attribute in the @ Page directive) and look at the Viewstate column of the Control Hierarchy table.
3 Avoid unnecessary round trips to the server
Although you will usually want to make the most of the time- and code-saving features of the Web Forms page framework, in some cases ASP.NET server controls and postback event handling are inappropriate.
Typically, you need to start a round trip to the server only when data is being retrieved or stored. For example, validating user input from an HTML form can often be done on the client before the data is submitted. If you do not need to pass information to the server to store it in a database, do not write code that causes a round trip.
If you develop custom server controls, consider having them render client-side code for browsers that support ECMAScript. Using server controls in this way can significantly reduce the number of times information is needlessly sent to the web server.
Use Page.IsPostBack to avoid unnecessary processing of round trips
If the code that handles a server control postback differs from the code that should run when the page is first requested (rather than when the user submits the HTML form contained in the page), use the Page.IsPostBack property to execute code conditionally. For example, on the first request you might create a database connection and a command that binds data to a DataGrid server control.
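A sketch of such a Page_Load; the connection string, query, and grid name (MyDataGrid) are placeholders:

```csharp
// In the code-behind of a Web Forms page (System.Web assumed available).
using System;
using System.Data;
using System.Data.SqlClient;

public partial class DemoPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Bind only on the first request; postbacks reuse the posted values.
        if (!Page.IsPostBack)
        {
            string connStr = "Server=.;Database=Test;Integrated Security=true"; // placeholder
            using (SqlConnection conn = new SqlConnection(connStr))
            using (SqlDataAdapter da = new SqlDataAdapter("SELECT Id, Name FROM Users", conn))
            {
                DataSet ds = new DataSet();
                da.Fill(ds);                // adapter opens/closes the connection
                MyDataGrid.DataSource = ds; // hypothetical DataGrid on the page
                MyDataGrid.DataBind();
            }
        }
    }
}
```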
Because the Page_Load event runs on every request, such code checks whether the IsPostBack property is False: if so, the initialization code executes; if the property is True, it does not.
Note: even if you do not run this check, the behavior of the postback page does not change, because the Page_Load code executes before the server control events and only the results of the server control events are rendered on the output page. Without the check, however, the Page_Load event and any server control events on the page all still perform their processing.
4 Disable session state when it is not used, and rely on it as little as possible in program development
Not all applications or pages require per-user session state; you should disable session state for any application or page that does not need it.
To disable session state for a page, set the EnableSessionState attribute in the @ Page directive to False. For example: <%@ Page EnableSessionState="false" %>
Note: if the page needs to read session variables but will not create or modify them, set the EnableSessionState attribute in the @ Page directive to ReadOnly.
To disable session state for an application, set the mode attribute to Off in the sessionState configuration section of the application's Web.config file. For example: <sessionState mode="Off" />
5 Use the DataGrid control (GridView in ASP.NET 2.0) wisely
The DataGrid control has the most powerful data display capabilities, with built-in support for editing, deleting, adding, paging, and much more. But if you only need to display data simply, the DataGrid is not the best choice: its paging feature and the way it stores data (in ViewState) impose a performance overhead that is not negligible, however easy and quick the control is for the developer to use.
The DataList control is much less powerful than the DataGrid, but far more customizable. Its distinctive multi-column data display is convenient, and it can implement most of what the DataGrid can.
The Repeater control has the fewest features, but its customizability is very strong, and because of the reduced functionality it costs the server the least performance.
Therefore, when you simply display a list of data, choosing the Repeater or DataList control still achieves your purpose while reducing the performance overhead.
Suggested order of preference: Repeater first, then DataList, and finally DataGrid (GridView).
6 Paging the data
ASP.NET's DataGrid (the GridView in ASP.NET 2.0) has a very useful feature: paging. When the DataGrid has paging enabled, it displays only one page of data at a time and provides a paging navigation bar for choosing which page to browse.
But it has a drawback: you must bind all the data to the DataGrid. That means your data layer must return the entire result set, from which the DataGrid filters out the rows needed for the current page. If a 10,000-record result set is paged through the DataGrid at 25 rows per page, 9,975 records are thrown away on every request. Returning such a large result set on each request has a significant impact on application performance.
A good solution is to write a paging stored procedure, such as:
CREATE PROCEDURE dbo.sp_DataPages
    @Sql NVARCHAR(2000),
    @PK VARCHAR(50),      -- primary key field name (any data type; no table name prefix, e.g. not users.id)
    @Order VARCHAR(50),   -- sort expression (with DESC or ASC, several fields allowed; no table name prefix, e.g. not users.id DESC)
    @Page INT,
    @PageSize INT = 30
AS
SET NOCOUNT ON
DECLARE @Str NVARCHAR(4000)
IF (@Page = 1)
    SET @Str = 'SELECT TOP ' + CAST(@PageSize AS VARCHAR(10)) + ' * FROM (' + @Sql + ') AS Table1 ORDER BY Table1.' + @Order
ELSE
    SET @Str = 'SELECT TOP ' + CAST(@PageSize AS VARCHAR(10)) + ' * FROM (' + @Sql + ') AS Table1'
             + ' WHERE Table1.' + @PK + ' NOT IN (SELECT TOP ' + CAST(@PageSize * (@Page - 1) AS VARCHAR(10)) + ' ' + @PK
             + ' FROM (' + @Sql + ') AS Table2 ORDER BY Table2.' + @Order + ')'
             + ' ORDER BY Table1.' + @Order
EXEC sp_executesql @Str
Alternatively, use the ROW_NUMBER() function in SQL Server 2005:
DECLARE @sql NVARCHAR(4000)  -- sp_executesql requires NVARCHAR
SET @sql = 'SELECT *'
    + ' FROM'
    + ' ('
    + '   SELECT ROW_NUMBER() OVER (ORDER BY id DESC) AS RowNumber, *'
    + '   FROM Users'
    + '   WHERE id > 0 AND name <> '''' '
    + ' ) AS Table1'
    + ' WHERE RowNumber BETWEEN ' + STR((@page - 1) * @pagesize + 1) + ' AND ' + STR(@page * @pagesize)
    + ' ORDER BY id DESC'
EXEC sp_executesql @sql
(Tip: keep the total record count in the Cache or Session to improve paging performance.)
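A hedged sketch of calling the sp_DataPages procedure above from ADO.NET; the connection string and the inner query are placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

static class PagingClient
{
    // Fetch one page of rows via the paging stored procedure.
    public static DataTable GetPage(string connStr, int page, int pageSize)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("dbo.sp_DataPages", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Sql", "SELECT id, name FROM Users"); // placeholder query
            cmd.Parameters.AddWithValue("@PK", "id");
            cmd.Parameters.AddWithValue("@Order", "id DESC");
            cmd.Parameters.AddWithValue("@Page", page);
            cmd.Parameters.AddWithValue("@PageSize", pageSize);

            DataTable dt = new DataTable();
            new SqlDataAdapter(cmd).Fill(dt);   // the adapter opens/closes the connection
            return dt;
        }
    }
}
```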
7 Do not disable buffering for Web Forms pages
Leave buffering on unless you have a special reason to turn it off. Disabling buffering of Web Forms pages causes significant performance overhead.
Enable page output buffering
If buffering has been turned off, you can turn it back on in the following ways.
To turn on page output buffering programmatically:
Response.BufferOutput = true;
To turn on page output buffering with the @ Page directive:
<%@ Page Buffer="true" %>
Or use the <pages> node of the Web.config or Machine.config configuration file:
<pages buffer="true" />
8 Setting the SmartNavigation property of the page
Setting SmartNavigation to True can noticeably improve the user experience. Enabling this property has little impact on the client or the server, and it intelligently refreshes only the parts of the page that need refreshing.
In most cases you do not set this property in code: when SmartNavigation is set to true in the @ Page directive of the .aspx file, the dynamically generated class sets the property when the page is requested.
When Internet Explorer 5 or a later browser requests the page, smart navigation improves the user's experience of the page by:
eliminating the flicker caused by navigation;
preserving the scroll position when moving from one page to another;
preserving element focus between navigations;
keeping only the last page state in the browser's history.
Smart navigation is best suited to ASP.NET pages that post back frequently but whose content does not change dramatically on each return. Consider this carefully when deciding whether to set the property to true.
Three: C# (or VB) program improvements
1 Use the value type's ToString method
When concatenating strings, it is common to append a number directly with the "+" operator. This is simple and gives the correct result, but because the data types differ, the number must be converted to a reference type through a boxing operation before it can be appended, and boxing has a noticeable cost: a new object is allocated on the managed heap and the original value is copied into it.
Calling the value type's ToString method avoids the boxing operation and thereby improves application performance:
int num = 1;
string str = "go" + num.ToString();
2 Use the StringBuilder class
String objects are immutable: reassigning a string variable essentially creates a new string object and points the variable at it, so even ToString does not help much when a string is modified repeatedly.
When working with strings heavily, it is best to use the StringBuilder class (in the .NET namespace System.Text). Instead of creating new objects, it manipulates the string in place through methods such as Append, Remove, and Insert, and returns the final result through its ToString method.
Its definition and use look like this:
int num = 1;
System.Text.StringBuilder str = new System.Text.StringBuilder();  // create the builder
str.Append(num.ToString());      // append the number num
Response.Write(str.ToString());  // output the result
3 Redirect between pages of the same application with the HttpServerUtility.Transfer method
Use the Server.Transfer syntax in the page to avoid an unnecessary client-side redirect (Response.Redirect).
4 Avoid using ArrayList
Anything added to an ArrayList is stored as System.Object, so value types are boxed on the way in and must be unboxed back to their actual type on the way out. It is better to use a custom, strongly typed collection instead of ArrayList. ASP.NET 2.0 provides a new feature for this, called generics: generic collections are strongly typed, which avoids boxing and unboxing and improves performance.
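A small sketch of the difference; List<int> is the generic, strongly typed replacement for ArrayList:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class GenericsDemo
{
    static void Main()
    {
        // ArrayList boxes every int into object and needs a cast on the way out.
        ArrayList boxed = new ArrayList();
        boxed.Add(1);                   // boxing
        int fromBoxed = (int)boxed[0];  // unboxing

        // List<int> is strongly typed: no boxing, no casts.
        List<int> nums = new List<int> { 1, 2, 3 };
        int sum = 0;
        foreach (int n in nums) sum += n;
        Console.WriteLine(sum);         // prints 6
    }
}
```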
5 Use Hashtable instead of other dictionary collection types
(such as StringDictionary, NameValueCollection, HybridDictionary); you can use a Hashtable even when storing only small amounts of data.
6 Declare constants for strings instead of embedding the characters directly in double quotes ""
Avoid:
MyObject obj = new MyObject();
obj.Status = "ACTIVE";
Recommended:
const string C_STATUS = "ACTIVE";
MyObject obj = new MyObject();
obj.Status = C_STATUS;
7 Do not convert strings with ToUpper() or ToLower() for comparison; use String.Compare instead, which can compare while ignoring case.
Example:
const string C_VALUE = "COMPARE";
if (String.Compare(sVariable, C_VALUE, true) == 0)
{
    Console.Write("Same");
}
You can also use str == String.Empty or str.Length == 0 to test for an empty string (while paying attention to the length of input data to guard against SQL injection attacks).
Comparing a string's Length property with 0 is the fastest method; avoid unnecessary calls to the ToUpper or ToLower methods.
8 For type conversion, Int32.TryParse() is superior to Int32.Parse(), which is superior to Convert.ToInt32()
Suggestion: in .NET 1.1 use Int32.Parse(); in .NET 2.0 use Int32.TryParse().
Because:
Convert.ToInt32 ultimately delegates the parsing work to Int32.Parse;
Int32.Parse ultimately delegates the parsing work to Number.ParseInt32;
Int32.TryParse delegates the parsing work to Number.TryParseInt32.
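A short example of TryParse, which never throws and simply reports failure through its return value:

```csharp
using System;

class ParseDemo
{
    static void Main()
    {
        int value;
        // TryParse returns true and sets value on success...
        bool ok = Int32.TryParse("123", out value);
        Console.WriteLine("{0} {1}", ok, value);   // prints "True 123"

        // ...and returns false (value becomes 0) on bad input, without throwing.
        bool bad = Int32.TryParse("abc", out value);
        Console.WriteLine("{0} {1}", bad, value);  // prints "False 0"
    }
}
```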
9 If you only read data from an XML object, replacing the XmlDocument with a read-only XPathDocument improves performance.
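A minimal sketch of reading with XPathDocument; the XML content here is invented for illustration:

```csharp
using System;
using System.IO;
using System.Xml.XPath;

class XPathDemo
{
    static void Main()
    {
        string xml = "<authors><author>Ann</author><author>Bo</author></authors>";
        // XPathDocument is a fast, read-only in-memory representation of the XML.
        XPathDocument doc = new XPathDocument(new StringReader(xml));
        XPathNavigator nav = doc.CreateNavigator();
        foreach (XPathNavigator node in nav.Select("/authors/author"))
            Console.WriteLine(node.Value);   // prints "Ann" then "Bo"
    }
}
```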
12 When using try...catch...finally, release occupied resources, such as connections and file streams, in the finally block
Otherwise, resources still held after an error is caught can never be freed.
try
{ }
catch
{ }
finally
{
    connection.Close();
}
13 Do not use exceptions to control program flow
Some programmers may use exceptions to implement process control, for example:
try
{
    result = 100 / num;
}
catch
{
    result = 0;
}
But exceptions are very costly to performance, and exception handling should not be used for flow control unless necessary. The code above should be written as:
if (num != 0)
    result = 100 / num;
else
    result = 0;
14 Avoid recursive calls and nested loops; they can severely affect performance. Use them only when you must.
15 Disable VB and JScript dynamic data types
Variable data types should always be declared explicitly, which saves execution time. In the past, one reason developers liked Visual Basic, VBScript, and JScript was precisely that variables did not require explicit type declarations: they could be created simply by use, and conversions from one type to another were performed automatically on assignment. However, this convenience can greatly impair application performance.
For best performance, assign a type when declaring a JScript .NET variable, for example: var a : String;
Four: Using caching
1 Use the output cache to cache data
Caching is one of ASP.NET's most powerful features. Some benchmarks have claimed that ASP.NET applications run several times faster than Sun JSP applications; in fact, the programs measured made heavy use of ASP.NET's caching capabilities.
If your component runs in an ASP.NET application, all you have to do is reference System.Web.dll in your project and then access the cache through the HttpRuntime.Cache property (you can also reach it via Page.Cache or HttpContext.Cache).
There are several rules for caching data. First, data that may be used frequently can be cached. Second, cache data whose access frequency is very high, or data whose access frequency is not high but whose lifetime is long. Third, a frequently overlooked problem: sometimes we cache too much data. Usually, on an x86 machine, trying to cache more than about 800 MB of data will produce an out-of-memory error, so the cache is limited. In other words, you should estimate the size of the cached data set and keep it within limits, or there may be problems. In ASP.NET, an out-of-memory error can be reported if the cache is too large, especially when a large DataSet object is cached.
Here are a few important caching mechanisms you must understand. First, the cache implements a least-recently-used algorithm: when memory runs low, it automatically evicts the least needed cached items. Second is forced eviction by expiration dependencies: the conditions can be time, keys, or files, with time being the most common. ASP.NET 2.0 adds a stronger condition, the database dependency, which forcibly clears the cache when data in the database changes.
There are two points to be aware of when using the ASP.NET caching mechanism. First, do not cache too many items: caching each item has a cost, especially in memory usage, so do not cache items that are easily recalculated or rarely used. Second, do not give cached items too short an expiration: items that expire quickly cause unnecessary turnover in the cache and often lead to extra cleanup work and garbage collection. If you are concerned about this, monitor the Cache Total Turnover Rate performance counter of the ASP.NET Applications performance object; a high turnover rate may indicate a problem (this is also called memory pressure), especially when items are removed before they expire.
Remember:
Do:
cache data that is accessed often and changes rarely;
cache settings or objects used by the entire application, provided they do not change during their lifetime.
Don't:
cache personal information; if you do, others can easily gain access to it;
cache pages that contain time-based values, or viewers will not understand why the time always lags;
cache objects the user can modify at any time, such as a shopping cart.
The ASP.NET 2.0 page output cache directive is:
<%@ OutputCache VaryByParam="classid;page" Duration="3600" %>
This caches the page output generated by the first request and regenerates the page content after 3,600 seconds. The technique is built on the low-level cache API. The page output cache accepts several parameters: the VaryByParam parameter described above indicates which conditions trigger regeneration of the output, and you can also specify that output be cached per HTTP GET or HTTP POST request.
For example, with VaryByParam="classid;page", the output of the request default.aspx?classid=3&page=1 is cached; use None for no parameters. If more than one parameter is passed, then even when the parameter names and values are identical, a different order produces a different cached page: default.aspx?first=1&last=1 and default.aspx?last=1&first=1 have exactly the same parameters, but because of the different order they generate two different cached pages.
Many people do not realize that when page output is cached, ASP.NET also generates a set of HTTP cache headers for downstream cache servers, such as those used by Microsoft Internet Security and Acceleration Server, to speed up the server's response. When the HTTP cache headers are set, the requested content is cached in network resources, and when a client requests the content again it is served directly from the cache rather than from the origin server.
Although using the page output cache does not improve your application's own processing performance, it reduces the number of times cached page content must be generated on the server. Of course, this is limited to pages that anonymous users can access, because once a page is cached, authorization checks can no longer be performed.
2 Fragment caching (caching a portion of a page, such as a user control)
In ASP.NET, in addition to caching at page scope, you can apply the OutputCache directive to a user control to cache the user control itself. Likewise, multiple controls of the same type on one page can have multiple different caches, keyed by parameters, and page caching and fragment caching can be used at the same time.
3) Data caching
The data cache is a powerful and very simple caching mechanism. For each application it can store objects in a cache and retrieve them on any HTTP request, but the cache is private to each application.
The data cache is implemented by the Cache class. A Cache instance is built when the application starts; its lifetime is the application's lifetime, and it is rebuilt when the application restarts. Through the Cache class's methods we can put data objects into the cache and later find and use those objects by key.
The Cache class exposes an interface that controls everything that needs to be cached, including when and how items are cached. You can add a cached object like this:
Cache["keyword"] = value;
and then access the object like this:
value = Cache["keyword"];
Note the difference between Page.Cache and HttpContext.Current.Cache: they refer to the same object. In a page, use Page.Cache; in Global.asax or your own classes, use HttpContext.Current.Cache; in situations where there is no HttpContext, use HttpRuntime.Cache.
Expiration dependency conditions for the data cache
In a sense the Cache is like the Application object: both are public. To strike a balance between caching and data freshness, set the cache expiration policy appropriately as needed.
File dependency:
Cache.Insert("MyData", source, new CacheDependency(Server.MapPath("Authors.xml")));
This code means the cached item MyData stays valid as long as the Authors.xml file is not changed.
Time dependency, set to expire after 1 hour; this is an absolute expiration:
Cache.Insert("MyData", source, null, DateTime.Now.AddHours(1), TimeSpan.Zero);
Relative (sliding) expiration dependency; the cached item expires once it has not been accessed for 20 minutes:
Cache.Insert("MyData", source, null, DateTime.MaxValue, TimeSpan.FromMinutes(20));
An example of absolute expiration (used to hold common, small pieces of data; the value can be any object):
Set:
if (System.Web.HttpContext.Current.Cache["OK"] == null)
    System.Web.HttpContext.Current.Cache.Insert("OK", "data", null, DateTime.Now.AddSeconds(300), System.Web.Caching.Cache.NoSlidingExpiration);
Read:
if (System.Web.HttpContext.Current.Cache["OK"] != null)
    this.Response.Write(Convert.ToString(System.Web.HttpContext.Current.Cache.Get("OK")));
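The set/read pattern above can be wrapped in a small helper. This is a sketch, not a definitive implementation: the Func<object> delegate assumes .NET 3.5 or later (on 2.0, define your own delegate), and the 300-second absolute expiration is just the value used in the example above.

```csharp
using System;
using System.Web;
using System.Web.Caching;

static class CacheHelper
{
    // Return the cached value under key, or compute it, cache it with a
    // 300-second absolute expiration, and return it.
    public static object GetOrAdd(string key, Func<object> loadData)
    {
        object value = HttpRuntime.Cache[key];
        if (value == null)
        {
            value = loadData();   // caller-supplied loader (hypothetical)
            HttpRuntime.Cache.Insert(key, value, null,
                DateTime.Now.AddSeconds(300), Cache.NoSlidingExpiration);
        }
        return value;
    }
}
```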
Finally, note:
Do not use caching while debugging a Web Form; otherwise your changes to the page will not show up until the cache expires. The right approach is to add the caching directives to the pages, user controls, and objects that need them at the very end, when you build the deployment and setup project and create the installation package; at that point you can publish your product to the server.
Although the cache API is designed to hold data for a period of time, per-request caching holds content only for the duration of a single request. If a piece of data is accessed at high frequency within a request but only needs to be fetched, applied, modified, or updated once, it is a candidate for per-request caching. Let's give an example to illustrate.
In the CS forum application, the server controls on each page require the custom data that determines their skin, to decide which style sheet and other personalized items to use. Some of this data may be stored long term; the rest, such as the control's skin data, needs to be fetched only once and can then be reused for the remainder of the request.
To implement per-request caching, use ASP.NET's HttpContext class. An instance of HttpContext is created for every request and is accessible anywhere during the request through the HttpContext.Current property. The HttpContext class has an Items collection property, and objects and data are added to this collection during the request. Just as you use the Cache to store high-frequency, long-lived data, you can use HttpContext.Items to cache the basic data that each request uses. The logic behind it is simple: we add a value to HttpContext.Items and then read it back from there.