First, return multiple result sets
Examine your database-access code for code paths that make multiple round trips to the database. Every round trip lowers the number of requests per second your application can serve. By returning multiple result sets in a single database request, you can cut the time spent communicating with the database, make your system more scalable, and reduce the work the database server must do to answer requests.
If you are using dynamic SQL statements to return multiple datasets, stored procedures are a better choice. Whether business logic should be written into stored procedures is somewhat controversial, but I think that putting logic in a stored procedure to limit the size of the returned result set is a good thing: it reduces network traffic and avoids filtering the data at the logical layer.
Use the ExecuteReader method of the SqlCommand object to return strongly typed business objects, then call the NextResult method to move the pointer from one result set to the next, for example to fill several strongly typed ArrayList collections. Returning only the data you actually need from the database greatly reduces the memory your server consumes.
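A minimal sketch of reading two result sets from a single round trip; the stored procedure name GetForumsAndThreads and the single-column result layout are assumptions:

using System.Collections;
using System.Data;
using System.Data.SqlClient;

public static void LoadForumsAndThreads(string connString)
{
    using (SqlConnection conn = new SqlConnection(connString))
    using (SqlCommand cmd = new SqlCommand("GetForumsAndThreads", conn))  // hypothetical proc
    {
        cmd.CommandType = CommandType.StoredProcedure;
        conn.Open();
        using (SqlDataReader dr = cmd.ExecuteReader())
        {
            ArrayList forums = new ArrayList();
            while (dr.Read())
                forums.Add(dr.GetString(0));      // first result set

            dr.NextResult();                      // advance the pointer to the next result set

            ArrayList threads = new ArrayList();
            while (dr.Read())
                threads.Add(dr.GetString(0));     // second result set
        }
    }
}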
Second, data paging
ASP.NET's DataGrid has a very useful feature: paging. When paging is enabled, the DataGrid displays one page of data at a time, and a paging navigation bar lets the user choose which page to browse.
But it has a small drawback: you must bind all of the data to the DataGrid. That is, your data layer must return all of the rows, and the DataGrid then filters out what the current page needs and discards the rest. If a 10,000-record result set is paged through a DataGrid displaying only 25 rows per page, 9,975 rows are thrown away on every request. Returning such a large dataset on every request has a significant impact on application performance.
A good solution is to write a paging stored procedure, such as one over the Orders table in the Northwind database. You pass in just two parameters, the current page number and the number of records per page, and the stored procedure returns the corresponding results.
On the server side, write a paging-specific control to handle the paging of the data, and have the stored procedure return two result sets in a single call: the total number of records and the requested page of data.
The total record count returned depends on the query being executed; for example, a WHERE clause can limit the size of the result set. Because the paging UI must compute the total number of pages from the record count of the result set, you must return the count of the filtered result set. For example, if there are 1 million records in total and a WHERE condition filters them down to 1,000, the stored procedure's paging logic should know to return only the records that actually need to be displayed.
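As an illustration, the calling side might look like the following sketch; the stored procedure name GetOrdersPage, its @PageIndex/@PageSize parameters, and the assumption that the first result set holds the total record count are all hypothetical:

using System.Data;
using System.Data.SqlClient;

public static DataTable GetOrdersPage(string connString, int pageIndex, int pageSize,
                                      out int totalRecords)
{
    using (SqlConnection conn = new SqlConnection(connString))
    using (SqlCommand cmd = new SqlCommand("GetOrdersPage", conn))   // hypothetical paging proc
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@PageIndex", pageIndex);
        cmd.Parameters.AddWithValue("@PageSize", pageSize);
        conn.Open();
        using (SqlDataReader dr = cmd.ExecuteReader())
        {
            dr.Read();
            totalRecords = dr.GetInt32(0);        // first result set: total record count
            dr.NextResult();
            DataTable page = new DataTable();
            page.Load(dr);                        // second result set: one page of rows
            return page;
        }
    }
}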
Third, connection pooling
Setting up a TCP connection between your application and the database is expensive (and time-consuming), so Microsoft's designers made database connections reusable through connection pooling. Rather than opening a new TCP connection for every request, a new connection is created only when no valid connection is available in the pool. When a connection is closed, it is returned to the pool, where it keeps its link to the database open, reducing the number of TCP connections that have to be established.
Of course, you should watch out for connections you forget to close; close every connection as soon as you are done with it. And let me emphasize: no matter what anyone says about the GC (garbage collector) in the .NET Framework, you must always close your connections explicitly, by calling the Close or Dispose method of the Connection object, when you are finished with them. Do not expect the CLR to close the connection at a time you predict; the CLR does eventually destroy the object and close the connection, but you can never be sure when that will happen.
To get the most out of connection pooling, there are two rules. First, open the connection, process the data, then close the connection. Even if that means opening and closing the connection several times per request, it is better than keeping one connection open and passing it from method to method. Second, use the same connection string (and the same user identity if you use integrated authentication). If you do not use the same connection string, for example a connection string customized for each logged-on user, you will not get the benefit of connection pooling. And if you use integrated authentication with a large number of users, the pool cannot be exploited effectively either. The .NET CLR provides data performance counters that are very useful for tracking the performance characteristics of a program, including connection pool usage.
Whenever your application connects to a resource on another machine, such as a database, you should focus on the time spent connecting to the resource, the time spent sending and receiving data, and the number of round trips. Optimizing every process hop in your application is the starting point for improving its performance.
The application layer contains the logic that connects to the data layer and turns data into meaningful class instances and business processes. In Community Server, for example, this is where Forums and Threads collections are assembled and business logic such as authorization is applied; more importantly, the caching logic is done here.
Fourth, the ASP.NET Cache API
The very first thing to do before writing application code is to structure the application layer to make maximum use of ASP.NET's caching features.
If your component runs inside an ASP.NET application, you only need to reference System.Web.dll from your project. The cache can then be accessed through the HttpRuntime.Cache property (also reachable via Page.Cache or HttpContext.Cache).
There are several rules for caching data. First, if data may be used repeatedly, it is a candidate for caching. Second, data that is accessed very frequently, or accessed less often but with a long lifetime, is best cached. The third is an often overlooked problem: sometimes we cache too much data. Typically, on an x86 machine, caching more than about 800 MB of data will produce an out-of-memory error, so the cache is a limited resource. In other words, you should estimate the size of the cached set and keep it bounded, or it may cause problems. In ASP.NET, an out-of-memory error can also be raised when the cache grows too large, especially when large DataSet objects are cached.
Here are a few important caching mechanisms you must understand. First, the cache implements a least-recently-used algorithm, allowing it to automatically evict less-used items when memory runs low. Second, expiration dependencies can force items out of the cache; a dependency can be a time, a key, or a file, with time being the most commonly used. ASP.NET 2.0 adds a stronger condition, database dependency: when the data in the database changes, the cached item is forcibly evicted.
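For example, a minimal sketch of the Cache API with a 60-second absolute expiration; the "Products" key and the LoadProductsFromDb helper are hypothetical:

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    public static DataTable GetProducts()
    {
        DataTable products = HttpRuntime.Cache["Products"] as DataTable;
        if (products == null)
        {
            products = LoadProductsFromDb();               // hypothetical expensive call
            HttpRuntime.Cache.Insert("Products", products,
                null,                                      // no CacheDependency
                DateTime.UtcNow.AddSeconds(60),            // absolute expiration
                Cache.NoSlidingExpiration);
        }
        return products;
    }

    private static DataTable LoadProductsFromDb()
    {
        return new DataTable();                            // stand-in for a real query
    }
}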
Fifth, per-request caching
Earlier in the article we saw that a small improvement to a frequently executed code path can add up to a large overall performance gain; per-request caching is exactly that kind of improvement.
Whereas the Cache API is designed to hold data for some longer period, per-request caching simply holds data for the lifetime of a single request. If a code path is hit frequently within a request, but its data only needs to be fetched, applied, modified, or updated once, you can cache that data for the duration of the request. An example will make this clear.
In the Community Server forum application, the server controls on every page need custom data to determine which skin, which style sheet, and which other personalized items to use. Some of that data can be cached for a long time, but some of it, such as the controls' skin data, needs to be fetched only once per request and can then be reused for the rest of that request.
To implement per-request caching, use the ASP.NET HttpContext class. An HttpContext instance is created for every request and is accessible from anywhere during the request through the HttpContext.Current property. The HttpContext class has an Items collection property; objects and data added to this collection are cached only for the duration of the request. Just as you use the Cache to store frequently accessed data, you can use HttpContext.Items to store the data each request uses repeatedly. The logic behind it is simple: add an item to HttpContext.Items, then read it back from there.
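A minimal sketch, assuming a hypothetical LoadSkin helper and "Skin" key:

using System.Web;

public static class SkinHelper
{
    public static string GetSkin()
    {
        HttpContext ctx = HttpContext.Current;
        string skin = ctx.Items["Skin"] as string;
        if (skin == null)
        {
            skin = LoadSkin();          // executed at most once per request
            ctx.Items["Skin"] = skin;
        }
        return skin;
    }

    private static string LoadSkin()
    {
        return "default";               // stand-in for the real skin lookup
    }
}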
Sixth, background processing
With the methods above, your application should already be running quite fast, right? At some point, though, a single request may have to perform a very time-consuming task, such as sending an email or checking the correctness of submitted data.
When we turned ASP.NET Forums 1.0 into CS (Community Server), we found that submitting a new post was very slow. Every time a post was added, the application first checked whether it was a duplicate, then ran it through the "BadWord" filter, checked the attached images, indexed the post, added it to the appropriate queues, validated its attachments, and finally sent an email to the subscribers' mailboxes. Clearly, that is a lot of work.
It turned out that most of the time was spent indexing and sending emails. Indexing a post is a time-consuming operation, and sending email to subscribers requires connecting to an SMTP service and sending one message per subscriber; as the number of subscribers grows, sending takes ever longer.
Indexing and sending mail do not need to be triggered on every request. Ideally, we wanted to process these operations in batches, sending only 25 emails at a time or sending all new emails every 5 minutes. We decided to reuse the same code as our database cache prototype, but that failed, so we had to go back to Visual Studio .NET 2005.
We found the Timer class in the System.Threading namespace. It is very useful, yet little known, and even fewer web developers know of it. Once you create an instance of the class, the Timer invokes the specified callback, on a thread from the thread pool, at a configurable interval. This means your ASP.NET application can run code even when no request is in flight, which is exactly what background processing needs. You can then have the indexing and email work run in the background instead of on every request.
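A minimal sketch of this approach with System.Threading.Timer; the 60-second interval and the ProcessQueues callback are illustrative. Start() would typically be called once, for example from Application_Start:

using System;
using System.Threading;

public static class BackgroundWork
{
    private static Timer timer;   // keep a reference so the timer is not collected

    public static void Start()
    {
        // Fires first after 60 s, then every 60 s, on a thread-pool thread.
        timer = new Timer(ProcessQueues, null,
                          TimeSpan.FromSeconds(60), TimeSpan.FromSeconds(60));
    }

    private static void ProcessQueues(object state)
    {
        // e.g. index queued posts and send pending notification emails in batches
    }
}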
There are two problems with this background-processing technique. First, when your application domain is unloaded, the Timer instance stops running; that is, the callback method is no longer invoked. Also, because the CLR already has many threads running in every process, it may be hard for the Timer to get a thread to execute on, or the callback may run late. The ASP.NET layer should use this technique sparingly, to keep the number of threads in the process small, or claim only a small share of them. Of course, if you have a lot of asynchronous work, it may be the only option.
Seventh, page output caching and proxy servers
ASP.NET is your presentation layer (or should be); it consists of pages, user controls, server controls (HttpHandlers and HttpModules), and the content they generate. If you have an ASP.NET page that outputs HTML, XML, images, or other data, and the code generates the same output for every request, you should seriously consider caching the page output.
Simply add a line like the following to your page to achieve this:
<%@ OutputCache Duration="60" VaryByParam="none" %>
This line caches the page output generated by the first request and regenerates the page content after 60 seconds. The technique is actually implemented on top of the low-level Cache API. Several parameters can be configured for the page output cache, such as the VaryByParam attribute just mentioned, which indicates when separate cached outputs should be produced; you can also have the output cached separately for HTTP GET and HTTP POST requests. For example, with VaryByParam="Report", the outputs of the requests default.aspx?Report=1 and default.aspx?Report=2 are cached separately. Multiple parameter names can be listed, separated by semicolons.
Many people are unaware that when page output caching is used, ASP.NET also generates a set of HTTP headers that downstream caching servers, such as those used by Microsoft Internet Security and Acceleration Server, can use to speed up responses. When these HTTP cache headers are set, documents can be cached on those network resources, so that when a client requests the content again, it no longer has to be fetched from the origin server; it is served directly from the cache.
Using page output caching does not make generating a page any cheaper the first time, but it reduces how often the server must regenerate the cached page content. Of course, this is limited to pages that anonymous users can access, because once a page is cached, authorization checks can no longer be performed on it.
Eighth, kernel caching in IIS 6.0
If your application isn't running on IIS 6.0 (Windows Server 2003), you are giving up some of the best ways to improve application performance. In the seventh method, I discussed using page output caching to improve performance. In IIS 5.0, when a request arrives at IIS, IIS forwards it to ASP.NET; with page output caching applied, the HttpHandler in ASP.NET receives the request, extracts the content from the cache, and returns it.
If you are using IIS 6.0, it has a very nice feature called kernel caching, and you don't have to change a single line of code in your ASP.NET program. When ASP.NET receives a request for a cached response, the IIS kernel cache gets a copy of it. From then on, when a request arrives from the network, the kernel receives it first; if the response is cached there, the cached data is returned directly and the request is finished without ever leaving the kernel. This means that when you use IIS kernel caching for page output, you gain incredible performance. During the development of ASP.NET in Visual Studio 2005, I was the program manager specifically responsible for ASP.NET performance; my developers used this method, and after going through all the daily report data, I found that kernel-mode caching always produced the fastest results. A common characteristic was a network saturated with requests and responses while IIS consumed only about 5% of the CPU. That is amazing. There are many reasons to use IIS 6.0, but kernel caching is the best one.
Ninth, compressing data with gzip
Unless your CPU usage is already too high, this is a technique worth using to improve server performance. Compressing data with gzip reduces the amount of data you send over the network, speeds up page delivery, and reduces network traffic. How well the data compresses depends on what you send, and on whether the client's browser supports it (IIS sends gzip-compressed data only to clients that can decode it, such as IE 6.0 and Firefox). The benefit is that your server can respond to more requests per second: because you reduce the amount of data per response, you can serve more responses.
The good news is that gzip compression is built into IIS 6.0, and it is better than the gzip in IIS 5.0. Unfortunately, you cannot enable it from the IIS 6.0 properties dialog. The IIS team built an excellent compression feature, but forgot to give administrators an easy way to switch it on from the administration window. To enable gzip compression, you have to modify the settings in IIS 6.0's XML configuration file.
In addition to this article, read Brad Wilson's article on IIS6 compression: http://www.dotnetdevs.com/articles/IIS6compression.aspx. There is also an article introducing the basics of ASPX compression, "Enable ASPX Compression in IIS". Note, however, that dynamic compression and kernel caching are mutually exclusive in IIS 6.0.
Tenth, server control ViewState
ViewState is an ASP.NET feature that stores state values for the generated page in a hidden field. When the page is posted back to the server, the server parses, validates, and applies the ViewState data in order to restore the page's control tree. ViewState is very useful: it can persist client-side state without using cookies or server memory. Most server controls use ViewState to persist the state of the elements the user interacts with on the page, for example to hold the current page number of a paginated display.
Using ViewState has some negative effects. First, it enlarges both the server's response and the posted-back request. Second, serializing and deserializing the data adds time to every postback. Finally, it consumes more memory on the server.
Many server controls are inclined to use ViewState even when there is no need for it, the DataGrid among them. ViewState is enabled by default; if you do not want to use it, you can turn it off at the control or page level. On a control, just set the EnableViewState property to false; at the page level, this extends the setting to the entire page: <%@ Page EnableViewState="false" %>. If the page never posts back, or if the controls are simply re-rendered from scratch on every request, you should turn ViewState off at the page level.
1. The C# language
1.1 Garbage collection
Garbage collection frees us from managing object lifetimes by hand and improves program robustness, but its side effect is that code can become careless about object creation.
1.1.1 Avoid unnecessary object creation
Because garbage collection is costly, a basic principle of C# development is to avoid unnecessary object creation. Some common scenarios are listed below.
1.1.1.1 Avoid creating objects in loops ★
If an object does not change state from one iteration to the next, creating it repeatedly inside the loop wastes performance. The efficient approach is to move the object creation outside the loop.
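A small sketch of the idea, using Regex purely as an illustration:

using System.Text.RegularExpressions;

public static int CountNumericLines(string[] lines)
{
    // Inefficient: new Regex(@"^\d+$") constructed inside the loop on every iteration.
    // Efficient: the object's state never changes, so create it once, outside the loop.
    Regex numeric = new Regex(@"^\d+$");
    int count = 0;
    for (int i = 0; i < lines.Length; i++)
    {
        if (numeric.IsMatch(lines[i]))
            count++;
    }
    return count;
}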
1.1.1.2 Create objects only in the logical branch that needs them
If an object is used only in certain logical branches, it should be created only inside those branches.
1.1.1.3 Use constants to avoid creating objects
Code such as new Decimal(0) should not appear in a program; it causes small objects to be created and collected frequently. The correct approach is to use the Decimal.Zero constant. When designing your own classes, you can borrow this technique and apply it in similar scenarios.
1.1.1.4 Use StringBuilder for string concatenation
1.1.2 Do not use empty destructors ★
If a class has a destructor, a reference to each new object is added to the Finalize queue when it is created, so that the Finalize method can still be invoked once the object becomes unreachable. While the garbage collector runs, a low-priority thread processes this queue. Objects without destructors incur none of these costs. If the destructor is empty, all of this overhead buys nothing and only degrades performance! Therefore, do not use empty destructors.
In practice, many empty destructors once contained processing code that was later commented out or deleted for various reasons, leaving only an empty shell. When that happens, take care to comment out or delete the destructor itself as well.
1.1.3 Implement the IDisposable interface
Garbage collection really only manages managed memory. For other unmanaged resources, such as Windows GDI handles or database connections, releasing them in a destructor is problematic. The reason is that garbage collection is driven by memory pressure: the database connections may be on the verge of exhaustion, but if memory is still plentiful, the garbage collector will not run.
C#'s IDisposable interface is the mechanism for releasing resources explicitly, and the using statement makes it convenient (the compiler automatically generates a try...finally block and calls the Dispose method in the finally block). Objects that hold unmanaged resources should implement the IDisposable interface, so that the resources are released promptly as soon as the object leaves the scope of its using statement. This is extremely useful for building robust, high-performance programs!
To guard against callers forgetting to call the Dispose method, a destructor is generally provided as well; both call into a common method that performs the actual resource release. The Dispose method should also call System.GC.SuppressFinalize(this), telling the garbage collector that the object no longer needs finalization.
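A minimal sketch of this pattern; the unmanaged handle and the release logic are illustrative:

using System;

public class ResourceHolder : IDisposable
{
    private IntPtr handle;     // illustrative unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        System.GC.SuppressFinalize(this);   // finalization is no longer needed
    }

    ~ResourceHolder()                       // safety net if Dispose is never called
    {
        Dispose(false);
    }

    private void Dispose(bool disposing)
    {
        if (!disposed)
        {
            // release the unmanaged resource here;
            // also release managed resources when disposing == true
            disposed = true;
        }
    }
}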
1.2 String operations
1.2.1 Use StringBuilder for string concatenation
String is an immutable class; concatenating strings with the + operator creates a new string each time. If the number of concatenations is not fixed, for example inside a loop, you should use the StringBuilder class for the concatenation work. StringBuilder keeps an internal string buffer, so appends do not allocate a new string every time; a new buffer is requested only when the accumulated string outgrows the current one. Typical code looks like this:

StringBuilder sb = new StringBuilder(256);
for (int i = 0; i < results.Count; i++)
{
    sb.Append(results[i]);
}
If the number of concatenations is fixed and small, connect the strings directly with +, keeping the program simple and readable. The compiler already optimizes this case by calling the String.Concat overload with the matching number of parameters. For example, string str = str1 + str2 + str3 + str4; is compiled into String.Concat(str1, str2, str3, str4). Inside that method the total length is computed and the result is allocated only once, not three times as one might imagine. As a rule of thumb, switch to StringBuilder once the number of concatenations reaches about 10.
One detail deserves attention: the default capacity of StringBuilder's internal buffer is 16, which is too small for most StringBuilder scenarios and forces the buffer to be reallocated. A common empirical initial value is 256. Of course, if you can estimate the length of the resulting string, you should set the initial capacity accordingly; new StringBuilder(256) sets the initial buffer length to 256.
1.2.2 Avoid unnecessary calls to ToUpper or ToLower methods
String is an immutable class, so invoking the ToUpper or ToLower method creates a new string. Called frequently, this creates string objects over and over, which violates the basic principle stated earlier: avoid unnecessary object creation.
For example, the bool.Parse method is already case-insensitive, so there is no need to call ToLower on its argument before invoking it.
Another very common scenario is string comparison. The efficient approach is to use the Compare method, which can compare case-insensitively without creating any new strings.
A similar situation arises with Hashtable: when you cannot guarantee that the casing of the supplied key matches what is stored, keys are often coerced to uppercase or lowercase. In fact, Hashtable has a constructor overload that fully supports case-insensitive keys: new Hashtable(StringComparer.OrdinalIgnoreCase).
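Two small sketches of these points; the strings involved are illustrative:

using System;
using System.Collections;

public static class CaseInsensitiveDemo
{
    public static void Run(string input)
    {
        // compare without creating new strings
        bool isAdmin = string.Compare(input, "admin", StringComparison.OrdinalIgnoreCase) == 0;

        // case-insensitive keys without ToUpper/ToLower
        Hashtable table = new Hashtable(StringComparer.OrdinalIgnoreCase);
        table["UserName"] = 1;
        object hit = table["USERNAME"];   // found: keys are matched ignoring case
    }
}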
1.2.3 The fastest way to test for an empty string
Comparing the string's Length property with 0 is the fastest method: if (str.Length == 0)
Next comes comparing with the String.Empty constant or an empty string literal: if (str == String.Empty) or if (str == "")
Note: at compile time, C# places all string constants declared in the assembly into the intern pool, so identical constants are not allocated repeatedly.
1.3 Multithreading
1.3.1 Thread Synchronization
Thread synchronization is the first thing to consider when writing multithreaded programs. C# provides the Monitor, Mutex, AutoResetEvent, and ManualResetEvent classes for synchronization, which wrap the underlying Win32 synchronization mechanisms such as critical sections, mutexes, and event objects. C# also provides the convenient lock statement, for which the compiler automatically generates the appropriate Monitor.Enter and Monitor.Exit calls.
1.3.1.1 Synchronization granularity
The synchronization granularity can be a whole method or just a section of code inside it. Marking a method with the MethodImplOptions.Synchronized attribute synchronizes the entire method. For example:

[MethodImpl(MethodImplOptions.Synchronized)]
public static SerialManager GetInstance()
{
    if (instance == null)
    {
        instance = new SerialManager();
    }
    return instance;
}
In general, you should narrow the scope of synchronization so the system achieves better performance. Simply marking an entire method as synchronized is not a good idea unless you are certain that every statement in the method needs synchronization protection.
1.3.1.2 Synchronization strategy
When synchronizing with lock, the lock object can be a Type, this, or a member variable constructed specifically for synchronization.
Avoid locking a Type ★
Locking a Type object affects all instances of that type in the same process, and the effect may even reach across AppDomains; this can cause not only serious performance problems but also unexpected behavior, and it is a very bad habit. Even for a type containing only static methods, an extra static member variable should be constructed to serve as the lock object.
Avoid locking this
Locking this affects all methods of that instance. Suppose object obj has two methods, A and B, and A uses lock(this) to protect a section of its code. Now, for some reason, B also starts using lock(this), possibly for a completely different purpose. A is then interfered with, and its behavior may become unpredictable. As a good habit, it is recommended that you avoid using lock(this).
Use a member variable that is specifically constructed for synchronization purposes
This is the recommended practice: simply allocate a new object member that exists solely for synchronization purposes.
If several methods need synchronization for different purposes, create a separate synchronization member variable for each purpose.
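A minimal sketch of dedicated lock objects; the class and its state are illustrative:

using System.Collections;

public class OrderCache
{
    // dedicated lock objects: never lock(this) or lock(typeof(OrderCache))
    private static readonly object tableLock = new object();   // guards the static table
    private readonly object listLock = new object();           // guards the instance list

    private static Hashtable table = new Hashtable();
    private ArrayList list = new ArrayList();

    public static void AddGlobal(object key, object value)
    {
        lock (tableLock)
        {
            table[key] = value;
        }
    }

    public void AddLocal(object item)
    {
        lock (listLock)
        {
            list.Add(item);
        }
    }
}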
1.3.1.4 Synchronizing collections
C# provides two convenient synchronization mechanisms for the various collection types: the Synchronized wrapper and the SyncRoot property.
// Create and initialize a new ArrayList
ArrayList myAL = new ArrayList();
myAL.Add("The");
myAL.Add("quick");
myAL.Add("brown");
myAL.Add("fox");
// Create a synchronized wrapper around the ArrayList
ArrayList mySyncdAL = ArrayList.Synchronized(myAL);
Calling the Synchronized method returns a collection wrapper that guarantees all operations on it are thread-safe. But consider the statement mySyncdAL[0] = mySyncdAL[0] + "TEST": the read and the write take two separate locks, so generally speaking this is not very efficient. It is recommended that you use the SyncRoot property for finer-grained control.
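For example, a sketch in which a single lock covers the whole read-modify-write:

ArrayList mySyncdAL = ArrayList.Synchronized(new ArrayList());
mySyncdAL.Add("value");
lock (mySyncdAL.SyncRoot)
{
    // the read and the write happen under one lock instead of two
    mySyncdAL[0] = (string)mySyncdAL[0] + "TEST";
}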
1.3.2 Use ThreadStatic instead of named data slots ★
The Thread.GetData and Thread.SetData methods that access named data slots require thread synchronization involving two locks: the LocalDataStore.SetData method takes a lock at the AppDomain level, and the ThreadNative.GetDomainLocalStore method takes a lock at the process level. If low-level shared services use named data slots, this can create serious scalability problems for the system.
The way around this problem is to use a ThreadStatic variable. For example:

public sealed class InvokeContext
{
    [ThreadStatic]
    private static InvokeContext current;

    private Hashtable maps = new Hashtable();
}
1.3.3 Multithreaded programming tips
1.3.3.1 Use double-check locking to create objects

internal IDictionary KeyTable
{
    get
    {
        if (this._keyTable == null)
        {
            lock (base._lock)
            {
                if (this._keyTable == null)
                {
                    this._keyTable = new Hashtable();
                }
            }
        }
        return this._keyTable;
    }
}
Creating a singleton is a common programming scenario. Usually the object is created right after the lock statement, but that alone is not safe enough: before lock acquires the object, multiple threads may already have passed the first if test. Without the second if, the singleton would be created repeatedly and the new instance would replace the old one. If the singleton holds data that must not be corrupted or replaced, consider using the double-check technique.
1.4.1 Avoid pointless variable initialization

The CLR guarantees that all objects are initialized before they are accessed, by zeroing out the allocated memory. Therefore, you do not need to re-initialize member variables to 0, false, or null.
Note that local variables in a method are allocated from the stack, not the heap, so C# does not zero them; using an unassigned local variable is reported as an error at compile time. Do not let this lead you to explicitly assign all class member variables as well; the two mechanisms are completely different!
1.4.2 ValueType and ReferenceType
1.4.2.1 Pass value type parameters by reference
Value types are allocated on the call stack and reference types on the managed heap. When a value type is used as a method parameter, the parameter value is copied by default, which cancels out the efficiency advantage of value-type allocation. As a basic technique, passing value type parameters by reference can improve performance.
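A small sketch; the Vector3 struct is illustrative:

public struct Vector3
{
    public double X, Y, Z;
}

public static class Geometry
{
    // ref avoids copying the 24-byte struct on every call
    public static double LengthSquared(ref Vector3 v)
    {
        return v.X * v.X + v.Y * v.Y + v.Z * v.Z;
    }
}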
1.4.2.2 Provide an Equals method for value types
The default ValueType.Equals implementation in .NET uses reflection: it relies on reflection to fetch all the member variable values to compare, which is extremely inefficient. If you write a value object whose Equals method will be used (for example, the value object is placed in a Hashtable), you should override the Equals method, for instance on a struct such as Rectangle.
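A minimal sketch of such a Rectangle; the fields are illustrative, and GetHashCode is overridden as well to stay consistent with Equals:

public struct Rectangle
{
    public int Width;     // illustrative fields
    public int Height;

    public override bool Equals(object obj)
    {
        if (!(obj is Rectangle)) return false;
        Rectangle other = (Rectangle)obj;
        return Width == other.Width && Height == other.Height;
    }

    public override int GetHashCode()
    {
        // keep GetHashCode consistent with Equals
        return Width ^ (Height << 16);
    }
}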
1.4.2.3 Avoid boxing and unboxing

C# converts automatically between value types and reference types through boxing and unboxing. Boxing requires allocating an object from the heap and copying the value into it, which carries a definite performance cost. If this happens in a loop, or in a frequently called low-level method, beware of the cumulative effect.
A common scenario arises with collection types. For example:

ArrayList al = new ArrayList();
for (int i = 0; i < 1000; i++)
{
    al.Add(i);          // implicitly boxed because Add() takes an object
}
int f = (int)al[0];     // the element is unboxed
1.5 Exception Handling
Exceptions are another typical feature of modern languages. Compared with the traditional approach of checking error codes, exceptions are mandatory (they do not depend on your remembering to write the checking code), strongly typed, and carry rich exception information (such as the call stack).
1.5.1 Do not swallow exceptions ★
The most important principle of exception handling is: do not swallow exceptions. This has nothing to do with performance, but it is vital for writing robust programs whose errors are easy to find. Put another way, the principle is: do not catch exceptions you cannot handle.
Swallowing an exception is a terrible habit because it erases the clues to the problem; once an error occurs, locating it becomes very difficult. Besides this way of completely swallowing exceptions, writing the exception information to a log file but doing nothing more about it is equally inappropriate.
1.5.2 Do not swallow exception information ★
Some code throws a new exception but swallows the information carried by the original one.
Exposing detailed information about an exception is the programmer's responsibility. If you cannot wrap the original exception message with richer, more user-friendly content, then exposing the original exception information directly is far better. Never swallow an exception.
1.5.3 Avoid throwing unnecessary exceptions
Throwing and catching exceptions are comparatively expensive operations; where possible, you should refine the program logic so that unnecessary exceptions are not thrown. One tendency related to this is using exceptions to control processing logic; although that may produce a more elegant solution in a very few cases, it should generally be avoided.
1.5.4 Avoid unnecessary rethrows
Wrapping an exception and rethrowing it is reasonable when the new exception adds more information. But a lot of code catches an exception and rethrows it without doing anything at all; that needlessly adds the cost of one catch and one throw and is harmful to performance.
1.6 Reflection
Reflection is a very fundamental technique that converts compile-time static binding into dynamic binding deferred to run time. In many scenarios (especially when designing class frameworks), it enables flexible, extensible architectures. The problem is that dynamic binding suffers significant performance loss compared with static binding.
1.6.1 Categories of reflection
Type comparison: type checks, mainly the is and typeof operators and GetType calls on object instances. This is the lightest-weight category, and its optimization can be ignored. Note that the typeof operator is faster than calling GetType on an object instance; prefer the typeof operator whenever possible.
Member enumeration: used to access reflection-related metadata, for example Assembly.GetModule, Module.GetType, and calls on Type objects such as IsInterface, IsPublic, GetMethod, GetMethods, GetProperty, GetProperties, GetConstructor, and the like. Although the CLR caches the metadata, some of these calls are still quite expensive; on the other hand, such methods are usually not called very often, so the overall performance impact is moderate.
Member invocation: dynamically creating objects and dynamically invoking their methods, chiefly Activator.CreateInstance, Type.InvokeMember, and so on.
1.6.2 Creating objects dynamically
C# mainly supports five ways of creating objects dynamically:
1. Type.InvokeMember
2. ConstructorInfo.Invoke
3. Activator.CreateInstance(Type)
4. Activator.CreateInstance(assemblyName, typeName)
5. Assembly.CreateInstance(typeName)
The fastest is way 3; the gap with direct creation is within one order of magnitude, roughly a factor of 7. The other ways are at least 40 times slower, and the slowest is way 4, about three orders of magnitude slower.
1.6.3 Dynamic Method Invocation
Method invocation comes in two kinds: early binding at compile time and dynamic binding at run time, known as early-bound invocation and late-bound invocation. Early-bound invocation can be subdivided into direct call, interface call, and delegate call. Late-bound invocation consists mainly of Type.InvokeMember and MethodBase.Invoke; dynamic invocation can also be implemented by generating IL code with LCG (Lightweight Code Generation).
The test results show that, compared with a direct call, Type.InvokeMember is close to three orders of magnitude slower; MethodBase.Invoke, although three times faster than Type.InvokeMember, is still about 270 times slower than a direct call. Dynamic method invocation is clearly very slow. Our advice: do not use it unless a specific need demands it!
1.6.4 Recommended usage principles
Patterns
1. If possible, avoid reflection and dynamic binding
2. Use interface calls to turn dynamic binding into early binding
3. Use Activator.CreateInstance(Type) to create objects dynamically
4. Use the typeof operator instead of GetType calls
Anti-patterns
1. Using Assembly.CreateInstance(type.FullName) when the Type is already in hand
1.7 Basic Code Tips
This section describes some basic coding techniques that can improve performance in certain scenarios. For code on the critical path, this kind of optimization makes sense; for ordinary code it may not be necessary, but developing these habits is worthwhile.
1.7.1 Writing loops
The loop condition can be recorded in a local variable. Local variables are often optimized by the compiler into registers, so they are faster than ordinary variables allocated from the heap or stack; if the condition accesses a property with complex computation, the improvement is even more pronounced. For example: for (int i = 0, j = collection.GetIndexOf(item); i < j; i++)
Note that this style is pointless for the Count property of the CLR collection classes, because the compiler already has special optimizations for that pattern.
1.7.2 Assembling strings
Assembling the string with a leading separator and stripping it afterwards is inefficient. In some methods the loop runs only once in most cases, which makes the waste of this style even more pronounced:

public static string ToString(MetadataKey entityKey)
{
    string str = "";
    object[] vals = entityKey.Values;
    for (int i = 0; i < vals.Length; i++)
    {
        str += "," + vals[i].ToString();
    }
    return str == "" ? "" : str.Remove(0, 1);
}
The following style is recommended instead:

if (str.Length == 0)
    str = vals[i].ToString();
else
    str += "," + vals[i].ToString();
In fact, this style is both natural and efficient, and there is no need for the roundabout Remove call at all.
1.7.3 Avoid retrieving a collection element twice
When getting a collection element, you sometimes need to check first whether the element exists. The usual practice is to call the ContainsKey (or Contains) method first and then fetch the element, which reads perfectly logically.
If efficiency matters, however, you can fetch the object directly and then test whether it is null to decide whether the element exists. For a Hashtable this saves one GetHashCode call and N Equals comparisons.
For example:

public IData GetItemByID(Guid id)
{
    IData data1 = null;
    if (this.idTable.ContainsKey(id.ToString()))
    {
        data1 = this.idTable[id.ToString()] as IData;
    }
    return data1;
}
In fact, a single line of code does the whole job: return this.idTable[id.ToString()] as IData;
1.7.4 Avoid converting types twice
Consider the following example, which contains two type conversions:

if (obj is SomeType)
{
    SomeType st = (SomeType)obj;
    st.SomeTypeMethod();
}
The more efficient approach is as follows:

SomeType st = obj as SomeType;
if (st != null)
{
    st.SomeTypeMethod();
}
1.8 Hashtable
Hashtable is a heavily used fundamental collection type. Two factors affect its efficiency: one is the hash code (the GetHashCode method), the other is equality comparison (the Equals method). A Hashtable first uses the key's hash code to distribute objects into different buckets, then uses the key's Equals method within that particular bucket to find the entry.
A good hash code is the first factor; ideally, every distinct key has a distinct hash code. The Equals method matters too, because the hash needs to be computed only once per lookup, while the search within a bucket may call Equals several times. Practical experience shows that when using a Hashtable, the Equals calls can consume more than half of the total cost.
The System.Object class provides a default GetHashCode implementation that uses the object's address in memory as the hash code. We once encountered an example of caching objects with a Hashtable: each time, an ExpressionList object was constructed from the incoming OQL expression, and the QueryCompiler method compiled it into a CompiledQuery object; the ExpressionList and CompiledQuery objects were stored in the Hashtable as a key/value pair. The ExpressionList object did not override GetHashCode, nor did its superclass ArrayList, so in the end the System.Object implementation was used. Since a fresh ExpressionList was constructed every time, its hash code differed every time, and this CompiledQueryCache had no effect at all. The small omission caused a serious performance problem: OQL expressions were parsed over and over, the CompiledQueryCache kept growing, and the server leaked memory! The simplest fix was to provide a constant hash code, say always 0. That makes all objects pile into the same bucket, which is inefficient, but at least it solved the memory leak. The final fix, of course, was an efficient GetHashCode implementation.
The point of introducing these Hashtable mechanics is mainly this: if you use a Hashtable, check whether your key objects provide suitable GetHashCode and Equals implementations. Otherwise you risk either inefficiency or behavior that does not match expectations.
2. ADO.NET
2.1 Guiding principles for using ADO.NET
1. Design the data access layer based on how the data will be used
2. Cache data to avoid unnecessary operations
3. Connect using a service account
4. Acquire resources only when needed and release them as soon as possible
5. Close resources that can be closed
6. Reduce round trips
7. Return only the data you need
8. Choose the appropriate transaction type
9. Use stored procedures
2.2 Connection
A database connection is a shared resource, and opening and closing one is expensive. ADO.NET enables connection pooling by default; closing a connection does not really close the physical connection, it merely puts the connection back into the pool. Because the shared connections in the pool are always a limited resource, failing to close connections as soon as you are done with them can block the threads waiting for a connection and hurt the performance of the whole system.
2.2.1 Open and close a connection in a method
This principle has several layers of meaning:
1. The main goal is to acquire resources only when necessary and release them as soon as possible
2. Do not open the connection in a class's constructor and release it in the destructor, because that makes you depend on garbage collection, which is driven only by memory pressure, so the time of collection is indeterminate
3. Do not pass connections between methods; doing so often keeps the connection open longer than necessary
Here is the harm of passing connections between methods: in one stress test, we encountered a test case that, as the user count was increased, used up all the connections in the pool long before the other cases did. Analysis showed that method A passed an open connection to method B, while B also called a method C that opened and closed a connection by itself. So during A's entire run, at least two connections were needed for it to succeed, and one of them was held open for a very long time; the connection pool became strained, affecting the scalability of the whole system!
2.2.2 Explicitly close a connection
A connection object will eventually be closed during garbage collection, but relying on garbage collection is a bad strategy. It is recommended that you close connections explicitly with a using statement, as in the following example:

using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();
}   // Dispose is automatically called on the conn variable here
2.2.3 Ensure connection pooling is enabled
ADO.NET maintains a separate pool for each distinct connection string, so you should make sure the connection string does not carry user-specific information. Also be aware that the connection string is case sensitive.
2.2.4 Do not cache connections
For example, caching connections in Session or Application. With connection pooling enabled, this practice makes no sense.
2.3 Command
2.3.1 Use ExecuteScalar and ExecuteNonQuery
If you want to return a single value such as Count(*), Sum(Price), or Avg(Quantity), you can use the ExecuteScalar method. ExecuteScalar returns the value of the first column of the first row, delivering the result set as a scalar. Because it does the job in a single step, ExecuteScalar not only simplifies the code but also improves performance.
Use ExecuteNonQuery for SQL statements that return no rows, such as data modifications (INSERT, UPDATE, or DELETE) or statements that return only output parameters or return values. This avoids the unnecessary work of creating an empty DataReader.
2.3.2 Use Prepare
When the same SQL statement needs to be executed repeatedly, consider using the Prepare method to improve efficiency. Note that if it runs only once or twice, preparing is entirely unnecessary. For example:

cmd.CommandText = "INSERT INTO Table1 (Col1, Col2) VALUES (@val1, @val2)";
A SQL statement must be compiled into an execution plan before it executes. If bind variables are used, subsequent executions of the statement can reuse the plan. If parameter values are concatenated directly into the SQL text, the constantly changing values make the plan hard to reuse; for instance, if the parameter values above were written straight into the INSERT statement, four calls would require four compiled execution plans.
To avoid this performance loss, use bind variables, as in the sketch below.
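A minimal sketch of the pattern, assuming the Table1 schema above with an int first column and an NVarChar(50) second column:

using System.Data;
using System.Data.SqlClient;

public static void BulkInsert(SqlConnection conn)
{
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandText = "INSERT INTO Table1 (Col1, Col2) VALUES (@val1, @val2)";
    cmd.Parameters.Add("@val1", SqlDbType.Int);
    cmd.Parameters.Add("@val2", SqlDbType.NVarChar, 50);
    cmd.Prepare();                                // compile the execution plan once

    for (int i = 0; i < 100; i++)
    {
        cmd.Parameters["@val1"].Value = i;
        cmd.Parameters["@val2"].Value = "row " + i;
        cmd.ExecuteNonQuery();                    // reuses the prepared plan
    }
}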
2.4 DataReader
A DataReader is best suited to read-only, forward-only data access. Unlike a DataSet, it does not hold all the data in memory: each time its read-ahead buffer is exhausted by successive Read calls, it transfers another buffer-sized block from the data source. In addition, a DataReader keeps its connection open, whereas a DataSet works disconnected.
2.4.1 Explicitly close DataReader
Like a connection, a DataReader needs to be closed explicitly. Also, if the connection associated with the DataReader exists only to serve it, consider using the Command object's ExecuteReader(CommandBehavior.CloseConnection) method. This guarantees that when the DataReader is closed, the connection is automatically closed with it.
2.4.2 Access columns by index number instead of by name
Accessing a column of a row by index number is slightly faster than accessing it by name. If the access happens frequently, for example in a loop, this optimization is worth considering. For example:

cmd.CommandText = "SELECT Col1, Col2 FROM Table1";
SqlDataReader dr = cmd.ExecuteReader();
int col1 = dr.GetOrdinal("Col1");
int col2 = dr.GetOrdinal("Col2");
while (dr.Read())
{
    Console.WriteLine(dr[col1] + "_" + dr[col2]);
}
2.4.3 Use typed accessor methods
When reading a column of a row, typed accessors such as GetString and GetInt32, which specify the type explicitly, are slightly more efficient than the general GetValue method because no type conversion is needed.
2.4.4 Use multiple result sets
Some scenarios can return multiple result sets in one go to reduce the number of network round trips and improve efficiency. For example:

cmd.CommandText = "StoredProcedureName";   // the stored procedure returns multiple result sets
SqlDataReader dr = cmd.ExecuteReader();
while (dr.Read())
{
    // read the first result set
}
dr.NextResult();
while (dr.Read())
{
    // read the second result set
}
2.5 DataSet
2.5.1 Use indexes to speed up row lookups
If you need to look up rows repeatedly, it is recommended that you add an index. There are two ways to do this:
1. Set the DataTable's PrimaryKey
Suitable for finding rows by primary key. Note that you must then call the DataTable.Rows.Find method; the commonly used Select method cannot take advantage of the index.
2. Use a DataView
Suitable for finding rows by non-primary-key columns. Create a DataView over the DataTable and build the index by specifying the Sort order, then locate rows with Find or FindRows.
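A minimal sketch of both techniques; the "ID" and "Name" column names are assumptions:

using System.Data;

public static void FindRows(DataTable orders)
{
    // 1. Primary key + Rows.Find (assumes an "ID" column)
    orders.PrimaryKey = new DataColumn[] { orders.Columns["ID"] };
    DataRow byId = orders.Rows.Find(42);

    // 2. DataView sorted on a non-key column + FindRows (assumes a "Name" column)
    DataView byName = new DataView(orders);
    byName.Sort = "Name";
    DataRowView[] matches = byName.FindRows("Alice");
}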
3. ASP.NET
3.1 Reduce round trips (Reduce Round Trips)
Use the following approaches to reduce round trips between the web server and the browser:
1. Enable browser caching
If the rendered content is static or changes only over a long period, enable browser caching to avoid issuing redundant HTTP requests.
2. Buffer the page output
If possible, buffer the page output and transfer it to the client in one go after processing finishes; this avoids the multiple network interactions caused by frequently sending small pieces of content. Because this way the client sees nothing until the page processing is complete, consider calling the Response.Flush method if the page is large. It forces output of the content buffered so far; you should use a reasonable algorithm to control how often Response.Flush is called.
3. Use Server.Transfer to redirect requests
Using the Server.Transfer method to redirect requests is superior to the Response.Redirect method. The reason is that Response.Redirect sends a response header back to the browser with the redirect URL in it, after which the browser issues a new request to that URL, whereas Server.Transfer is a straightforward server-side call with none of that overhead!
Note that Server.Transfer has limitations: first, it skips security checks; second, it only works for transfers between pages within the same web application.
3.2 Avoid blocking and long-running work
If you need to run a blocking or long-running operation, consider an asynchronous invocation mechanism so that the web server can go on processing other requests.
1. Invoke web services and remote objects asynchronously
Whenever possible, avoid calling web services and remote objects synchronously while processing a request, because the call occupies a worker thread from the ASP.NET thread pool, which directly reduces the web server's ability to respond to other requests.
2. Consider adding the OneWay attribute to web methods or remote-object methods that do not need to return a value
This fire-and-forget pattern lets the web server return immediately after the call. Decide whether to use it according to the actual situation.
3. Use work queues
Submit the job to a work queue on the server; the client polls for the job's result by sending further requests.
3.3 Using caching
Caching largely determines the ultimate performance of an ASP.NET application. ASP.NET supports page output caching and partial-page (fragment) caching, and provides the Cache API for applications to cache their own data. Whether to use caching can be decided by the following points:
1. Identify data that is expensive to create and access
2. Evaluate how volatile the data you want to cache is
3. Evaluate how often the data is used
4. Separate the volatile data from the invariant data in what you cache, and cache only the invariant part
5. Choose an appropriate caching mechanism (besides the ASP.NET Cache, Application state and Session state can also serve as caches)
3.4 Threading
1. Avoid creating threads while processing a request
Creating a thread during the execution of a request is an expensive operation that can seriously affect the performance of the web server. If subsequent work must be done by another thread, use the thread pool to create and manage the threads.
2. Do not rely on thread data slots or thread-static variables
Because the thread executing the request is a worker thread from the ASP.NET thread pool, two requests from the same client are not necessarily handled by the same thread.
3. Avoid blocking the request-processing thread
Refer to the section "Avoid blocking and long-running work" above.
4. Avoid asynchronous calls
This is similar to point 1: an asynchronous call causes a new thread to be created, increasing the server's load. So do not make asynchronous calls unless concurrent work genuinely has to run.
3.5 System resources
1. Consider implementing resource pools to improve performance
2. Explicitly call Dispose or Close to release system resources
3. Do not cache resources obtained from a resource pool, and do not hold them for long periods
4. Acquire as late as possible and release as early as possible
3.6 Page Processing
1. Reduce the page size as much as possible
This includes shortening control names and CSS class names, removing unnecessary blank lines and spaces, and disabling ViewState where it is not needed
2. Enable buffered page output
If the buffering mechanism has been turned off, you can re-enable it in the following ways.
To enable page output buffering programmatically:
Response.BufferOutput = true;
To enable page output buffering with the @Page directive:
<%@ Page Buffer="true" %>
Or use the <pages> node of the web.config or machine.config configuration file:
<pages buffer="true" ...>
3. Use Page.IsPostBack to optimize page output
4. Separate the differing content of a page to improve caching efficiency and reduce rendering time
5. Optimize complex and costly loops
6. Make sensible use of the client's computing resources by moving some operations to the client
3.7 ViewState
ViewState is a mechanism ASP.NET uses to track the state of server-side controls between page postbacks.
1. Turn off ViewState
If you do not need to track page state, for example when the page never posts back, no server-side control events need to be handled, or the control content is recomputed on every page refresh, then you do not need ViewState to record page state. You can set the EnableViewState property on individual WebControls, or set it at the page level:
<%@ Page EnableViewState="false" %>
2. Initialize control properties at the right point in time
ASP.NET controls do not track changes made while the constructor and initialization phases execute; modifications made after the initialization phase are tracked and eventually recorded in the page's __VIEWSTATE hidden field. Therefore, choosing a sensible execution point at which to initialize control properties can effectively reduce page size.
3. Choose carefully what goes into ViewState
Content placed in ViewState is serialized and deserialized. ASP.NET optimizes the serialization of basic types such as String, Integer, and Boolean; Array, ArrayList, and Hashtable are also stored efficiently when they hold those basic types, but other types need to provide a type converter (TypeConverter), failing which the costly binary serializer is used.
4. JScript
4.1 Basic principles of JScript performance optimization
1. Execute as few statements as possible. For an interpreted language, after all, every step of execution requires an interaction with the interpretation engine.
2. Use the language's built-in features as much as possible, such as string concatenation.
3. Use the APIs provided by the system as much as possible, because they are compiled binary code and execute very efficiently.
4. Write the most correct code you can. Fault tolerance comes at a performance cost.
4.2 Optimizing the JScript language itself
4.2.1 Variables
1. Use local variables as much as possible.
Global variables are actually members of the global object, while local variables are defined on the stack and are searched first, so they perform better than global variables.
2. Define a variable and assign it in a single statement where possible.
3. Omit unnecessary variable definitions.
If a variable's definition can be replaced by a constant, use the constant directly.
4. Use object literal syntax to assign objects.
Object literal assignment syntax is more efficient when building complex objects.
For example, the following code:
var car = new Object();
car.make = "Honda";
car.model = "Civic";
car.transmission = "Manual";
car.miles = 100000;
car.condition = "needs work";
can be replaced with:
var car = {
    make: "Honda",
    model: "Civic",
    transmission: "Manual",
    miles: 100000,
    condition: "needs work"
};
4.2.2 Object Caching
1. Cache the intermediate results of object lookups.
Because JavaScript is interpreted, evaluating a.b.c.d.e requires at least four lookup operations: first a, then b on a, then c on b, and so on. If such an expression is evaluated repeatedly, you should write it out as few times as possible, caching the intermediate result in a local variable and querying that instead.
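A small sketch of this caching, using document.body.style as the repeated lookup chain:
// Repeating the full chain pays for every lookup step each time:
document.body.style.color = "black";
document.body.style.backgroundColor = "white";
// Caching the intermediate result pays for the chain only once:
var style = document.body.style;
style.color = "black";
style.backgroundColor = "white";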
2. Cache objects that take a long time to create.
Complex custom objects and Date and RegExp objects can consume considerable time during construction. If such an object can be reused, cache it.
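For instance, a hypothetical validator can build its RegExp once and reuse it:
var emailPattern = /^[^@\s]+@[^@\s]+$/;    // constructed once and cached
function isEmail(s) {
    return emailPattern.test(s);           // reuses the cached RegExp
}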
4.2.3 String manipulation
1. Use "+ +" to append strings, use "+" to connect strings.
If you are appending a string, it is best to use the S+=ANOTHERSTR action instead of using S=S+ANOTHERSTR.
If you want to connect multiple strings, you should use "+", such as:
S+=a;
S+=b;
S+=c;
should be written
S+=a + B + C;
2. To concatenate a large number of strings, use the array's join method.
If you are collecting many strings, it is best to cache them in a JavaScript array and then concatenate them with the join method, as follows:
var count = 100;                       // illustrative loop bound
var buf = [];
for (var i = 0; i < count; i++) {
    buf.push(i.toString());
}
var all = buf.join("");
4.2.4 Type Conversions
1. Use Math.floor() or Math.round() to convert floating-point numbers to integers.
Converting floating-point numbers to integers is error-prone, and many people like to use parseInt() for it. In fact, parseInt() is meant for converting a string to a number, not for float-to-integer conversion, so Math.floor() or Math.round() should be used instead.
The object-lookup problem discussed earlier does not apply here: Math is a built-in object, so looking up and calling Math.floor() costs very little, and it is the fastest approach.
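A small sketch of the difference:
var price = 10.75;
var down = Math.floor(price);          // 10 – rounds down
var near = Math.round(price);          // 11 – rounds to the nearest integer
var fromStr = parseInt("42", 10);      // parseInt is for strings, not floats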
2. For custom objects, it is recommended to define and explicitly call a toString() method for type conversion.
If a custom object defines a toString() method for type conversion, call it explicitly. The internal conversion machinery tries every other possibility before falling back to the object's toString() method, so calling the method directly is more efficient.
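A hypothetical Point object illustrating the explicit call:
function Point(x, y) {
    this.x = x;
    this.y = y;
}
Point.prototype.toString = function () {
    return "(" + this.x + ", " + this.y + ")";
};
var p = new Point(1, 2);
var s1 = p.toString();   // explicit call – preferred
var s2 = "" + p;         // implicit conversion – tries other paths first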
4.2.5 Loop optimization
1. Use the for(in) loop as little as possible.
JavaScript offers three loop forms: for(;;), while(), and for(in). Of the three, for(in) is by far the least efficient because it must look up hash keys on each iteration, so use it as little as possible.
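For array traversal, an indexed for loop avoids that cost (a simple sketch):
var arr = ["a", "b", "c"];
// for (var key in arr) { ... }       // pays a hash-key lookup per iteration
for (var i = 0, len = arr.length; i < len; i++) {
    var item = arr[i];                // plain indexed access
}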
2. Calculate the length of the collection in advance.
For example, replace: for (var i = 0; i < collection.length; i++)
with: for (var i = 0, len = collection.length; i < len; i++)
The effect is better, especially in large loops.
3. Minimize the work done inside the loop.
Every operation inside a loop is magnified by the number of iterations, so even a small improvement inside a large loop yields a considerable overall performance gain.
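One common move, sketched here, is hoisting loop-invariant work out of the body:
var items = ["a", "b", "c"];
var prefix = new Date().getFullYear() + ": ";   // computed once, not per iteration
for (var i = 0, len = items.length; i < len; i++) {
    items[i] = prefix + items[i];
}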
4. Use loops instead of recursion.
Recursion is less efficient than looping; its advantage is that it reads more naturally. So, where it does not hurt the maintainability of the code, prefer a loop over recursion.
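Factorial is a simple illustration of the trade-off:
// Recursive form – natural, but each call adds call-stack overhead
function factorialRec(n) {
    return n <= 1 ? 1 : n * factorialRec(n - 1);
}
// Iterative form – preferred for performance
function factorialLoop(n) {
    var result = 1;
    for (var i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}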
4.2.6 Other aspects
1. Use the language's built-in literal syntax as much as possible.
"var arr = [...];" and "var arr = new Array(...);" are equivalent, but the former is faster than the latter. Similarly, "var foo = {};" is faster than "var foo = new Object();", and "var reg = /.../;" is faster than "var reg = new RegExp(...)".
2. Try not to use eval.
Using eval is equivalent to invoking the interpreter at run time to parse and execute the incoming string, which consumes a lot of time.
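Dynamic property access is a common case where eval can be avoided (a sketch):
var obj = { size: 10 };
var name = "size";
// var value = eval("obj." + name);   // parses and runs code at run time
var value = obj[name];                // same result, no interpreter round-trip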
3. Use prototype instead of closure.
Using closures is bad for both performance and memory consumption, and this can become a real problem if they are overused. So try to replace:
this.methodFoo = function ()
with:
MyClass.prototype.methodFoo = function ()
A closure-based method exists on each object instance, whereas a prototype method exists on the class and is shared by all instances of the class.
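A sketch of the two styles on a hypothetical MyClass:
function MyClass(name) {
    this.name = name;
    // closure style: a new function object is created per instance
    // this.methodFoo = function () { return this.name; };
}
// prototype style: one function shared by every instance
MyClass.prototype.methodFoo = function () {
    return this.name;
};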
4. Avoid using the with statement.
The with statement temporarily widens the scope chain used for name lookup, saving typing at the cost of execution time, because every name in the block must be checked against the extra scope before the normal lookup proceeds. So you can change the following code:
with (document.formname)
{
    field1.value = "one";
    field2.value = "two";
}
to:
var form = document.formname;
form.field1.value = "one";
form.field2.value = "two";
4.3 DOM related
4.3.1 Create a DOM node
Compared with using document.write to generate page content, finding a container element (such as a designated div or span) and setting its innerHTML is more efficient.
Setting innerHTML is also more efficient than creating nodes with the createElement method; in fact, setting an element's innerHTML is one of the most efficient ways to create content.
If you must use the createElement method and the document already contains a ready-made template node, use the cloneNode() method instead: after createElement() you have to set each element's attributes one by one, while cloneNode() reduces the number of attribute assignments. Likewise, if you need to create many similar elements, prepare a template node first.
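A sketch of the cloning approach; the element ids are illustrative:
var template = document.getElementById("rowTemplate");  // prepared template node
var list = document.getElementById("list");             // target container
for (var i = 0; i < 50; i++) {
    var row = template.cloneNode(true);  // deep clone keeps the attributes set once
    row.id = "row" + i;
    list.appendChild(row);
}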
4.3.2 Operate on large DOM trees offline
When adding a complex DOM subtree, construct it completely first and attach it to the appropriate node of the DOM tree only when it is finished. This saves interface-refresh time.
Similarly, before editing a complex subtree, detach it from the DOM tree, then add it back when the edits are finished.
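A minimal sketch of offline construction:
var ul = document.createElement("ul");
for (var i = 0; i < 100; i++) {
    var li = document.createElement("li");
    li.appendChild(document.createTextNode("item " + i));
    ul.appendChild(li);                // happens off-document: no reflow yet
}
document.body.appendChild(ul);         // a single attach, one interface refresh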
4.3.3 Object Query
Using the ["] query is faster than. Item (). Call. Item () Adds a call to a query and function.
4.3.4 Timer
If the code needs to run repeatedly, use setInterval rather than setTimeout: with setTimeout, a new timer has to be set up on every invocation.
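A sketch of the two patterns; poll stands in for the repeated work:
function poll() { /* work done on each tick */ }
// setTimeout style: every call must schedule the next timer itself
// function tick() { poll(); setTimeout(tick, 1000); }
// setTimeout(tick, 1000);
// setInterval style: one timer fires repeatedly
setInterval(poll, 1000);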
4.4 Other
1. Reduce file size as much as possible.
Removing irrelevant blank lines, spaces, and comments from a JScript file helps reduce the size of the .js file and shortens download time. (Tools can do this as part of code publishing.)
2. Try not to reference both the JScript and VBScript engines in the same page.
3. Move in-page JScript into a separate .js file.
4. Placing in-page JScript at the bottom of the page helps improve the page's response speed.
5. Use caching to reduce the number of times JScript files are downloaded.
6. When writing the URL of a JScript file in HTML, use consistent capitalization, so the browser can reuse the file already cached under the earlier URL.
7. It is recommended that you check JavaScript code with JSLint. After all, the JScript code that is easiest for the JScript engine to understand also executes most efficiently.