Brief introduction
There are a lot of "secrets" you can dig up that, once found, give a huge boost to your site's performance and scalability! For example, the Membership and Profile providers have some hidden bottlenecks that can be easily resolved, making authentication and authorization faster. The ASP.NET HTTP pipeline can also be trimmed so that unnecessary code no longer runs on every request. Not only that, the ASP.NET worker process can be pushed past its default limits to exert its full power. Caching page fragments on the browser side (not on the server) can significantly cut the download time of repeat visits. Loading the UI on demand can give your site a fast and smooth feel. Finally, a Content Delivery Network (CDN) and correct use of the HTTP cache headers will make your site respond quickly.
In this article, you'll learn about these techniques and how they can give a huge boost to your ASP.NET application's performance and scalability. Here are the techniques discussed next:
- ASP.NET pipeline optimization
- ASP.NET process configuration optimization
- What you must do before an ASP.NET site goes live
- Content Delivery Network (CDN)
- Caching AJAX calls on the browser
- Making the best use of the browser cache
- Loading the required UI incrementally for a fast, smooth experience
- Optimizing the ASP.NET 2.0 Profile provider
- How to query the ASP.NET 2.0 Membership tables without bringing down the site
- Preventing denial-of-service (DoS) attacks
All of the above techniques can be applied to any ASP.NET site, especially those that use the ASP.NET 2.0 Membership and Profile providers.
ASP.NET Pipeline Optimization
ASP.NET registers several default HttpModules in the request pipeline, and they participate in every request. For example, SessionStateModule processes each request, parses the session cookie, and loads the appropriate session into HttpContext. Not all of these modules are always needed. For example, if you do not use forms authentication, you do not need the FormsAuthentication module; if you do not authenticate users against Windows accounts, you do not need the WindowsAuthentication module. These modules simply sit in the pipeline, executing code that is not required, on each and every request.
These default modules are defined in the machine.config file (located in the $WINDIR$\Microsoft.NET\Framework\$VERSION$\CONFIG directory).
You can remove these default modules from your web application by adding <remove> nodes in its web.config file.
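A sketch of such a web.config section, using the module names registered in the default machine.config; remove only the entries your site genuinely does not use:

```xml
<httpModules>
  <!-- These modules run on every request; removing unused ones shortens the pipeline -->
  <remove name="Session" />
  <remove name="WindowsAuthentication" />
  <remove name="PassportAuthentication" />
  <remove name="AnonymousIdentification" />
  <remove name="UrlAuthorization" />
  <remove name="FileAuthorization" />
</httpModules>
```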
The configuration above is ideal for sites that use database-backed forms authentication and do not need any session support; for such sites, all of the modules listed can be removed safely.
ASP.NET Process Configuration Optimization
The ASP.NET process model configuration defines process-level properties such as how many threads ASP.NET uses, how long it blocks a thread before timing out, how many requests may wait for I/O work to complete, and so on. These defaults are too restrictive in many cases. Hardware has become quite cheap, and dual-core servers with gigabytes of RAM are now a very common choice. So the process model can be configured to let ASP.NET threads use more system resources and provide better scalability per server.
A typical ASP.NET installation creates a machine.config entry like the following:
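The stock entry simply delegates everything to automatic tuning:

```xml
<processModel autoConfig="true" />
```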
You need to replace this automatic configuration with explicit values for the individual attributes in order to customize how the ASP.NET worker process behaves. For example:
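A sketch of a hand-tuned processModel; the values are the ones discussed below and are illustrative, so tune them to your own hardware and workload. Note that autoConfig must be turned off for the explicit thread settings to take effect:

```xml
<!-- Illustrative values, not a prescription -->
<processModel
    enable="true"
    autoConfig="false"
    maxWorkerThreads="100"
    maxIoThreads="100"
    minWorkerThreads="40"
    minIoThreads="30"
    memoryLimit="60" />
```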
Apart from the values discussed below, all other attributes keep their defaults:
- maxWorkerThreads: the default is 20 per CPU, so on a dual-core computer ASP.NET gets 40 worker threads. This means ASP.NET on a dual-core machine can process 40 requests concurrently at a time. I increased it to 100, giving ASP.NET more threads per process. You can raise this value if your application is not CPU-intensive and can easily handle more requests, especially if it makes many web service calls or downloads/uploads a lot of data without straining the CPU. When ASP.NET runs out of the worker threads it is allowed, it stops processing further incoming requests; requests are placed in a queue and kept waiting until a worker thread is freed. This typically happens when your site starts receiving more hits than you anticipated. In that case, if your CPU is mostly idle, increase the number of worker threads per (ASP.NET) process.
- maxIoThreads: the default is 20 per CPU, so on a dual-core computer 40 threads are allocated for I/O operations. This means that on a dual-core server, ASP.NET can process 40 I/O requests in parallel at a time. I/O requests include file reads and writes, database operations, HTTP requests generated internally by web service calls, and so on. You can set this to 100 if your server has enough system resources to handle more I/O requests. The efficiency gain is especially noticeable when your application frequently downloads or uploads data or calls many external web services in parallel.
- minWorkerThreads: when the number of free ASP.NET worker threads falls below this value, ASP.NET starts queuing incoming requests. You can therefore set this to a low number to increase the number of concurrent requests that get processed. Do not set it too low, however, because application code may need to do some background work or parallel processing, which requires a certain number of free worker threads.
- minIoThreads: the same as minWorkerThreads, but for I/O threads. This can be set lower than minWorkerThreads because there is no parallel-processing concern on I/O threads.
- memoryLimit: specifies the maximum allowed memory size as a percentage of total system memory. It is the amount of memory the worker process can consume before ASP.NET launches a new process and reassigns the requests being processed. If your web application is the only thing running on a dedicated box and no other process needs the RAM, you can set it to a high value such as 80. However, if you have a leaky application that keeps leaking memory, it is better to set a lower value so that the leaky process is recycled quickly, before the leak becomes unmanageable. This can happen in particular when you use a COM component that leaks memory.
In addition to the processModel node, there is another very important node, system.net, where you can specify the maximum number of outbound requests that can be made to a single IP.
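For example (the address="*" wildcard applies the limit to every destination host):

```xml
<system.net>
  <connectionManagement>
    <!-- Allow up to 100 concurrent outbound connections per IP, instead of the default 2 -->
    <add address="*" maxconnection="100" />
  </connectionManagement>
</system.net>
```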
The default value is 2, which is far too low: it means you cannot keep more than two simultaneous connections from your web application to a given IP. Sites that need to fetch external content are heavily constrained by this default. Here I set it to 100; if your application makes many calls to a particular server, consider setting it even higher.
What You Must Do Before an ASP.NET Site Goes Live
If you are using the ASP.NET 2.0 Membership provider, you should make some adjustments to your web.config:
- Add the applicationName attribute to the profile provider. If you do not set an explicit name, the provider uses a GUID, so you will have one GUID on your local machine and a different one on the production server. If you then copy your local database to production, the records in your local database cannot be reused, and ASP.NET creates a new application on the production server. Add the attribute as shown in the config sketch after this list.
- Whenever a page request completes, the Profile provider automatically saves the profile. This can cause unnecessary updates to your database and carries a significant performance penalty! Turn off automatic save and call Profile.Save() explicitly in your code.
- The Role Manager always queries the database to get the user's roles. This also costs significant performance. You can avoid it by letting the Role Manager cache role information in a cookie. The cookie can grow to nearly 2 KB for users who belong to many roles, but that is not a common scenario, so you can safely store role information in a cookie.
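A combined sketch of the three settings; the provider name, connection string name, and application name are placeholders for your own values:

```xml
<system.web>
  <!-- A fixed applicationName keeps local and production databases interchangeable;
       automaticSaveEnabled="false" stops the per-request profile save -->
  <profile enabled="true" automaticSaveEnabled="false" defaultProvider="SqlProfileProvider">
    <providers>
      <clear />
      <add name="SqlProfileProvider"
           type="System.Web.Profile.SqlProfileProvider"
           connectionStringName="MyConnectionString"
           applicationName="YourApplicationName" />
    </providers>
  </profile>
  <!-- cacheRolesInCookie="true" avoids a role-lookup query on every request -->
  <roleManager enabled="true" cacheRolesInCookie="true" />
</system.web>
```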
The three settings above are primarily for high-traffic sites.
Content Delivery Network (CDN)
Every request from a browser reaches your server through backbone networks that span the world. A request may have to cross countries, continents, and oceans to get to your server, which makes it slow. For example, if you put your server in the USA and someone browses your site from Australia, each request travels from one end of the earth to the other to reach your server and back to the browser. If your site has many static files, such as images, CSS, and JavaScript, downloading them across the world takes a lot of time. If you could set up a server in Australia and redirect Australian users to it, each of those requests would take far less time than a round trip to the United States. Not only would the network latency be lower, the data transfer rate would also be better, so static content would download much faster.
Note: since this topic concerns network planning rather than ASP.NET itself, it is not discussed further here; the original article illustrates it with a diagram (omitted).
Caching AJAX Calls on the Browser
The browser can cache images, JavaScript files, and CSS files on the hard disk. It can also cache XML HTTP calls, provided the call is an HTTP GET. The cache is URL-based: if the same URL is requested again and a copy is cached on the machine, the response is loaded from the cache instead of from the server. Basically, the browser can cache any HTTP GET call and return the cached data keyed on that URL. If you make an XML HTTP call as an HTTP GET and the server returns some special response headers that instruct the browser to cache the response, future calls will return the data directly from the cache, saving the network round-trip latency and the download time.
We cache the user's state so that when the user visits the site again in the following days, the page comes straight from the browser's cache rather than from the server, making the second load very fast. We also cache certain parts of the page depending on the user's actions: when the user performs the same action again, the cached result is loaded directly from the local cache, eliminating the network round trip.
The browser caches an XML HTTP response when you return an Expires header with the response. There are two response headers you need to return in order to have the browser cache the response data:
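Along these lines (the exact date is up to you; anything far in the future works):

```http
HTTP/1.1 200 OK
Expires: Fri, 01 Jan 2030 00:00:00 GMT
Cache-Control: public
```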
This instructs the browser to cache the response data until January 2030. As long as you make the same XML HTTP call with the same parameters, you will get the cached response from your own machine. There are, of course, more sensible ways to control browser caching. For example, there are response headers that instruct the browser to cache for 60 seconds, then contact the server again and fetch fresh data once the 60 seconds have passed; they also prevent proxies from returning a cached response after the browser's local copy has expired.
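Such a response would carry roughly this header; max-age drives the 60-second lifetime, while must-revalidate and proxy-revalidate force a server check once it lapses:

```http
HTTP/1.1 200 OK
Cache-Control: private, must-revalidate, proxy-revalidate, max-age=60
```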
Let's try producing such response headers from an ASP.NET web service call.
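A sketch of such a web service, modeled on the original article's example; class and method names are illustrative, and UseHttpGet = true matters because only GET responses are cached:

```csharp
using System;
using System.Web;
using System.Web.Services;
using System.Web.Script.Services;

[ScriptService]
public class TestService : WebService
{
    [WebMethod]
    [ScriptMethod(UseHttpGet = true)]  // browsers only cache GET responses
    public string CachedGet()
    {
        TimeSpan cacheDuration = TimeSpan.FromMinutes(1);

        // Ask ASP.NET to emit headers for a publicly cacheable, 60-second response
        Context.Response.Cache.SetCacheability(HttpCacheability.Public);
        Context.Response.Cache.SetExpires(DateTime.Now.Add(cacheDuration));
        Context.Response.Cache.SetMaxAge(cacheDuration);
        Context.Response.Cache.AppendCacheExtension("must-revalidate, proxy-revalidate");

        // Return the server time so we can tell cached responses from fresh ones
        return DateTime.Now.ToString();
    }
}
```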
This produces response headers of the following form:
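A representative capture (the date value is illustrative):

```http
HTTP/1.1 200 OK
Cache-Control: private, must-revalidate, proxy-revalidate, max-age=0
Expires: Mon, 28 May 2007 03:51:30 GMT
```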
The Expires header was set correctly, but the deciding factor is Cache-Control: you can see that max-age is set to 0, which prevents the browser from doing any kind of caching. If you actually wanted to prevent caching, this is exactly the Cache-Control header you would set. The result looks as if everything happens in real time, with nothing cached.
The output, like the following, shows there is no caching:
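Something like this (hypothetical timestamps); every call returns a fresh server time:

```
5/28/2007 12:01:50 PM
5/28/2007 12:01:51 PM
5/28/2007 12:01:52 PM
```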
ASP.NET 2.0 has a bug: you cannot change the max-age header. As max-age is stuck at 0, ASP.NET 2.0 sets Cache-Control to private, since max-age = 0 means no caching is wanted. There is therefore no way to make ASP.NET 2.0 return the proper headers for caching the response. The culprit is the ASP.NET AJAX framework: it intercepts calls to web services and sets max-age to 0 by default before making the request.
"Hacker" Time to come! After I decompile the source code for the HttpCachePolicy class (the class that the Context.Response.Cache object belongs to), I find the following:
Somehow, this._maxAge is being set to 0, and the check `if (!this._isMaxAgeSet || (delta < this._maxAge))` prevents it from ever being set to a larger value. We therefore need to bypass the SetMaxAge method and set the value of _maxAge directly. Of course, this requires reflection:
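A sketch of the workaround: it writes the private _maxAge field of HttpCachePolicy directly and then sets the remaining headers normally (note that SetMaxAge is deliberately not called):

```csharp
using System;
using System.Reflection;
using System.Web;
using System.Web.Services;
using System.Web.Script.Services;

[ScriptService]
public class TestService : WebService
{
    [WebMethod]
    [ScriptMethod(UseHttpGet = true)]
    public string CachedGet2()
    {
        TimeSpan cacheDuration = TimeSpan.FromMinutes(1);

        // Bypass SetMaxAge: set the private _maxAge field via reflection
        FieldInfo maxAge = Context.Response.Cache.GetType().GetField(
            "_maxAge", BindingFlags.Instance | BindingFlags.NonPublic);
        maxAge.SetValue(Context.Response.Cache, cacheDuration);

        Context.Response.Cache.SetCacheability(HttpCacheability.Public);
        Context.Response.Cache.SetExpires(DateTime.Now.Add(cacheDuration));
        Context.Response.Cache.AppendCacheExtension("must-revalidate, proxy-revalidate");

        return DateTime.Now.ToString();
    }
}
```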
This returns the following headers:
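A representative capture, with an illustrative Expires date:

```http
HTTP/1.1 200 OK
Cache-Control: public, must-revalidate, proxy-revalidate, max-age=60
Expires: Mon, 28 May 2007 03:53:26 GMT
```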
Now max-age is set to 60, so the browser caches the response for 60 seconds. If you make the same call again within 60 seconds, it returns the same response. Here is a test output that shows the date/time returned from the server:
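Something like this (hypothetical timestamps); the same server time comes back for the whole minute because the responses are served from the browser cache:

```
5/28/2007 12:01:50 PM
5/28/2007 12:01:50 PM
5/28/2007 12:01:50 PM
```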
After one minute, the cache expires and the browser sends a request to the server again. The client-side code is as follows:
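A minimal sketch; TestService is the script proxy that ASP.NET AJAX generates for the web service above, and Sys.Debug.trace is used for output:

```javascript
function testCache() {
    // Repeated calls within 60 seconds are answered from the browser
    // cache without touching the server
    TestService.CachedGet2(function (result) {
        Sys.Debug.trace(result);
    });
}
```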
There is one more problem to solve: in web.config, you will see that ASP.NET AJAX adds the following:
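As the original article reports it, the setting in question is the trust level:

```xml
<system.web>
  <trust level="Medium" />
</system.web>
```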
This prevents us from setting the _maxAge field of the Response object, because doing so requires reflection. So you will have to remove this setting or change the level to Full.
To be continued ...
Original address: http://blog.csdn.net/yanghua_kobe/article/details/6850119
(10 ASP.NET Performance and Scalability Secrets, Part 1)