Discussion on the Design of High-Performance, Highly Concurrent ASP.NET Websites

In my opinion, improving the performance of an application (whether an Internet application or an enterprise application) always comes down to one core concern: I/O. Modern CPUs are already very fast, and normally written code operating on in-memory data rarely stresses them much; performance is usually limited by I/O instead. Because my team and I have worked together for a long time, a simple mention of "I/O" between us covers many things: disk I/O, network I/O, memory I/O, and the I/O handling of various other devices. Our team's approach is to look for efficiency gains in every one of these I/O paths.

Below, working from the back end forward, I will explain our team's experience and understanding of improving I/O handling.

1. Database
The database is the most obvious consumer of disk I/O and the first place to improve data performance. Well-written SQL statements reduce table scans (which are obviously I/O operations); well-designed indexes improve I/O efficiency; storing unchanging historical data in separate tables reduces the data each query must touch; redundant fields on tables reduce join-related reads and writes; and distributing data tables across different disks spreads the I/O load. Other techniques, such as query caching and connection pooling, follow the same principle.

In short, reducing unnecessary traffic between the database and the disk improves database efficiency as much as possible.

2. Data Cache
Memory I/O is naturally much faster than disk I/O; data caching exists to reduce disk operations, or at least to reduce the slower database operations. For cached page result data, we use two cache zones: one in memory and one on file.
For the memory cache we use HttpRuntime.Cache directly. In this cache area we place a signature together with its data (usually the data the page needs, generally in JSON format). For the expiration policy, we naturally choose Cache.NoAbsoluteExpiration.
When data must be removed from the memory cache, we give expired items further processing: we keep a set in the cache containing the signatures of the removed items, and the corresponding data is written to a file on disk.
When a user requests data, we first check whether its signature is in the normal cache. If not, we check whether it is in the expired set; if it is, we read the disk file instead of the database (at least saving the database overhead). Only if neither holds do we query the database.
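The lookup order above can be sketched as follows. This is a minimal, self-contained illustration of the idea, not the site's actual code: plain dictionaries stand in for HttpRuntime.Cache (the hot tier) and for the JSON files on disk (the cold tier), and the `TieredCache` class and its method names are my own invention.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch of the two-tier cache lookup described above.
// Assumption: dictionaries stand in for HttpRuntime.Cache and the disk files.
public static class TieredCache
{
    static readonly Dictionary<string, string> hot = new Dictionary<string, string>();
    static readonly Dictionary<string, string> cold = new Dictionary<string, string>();

    public static void Put(string signature, string json)
    {
        hot[signature] = json;
    }

    // Simulates expiry: the item leaves memory, its signature joins the
    // expired set, and its JSON is written out to the cold tier.
    public static void Expire(string signature)
    {
        string json;
        if (hot.TryGetValue(signature, out json))
        {
            hot.Remove(signature);
            cold[signature] = json;
        }
    }

    public static string Lookup(string signature, Func<string> queryDatabase)
    {
        string json;
        if (hot.TryGetValue(signature, out json))
            return json;                  // 1) hit in the normal cache
        if (cold.TryGetValue(signature, out json))
            return json;                  // 2) expired: read the disk file, skip the database
        json = queryDatabase();           // 3) last resort: query the database
        hot[signature] = json;
        return json;
    }
}
```

A hit in the cold tier still costs a disk read, but it saves the heavier database round trip, which is exactly the point of the expired zone described above.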

3. Collection-Handling Code
Whether in page JavaScript or in server-side Java or C#, operations on collections and arrays are certainly the most frequent in today's business code. Paying attention to a few small optimizations here can also improve performance:
    int[] arr = { 1, 3, 6, 7, 3, 6, 7, 3, 5 };
    for (int i = 0, max = arr.Length; i < max; i++)
    {
        System.Console.WriteLine(arr[i]);
    }

As with many such techniques, for high-frequency collection operations it helps to avoid re-evaluating the collection's Length/Count repeatedly; of course, this is only safe when elements are not added to or removed from the collection inside the loop. Prefer arrays first, generic collections second, and ArrayList as the last resort. The reason for this ordering is, again, to reduce I/O overhead.
Many other code-level details can improve efficiency as well, such as an understanding of how strings behave.
(MVP Lin Yongjian reminded me that I did not express this clearly. My point is: my tests show that repeatedly evaluating Count in the loop condition is slower; with a for loop it is better to compute the count first; and arrays should be used wherever possible, because an array is already sized at initialization and is strongly typed. I hope that expresses it correctly.)
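As a concrete illustration of the notes above, here is a small sketch (the `LoopStyles` class and its method names are mine, not from the original): the hoisted loop evaluates Count once, and the ArrayList variant shows the boxing and casting that make it the last choice.

```csharp
using System.Collections;
using System.Collections.Generic;

public static class LoopStyles
{
    // Re-evaluates list.Count in the loop condition on every iteration.
    public static int SumNaive(List<int> list)
    {
        int sum = 0;
        for (int i = 0; i < list.Count; i++)
        {
            sum += list[i];
        }
        return sum;
    }

    // Hoists the count once, as recommended above; safe only when the
    // collection is not added to or removed from inside the loop.
    public static int SumHoisted(List<int> list)
    {
        int sum = 0;
        for (int i = 0, max = list.Count; i < max; i++)
        {
            sum += list[i];
        }
        return sum;
    }

    // ArrayList stores object, so each int is boxed on Add and must be
    // cast back on read -- the reason it ranks last above.
    public static int SumArrayList(ArrayList list)
    {
        int sum = 0;
        for (int i = 0, max = list.Count; i < max; i++)
        {
            sum += (int)list[i];
        }
        return sum;
    }
}
```

All three methods compute the same result; they differ only in how much work the loop condition and element access do per iteration.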

4. Network Transmission
The back-end data must ultimately be transmitted to the browser, so reducing the bytes sent over the network is also key to throughput; put simply, this is optimizing network I/O. Reduce ViewState in WebForms, switch to MVC instead of WebForms, or use HttpContext directly to control all state information. We use .ashx handlers, opening a different handler for each service, to improve performance. An .ashx handler skips the WebForms page lifecycle: it fires no series of page events and manages no control state (loading and parsing ViewState, restoring and updating control values, saving ViewState). It returns the operation result directly, consumes fewer server resources, and the return format is flexible. We use .ashx to great effect on document-centric websites.
In addition, .ashx handlers keep developers well isolated from one another, since each service lives in its own handler.
Beyond what the code transmits, handling a page's images, CSS, and JS files sensibly to reduce HTTP requests also improves network I/O efficiency: for example, merging images and compressing JS and CSS. Such simple measures do not change much individually, but anything that reduces pressure on the server under concurrency is worthwhile.
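A generic handler of the kind described above might look like the following sketch. This is an illustration under assumptions, not the article's actual code: the file name, class name, and JSON shape are invented, and it targets classic ASP.NET (System.Web), where an .ashx handler bypasses the WebForms page lifecycle and ViewState entirely.

```csharp
<%@ WebHandler Language="C#" Class="DataHandler" %>

using System.Web;

// Minimal .ashx generic handler: no page lifecycle, no control tree,
// no ViewState -- it writes the operation result straight to the response.
public class DataHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/json";
        string id = context.Request.QueryString["id"];
        // In a real site this value would come from the cache/database layers above.
        context.Response.Write("{\"id\":\"" + HttpUtility.JavaScriptStringEncode(id) + "\"}");
    }

    // The handler holds no per-request state, so one instance can be reused.
    public bool IsReusable { get { return true; } }
}
```

Because the handler does none of the WebForms bookkeeping, each request pays only for the work it actually asked for, which is the performance argument made above.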

5. Page Rendering and Experience
Optimize the page's HTML structure. To speed up rendering, you sometimes need not fully comply with W3C recommendations: reduce div nesting and use fixed widths. Attention to JavaScript details also improves the experience. My tests in Chrome show that in many cases the network is much faster than rendering, so improving page processing is very noticeable to individual users.

6. Data Submission
Where reliability allows, consider asynchronous patterns or multiple threads. Both database submission and Web Service access can use the asynchronous model, provided reliability is ensured.
Page Ajax is also an asynchronous method, and JS file loading can be asynchronous as well.
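To make the asynchronous submission idea concrete, here is a sketch using the Task-based pattern. This is an assumption on my part: the original, from the WebForms era, would more likely have used the older Begin/End asynchronous pattern, and `SaveToDatabaseAsync` is a hypothetical stand-in for a real asynchronous driver call such as SqlCommand's ExecuteNonQueryAsync.

```csharp
using System.Threading.Tasks;

public static class AsyncSubmit
{
    // Submits each row without blocking the calling thread during I/O waits.
    public static async Task<int> SubmitAsync(int[] rows)
    {
        int written = 0;
        foreach (int row in rows)
        {
            written += await SaveToDatabaseAsync(row);
        }
        return written;
    }

    // Hypothetical stand-in for an asynchronous database call;
    // here it simply reports one row written.
    static Task<int> SaveToDatabaseAsync(int row)
    {
        return Task.FromResult(1);
    }
}
```

The benefit is not that the work finishes sooner, but that the request thread is free to serve other users while the database I/O is in flight, which matters under concurrency.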

7. Locks. I was too tired to write this section down. MVP Lin Yongjian also proposed using NoSQL. Yes, that works, but I have not used it enough to explain it well.

Original article: http://blog.csdn.net/shyleoking/article/details/7277898

 
