A note before I begin: if you work at a wealthy Internet company, or you serve wealthy customers and money is no object, please skip this article; it is not for you. What follows is a discussion among programmers whose customers are short on computing resources and short on budget: they cannot simply add bandwidth or add servers, and cannot buy various premium services. Thank you.
On how to improve application performance, whether for Internet applications or enterprise applications, my view has always centered on one core idea: IO processing. Today's CPUs are already very fast, and reasonably written code operating on in-memory data rarely puts serious pressure on the CPU; performance is usually limited by IO instead. Because my team and I have worked together for a long time, a simple mention of "IO" between us covers many things: disk IO, network IO, memory IO, and the IO of various other devices. Our team's habit is to look for efficiency gains in every one of these IO paths.
Below, working from the back end toward the front end, I will walk through our team's experience and understanding of improving IO processing.
1. Database
The database is the most obvious consumer of disk IO. There are many ways to improve its performance:

- Write SQL well, which reduces table scans and therefore the number of IO operations.
- Design indexes well to speed up IO.
- Store unchanging historical data separately to reduce the volume each query must touch.
- Add redundant fields to tables to avoid joins and cut IO reads and writes.
- Distribute data tables across different disks to improve IO throughput.

Other techniques, such as query caching and connection pooling, work on the same principle.
In short: the less unnecessary traffic between the database and the disk, the more efficient the database.
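The connection-pooling idea mentioned above can be sketched minimally. This is only an illustration of the principle, reusing a fixed set of expensive resources instead of creating one per request; `SimpleConnectionPool` and `PooledConnection` are hypothetical names, and a real project would use an established pool rather than hand-rolling one.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of a connection pool: connections are created once up
// front and handed back for reuse, instead of being opened and closed
// per request. PooledConnection is a stand-in for a real DB connection.
public class SimpleConnectionPool {
    public static class PooledConnection {
        final int id;
        PooledConnection(int id) { this.id = id; }
    }

    private final Deque<PooledConnection> idle = new ArrayDeque<>();

    public SimpleConnectionPool(int size) {
        for (int i = 0; i < size; i++) {
            idle.add(new PooledConnection(i)); // created once, reused forever
        }
    }

    // Borrow a connection; returns null when the pool is exhausted.
    public PooledConnection acquire() {
        return idle.poll();
    }

    // Hand the connection back for reuse instead of closing it.
    public void release(PooledConnection c) {
        idle.add(c);
    }

    public static void main(String[] args) {
        SimpleConnectionPool pool = new SimpleConnectionPool(1);
        PooledConnection a = pool.acquire();
        pool.release(a);
        PooledConnection again = pool.acquire();
        System.out.println("reused=" + (a == again)); // prints reused=true
    }
}
```

The saving is exactly the kind of IO the article is about: opening a database connection involves a network handshake and authentication, so reusing one avoids that cost on every request.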
2. Data Cache
Memory IO is naturally far faster than disk IO, and the point of data caching is to avoid disk operations, or at least to avoid the comparatively slow database operations. For the result data shown on pages, we usually use two cache zones: one in memory and one on disk.
For the memory cache, we use HttpRuntime.Cache directly. In this cache area we place templates and data, usually the data a page needs, generally stored in JSON format. For the expiration policy, we naturally choose NoAbsoluteExpiration.
When an entry has to be removed from the memory cache, we process the expired data further: we keep a set in the cache containing the signatures of the removed entries, and the corresponding data is written to a file on disk.
When a user requests data, we first check whether its signature is in the normal cache. If not, we check whether it is in the expired zone; if so, we read the disk file (which at least avoids the database overhead). Only if it is in neither do we query the database.
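The lookup order just described (memory cache, then expired-entry file on disk, then database) can be sketched as follows. This is an illustrative model only: the article's real implementation uses HttpRuntime.Cache in .NET, while `TwoTierCache`, its maps, and the `"db:"` fallback here are hypothetical.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Two-tier cache sketch: hot entries live in memory; entries evicted from
// memory are written to disk and remembered by signature, so a later
// request can be served from the file instead of hitting the database.
public class TwoTierCache {
    private final Map<String, String> memory = new HashMap<>();
    private final Set<String> expiredSignatures = new HashSet<>();
    private final Path dir;
    public int databaseHits = 0; // counts the most expensive path

    public TwoTierCache() {
        try {
            this.dir = Files.createTempDirectory("cache-demo");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public void put(String signature, String data) {
        memory.put(signature, data);
    }

    // Simulates the memory cache's removal step: move the entry to disk
    // and record its signature in the expired zone.
    public void evict(String signature) {
        String data = memory.remove(signature);
        if (data != null) {
            try {
                Files.writeString(dir.resolve(signature), data);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            expiredSignatures.add(signature);
        }
    }

    public String get(String signature) {
        String data = memory.get(signature);         // 1. normal memory cache
        if (data != null) return data;
        if (expiredSignatures.contains(signature)) { // 2. expired zone on disk
            try {
                return Files.readString(dir.resolve(signature));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
        databaseHits++;                              // 3. fall back to the database
        return "db:" + signature;
    }
}
```

The point of the middle tier is that even when memory is exhausted, a disk-file read is still cheaper than a full database round trip.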
3. Collection-Processing Code
Whether it is JavaScript on the page or Java or C# in the background, operations on collections and arrays are certainly the most frequent in today's business code, so paying attention to a few optimization details here can also improve performance:
```csharp
int[] arr = { 1, 3, 6, 7, 3, 6, 7, 3, 5 };

for (int i = 0, max = arr.Length; i < max; i++)
{
    System.Console.WriteLine(arr[i]);
}
```
Like many similar techniques, avoiding repeated evaluation of a collection's Length/Count helps in high-frequency collection operations; of course, this is only valid when no elements are added to or removed from the collection during iteration. Prefer arrays first, generic collections second, and consider ArrayList last. The reason for this ordering is, again, to reduce IO overhead.
There are many other code details that can improve efficiency, such as understanding how strings behave.
MVP Lin Yongjian pointed out that I did not express the above clearly. What I mean is: in my tests, re-checking Count on every iteration is slow; it is better to read Count once before the for loop. In addition, we try to use arrays wherever possible, because an array is given its size at initialization and is strongly typed. (I am not sure my wording is exact.)
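The same hoisting pattern as the C# snippet above, written in Java for the backend case; `LoopHoisting` and `sum` are illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

// Hoist the collection's size out of the loop condition so it is
// evaluated once, not on every iteration. Only safe when the collection
// does not change size while iterating.
public class LoopHoisting {
    public static long sum(List<Integer> values) {
        long total = 0;
        for (int i = 0, max = values.size(); i < max; i++) { // size() read once
            total += values.get(i);
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> values = new ArrayList<>();
        for (int i = 1; i <= 5; i++) values.add(i);
        System.out.println(sum(values)); // prints 15
    }
}
```

Whether the hoist measurably helps depends on the collection and the JIT, which is why the author frames it as something he observed in his own tests rather than a universal rule.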
4. Network Transmission
The background data must ultimately be transmitted to the browser, so reducing the bytes sent over the network is also key to throughput; put simply, this is optimizing network IO. In WebForms, reduce ViewState, or simply use MVC instead of WebForms, or control all state information yourself through HttpContext. We use ashx handlers, opening different ashx endpoints for different services, to improve performance: an ashx handler does not run the page's series of lifecycle actions, does not process its series of events, and does not load and parse ViewState for control-state management (restoring and updating control values, saving ViewState, and so on). It returns the operation result directly, consumes fewer server resources, and its response format is flexible. We use ashx very successfully on document-centric websites.
In addition, ashx endpoints are easy to isolate between developers.
Beyond the code itself, handling the images, CSS, and JS files a page needs sensibly, so as to reduce HTTP requests, also improves network IO efficiency: for example, merging images, and compressing JS and CSS. These simple measures do not change much individually, but anything that reduces pressure on the server under concurrency is worthwhile.
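Compression helps because the text assets a page loads are highly repetitive. As a sketch of the effect, gzip-compressing a repetitive stand-in for a CSS file (the names `GzipDemo` and the sample rule are made up for illustration) shrinks it dramatically:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Demonstrates why servers gzip text assets (JS/CSS/HTML): repetitive
// text compresses very well, so far fewer bytes cross the network.
public class GzipDemo {
    public static byte[] gzip(byte[] input) {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(input);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) {
        // A stand-in for a CSS/JS file: highly repetitive text.
        byte[] asset = ".button { color: red; }\n".repeat(200)
                .getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(asset);
        System.out.println("original=" + asset.length
                + " compressed=" + compressed.length);
    }
}
```

In practice the web server (IIS, nginx, etc.) does this transparently when the browser sends `Accept-Encoding: gzip`; the sketch only shows where the byte savings come from.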
5. Page Rendering and Experience
Optimize the page's HTML structure. Sometimes, to speed up rendering, you do not have to comply fully with W3C specifications: reduce div nesting and use fixed widths. Careful JavaScript details can also improve the experience. My tests in Chrome show that in many cases the network is much faster than rendering, so improving page processing is very effective for individual users.
6. Data Submission
Where reliability requirements allow, consider asynchronous processing or multiple threads. Database submission and web-service access can both use the asynchronous model, provided, of course, that it is reliable enough for the scenario.
Page Ajax is also an asynchronous method, and JS file loading can be asynchronous as well.
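The asynchronous-submission idea can be sketched with Java's CompletableFuture. Here `submitToDatabase` is a hypothetical stand-in for the real slow write; the point is only that the request thread hands the IO off and continues, which is appropriate when, as the author says, reliability requirements allow it.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// The caller fires off the writes and is free immediately; the slow IO
// runs on background threads instead of blocking the request.
public class AsyncSubmit {
    public static final AtomicInteger saved = new AtomicInteger();

    // Hypothetical stand-in for a slow database or web-service write.
    public static void submitToDatabase(String record) {
        saved.incrementAndGet();
    }

    public static void main(String[] args) {
        CompletableFuture<Void> f1 =
                CompletableFuture.runAsync(() -> submitToDatabase("order-1"));
        CompletableFuture<Void> f2 =
                CompletableFuture.runAsync(() -> submitToDatabase("order-2"));
        CompletableFuture.allOf(f1, f2).join(); // wait here only so we can print
        System.out.println("saved=" + saved.get()); // prints saved=2
    }
}
```

The trade-off is exactly the reliability caveat in the text: if the process dies before the background write completes, the submission is lost, so this pattern suits work that can tolerate (or separately guarantee) eventual completion.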
7. Locks
I'm too tired; I'll write this part later.
MVP Lin Yongjian also suggested using NoSQL. Yes, indeed, but I have not used it well enough to say anything useful about it.